Thursday, 16 April

19:49

Tell Congress: Don't Let Anyone Own The Law [EFF Action Center]

A large portion of the regulations we all live by (such as fire safety codes, or the national electrical code) are initially written—by industry experts, government officials, and other volunteers—under the auspices of standards development organizations (SDOs). Federal, state, or municipal policymakers then review the codes and decide whether to adopt them into law. The Pro Codes Act effectively endorses the claim that SDOs can “retain” copyright in codes, even after they are made law, as long as they make the codes available through a “publicly accessible” website – which means read-only, and subject to licensing limits.

That's bad for all of us. Anyone wishing to make the law accessible in a better format would find themselves litigating whether or not they are sheltered by the fair use doctrine – a risk that many won’t want to take.

We have a constitutional right to read, share and discuss the law. SDOs have already lost this battle in court after court, which have recognized that no one can own the law. Tell Congress you agree the law should be open to us all and urge them to reject this bill.

18:49

Rust 1.95.0 released [LWN.net]

Version 1.95.0 of the Rust language has been released. Changes include the addition of a cfg_select! macro, support for if let guards, which allow conditionals based on pattern matching, and many newly stabilized APIs. See the release notes for a full list of changes.

18:21

16:49

Jamie McClelland: Mailman3 has 2 databases. Whoops. [Planet Debian]

At May First we have been carefully planning our migration of about 1200 lists from mailman2 to mailman3 for almost six months now. We did a lot of user communications, had several months of beta testing with a handful of lists ported over, and everything was looking good. So we kicked off the migration!

But, about 15% of the way through, I started seeing sqlite lock errors. Wait, what? I had carefully re-configured mailman3 to use postgres, not sqlite. Well, yes, but apparently that was for the database managing the email list configuration, not the database powering the django web app, which, incidentally, also includes hundreds of gigabytes of archives. In other words, the one we really need in postgres, not sqlite.

Moving from sqlite to postgres

Well that sucks. We immediately stopped the migration to deal with this.

I noticed that the web is full of useful django instructions on how to migrate your database from one database to another. However, if you read the fine print, those convenient-looking “dumpdata / loaddata” workflows are designed to move the table definitions and a small amount of data. In our case, even after just 15% of our lists were moved, our sqlite database was about 30GB.

I considered some of the hacks to manage memory and try to run this via django, but eventually decided that pgloader was a more robust option. This option also allowed me to more easily test things out on a copy of our sqlite database (made while mailman was turned off). This way I could migrate and re-migrate the sqlite database over and over without impacting our live installation until I was satisfied it was all working.

My first decision was to opt out of pgloader’s schema creation. I used django’s schema creation tool by:

  • Turning off mailman3 and mailman3-web and changing the mailman web configuration to use the new postgresql database.
  • Running mailman-web migrate
  • Changing the mailman web configuration back to sqlite and starting everything again.

Note: I tried just adding new database settings in the mailman web configuration indexed to ‘new’ – django has the ability to define different databases by name, and then you can run mailman-web migrate --database new. But, during the migration, I caught django querying the sqlite database for some migrations that required referencing existing fields (specifically hyperkitty’s 0003_thread_starting_email). I didn’t want any of these steps to touch the live database, so I opted for the cleaner approach.
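For reference, the named-database mechanism mentioned above looks roughly like this in a django settings file. The paths, database names, and credentials below are illustrative stand-ins, not May First's actual configuration:

```python
# Sketch of django's named-database feature, as it might appear in
# mailman-web's settings. All names and paths here are illustrative.
DATABASES = {
    "default": {  # the live sqlite database
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "/var/lib/mailman3/web/mailman3web.db",
    },
    "new": {  # the postgres migration target
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mailmanweb",
        "USER": "mailmanweb",
        "HOST": "localhost",
        "PORT": "5432",
    },
}
```

With that in place, mailman-web migrate --database new builds the schema in postgres; the catch, as noted above, is that some migrations still read from the default database.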

Once I had a clean postgres schema, I dumped it so I could easily return to this spot.

Next I started working on our pgloader load file. After a lot of trial and error, I ended up with:

LOAD DATABASE
    FROM sqlite:///var/lib/mailman3/sqlite-postgres-migration/mailman3web.clean.backup.db
    INTO postgresql://mailmanweb:xxxxxxxxxxx@localhost:5432/mailmanweb

WITH data only,
    reset sequences,
    include no drop,
    disable triggers,
    create no tables,
    batch size = 5MB,
    batch rows = 500,
    prefetch rows = 50,
    workers = 2,
    concurrency = 1

SET work_mem to '64MB',
    maintenance_work_mem to '512MB'

CAST type datetime to timestamptz drop default drop not null,
    type date to date drop default drop not null,
    type int when (= precision 1) to boolean using tinyint-to-boolean,
    type text to varchar using remove-null-characters;

The batch, prefetch, workers and concurrency settings are all there to ensure memory doesn’t blow up.

I also discovered that I had to make some changes to the schema before loading data. Mostly truncating tables that the django migrate command populated to avoid duplicate key errors:

TRUNCATE TABLE django_migrations CASCADE;
TRUNCATE TABLE django_content_type CASCADE;
TRUNCATE TABLE auth_permission CASCADE;
TRUNCATE TABLE django_site CASCADE;

And I also had to change a column type. Apparently the mailman import process let through an attachment file name that exceeds the column length limit in postgres but was accepted by sqlite:

ALTER TABLE hyperkitty_attachment ALTER COLUMN name TYPE text;

When pgloader runs, we still get a lot of warnings because it wants to cast columns differently than django does. These are harmless (I was able to import the data without a problem).

And there are still a lot of warnings along the lines of:

2026-03-30T14:08:01.691990Z WARNING PostgreSQL warning: constraint "hyperkitty_vote_email_id_73a50f4d_fk_hyperkitty_email_id" of relation "hyperkitty_vote" does not exist, skipping

These are harmless as well. They appear because disable triggers also disables foreign key constraints. Without it, we wouldn’t be able to load tables that reference rows in tables that have not yet been populated.
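The load-order problem that disable triggers works around can be modeled in a few lines of Python using sqlite's own foreign-key enforcement. The table and column names are simplified stand-ins, not the real hyperkitty schema:

```python
import sqlite3

# Autocommit mode so the PRAGMA statements take effect immediately.
con = sqlite3.connect(":memory:", isolation_level=None)
con.executescript("""
    CREATE TABLE email (id INTEGER PRIMARY KEY);
    CREATE TABLE vote (
        id INTEGER PRIMARY KEY,
        email_id INTEGER REFERENCES email(id)
    );
""")

# With foreign keys enforced, loading a vote before its email fails.
con.execute("PRAGMA foreign_keys = ON")
try:
    con.execute("INSERT INTO vote (id, email_id) VALUES (1, 42)")
    failed = False
except sqlite3.IntegrityError:
    failed = True

# With enforcement off (the effect disable triggers has in postgres),
# the same insert succeeds, and the parent row can arrive later.
con.execute("PRAGMA foreign_keys = OFF")
con.execute("INSERT INTO vote (id, email_id) VALUES (2, 42)")
con.execute("INSERT INTO email (id) VALUES (42)")
```

This is why the order in which pgloader loads tables stops mattering once triggers are disabled.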

After all the tweaking, the import of our 30GB sqlite database took about 40 minutes.

Final Steps

I think the reset sequences from pgloader should take care of this, but just in case:

mailman-web sqlsequencereset hyperkitty mailman_django auth | mailman-web dbshell

And, just to ensure postgres is optimized, run this in the psql shell:

ANALYZE VERBOSE;

Last thoughts

I understand very well all the decisions the mailman3 devs made in designing the next version of mailman, and if I had been in the same place I might have made the same ones. For example, separating the code running the mailing list from the code managing the archives and the web interface makes perfectly good sense - many people might want to run just the mailing list part without a web interface. And building the web interface in django makes a lot of sense as well - why re-invent the wheel? I’m sure a lot of time and effort was saved by simply using the built in features you get for free with django.

But the unfortunate consequence of these decisions is that sys admins have a much harder time. Almost everyone wants the email lists along with the web interface and the archives. But nobody wants two different configuration files with different syntaxes and logic, not to mention two different command lines to use for maintenance and configuration with completely different APIs. Trying to understand how to change a default template or set list defaults requires a lot of research and usually you have to write a python script to do it.

I have finally come to the conclusion that mailman2 is designed for sys admins, while mailman3 is designed for developers.

Despite these shortcomings, I am impressed with the community and their quick and friendly responses to the questions of a confused sys admin. That might be more valuable than anything else.

16:35

Forgejo 15.0 released [LWN.net]

Version 15.0 of the Forgejo code-collaboration platform has been released. Changes include repository-specific access tokens, a number of improvements to Forgejo Actions, user-interface enhancements, and more. Forgejo 15.0 is considered a long-term-support (LTS) release, and will be supported through July 15, 2027. The previous LTS, version 11.0, will reach end of life on July 16, 2026. See the announcement and release notes for a full list of changes.

16:07

The Big Idea: Cameron Johnston [Whatever]

The Scientific Method is immensely helpful, but so is literal magic. Would science prove more powerful than wizardry? It’s tough to say, but author Cameron Johnston certainly speculates on the idea in the Big Idea for his newest novel, First Mage on the Moon. Read on to see how the Space Race might’ve happened with the help of a wizard’s staff.

CAMERON JOHNSTON:

For a bunch of wise folk that meddle with reality and break the rules of standard physics on a regular basis, wizards and mages in fantasy media seem a remarkably uncurious lot. Sometimes magic users are far more interested in other dimensions and eldritch creatures than in the mortal world they themselves inhabit. How many of them look up at the stars and wonder what they are, or gaze at the moon and ponder what that shining silver disc really is…and how they might get there?

First Mage On The Moon was born from a single Big Idea (OK, OK…the idle thought of a fantasy fan): Without science, how would wizards describe gravity? Inevitably, that grew arms and legs and tentacles and thingamabobs into: What would they make of outer space? How would they breathe in a spacecraft when they don’t even know what oxygen is or why air ‘goes bad’? What about aerodynamics? And a whole host of other questions I didn’t then have answers for. When you only have a magical understanding of the world and the closest thing to science is the semi-mystical and secretive practice of alchemy, well, then things get complicated if you want to build something to visit the moon. Magic is not going to solve everything if you fly straight up and try to hit a moving object like the moon, and don’t factor in the calculations for orbits, gravity… or indeed the speed/friction of re-entry.

Science is an amazing and collaborative process and Earth’s 20th-century Space Race was a species-defining moment, but what if that happened in a fantasy world of mages, golems, vat-grown killing machines and grinding warfare? What if a group of downtrodden mages, sick of building weapons of mass destruction for their oligarch overlords, decided to go rogue and divert war materials into building a vessel to go to the moon, the home of their gods, and ask for divine intervention in stopping the war? When you have no culture of shared science, where do you even begin?

All those thoughts and ideas stewed away in the back of my brain while I was writing my previous novel, The Last Shield. As all authors know, there comes a stage of writing a book when your brain goes “Ooh, look at the shiny new thing!” Very helpful, brain, coming up with magical rocket ships when I’m trying to write a book set in a fantasy version of the Scottish Bronze Age – thanks very much! That idea of wizard-science and magical engineering lodged there, immovable, and my next book just had to become First Mage On The Moon. Which was handy, as I was contracted to write another standalone novel.

While the US/USSR Space Race and the modern science of our very own Earth were inevitably a huge influence on my novel, so too were the theories and writings of its ancient thinkers. Around 500 BCE, Pythagoras proposed a spherical world, and Aristotle later wrote several arguments for the same theory, such as ships sailing over the horizon disappearing hull-first and different constellations being visible at different latitudes (all of which may have given the Phoenician sailors and navigators certain thoughts too). And then comes Eratosthenes, Chief Librarian of Alexandria, a very smart dude who was able to calculate the circumference of Earth by using two sticks in two locations and comparing the angles of their shadows. If those ancient Earth scholars could calculate such things, then surely fantasy mages, with all the magic at their disposal, could do more than fling fireballs at each other. There had to be some among them with the desire to explore beyond the bounds of myth and magic, gods and monsters, and given the opportunity to work with like minds to build something that has never been done before, they would surely take it…despite the risks.

Found family, magical engineering, and mad ideas of actual science in a magical world all came together to form First Mage On The Moon. As much as I love my morally grey characters in realms of swords and sorcery, it was deeply satisfying to write something that little bit different, a hopeful story about human ingenuity in an increasingly fraught world. 


First Mage On The Moon: Amazon|Amazon UK|Barnes & Noble|Bookshop|Powell’s|Waterstones

Author socials: Website|Bluesky|Facebook|Instagram

16:00

What’s up with window message 0x0091? We’re getting it with unexpected parameters [The Old New Thing]

A customer, via their customer liaison, reported quite some time ago that their program stopped working on Windows XP. (I told you it was quite some time ago.)

The customer’s investigations revealed that the problem occurred because their window was receiving message 0x0091, and the parameters are wrong. Who is sending this message with the wrong parameters?

Okay, first of all, how do you even know that the parameters are wrong? The message is not listed in winuser.h or in MSDN (as it was then called).

We explained that message 0x0091 is an internal message that they should just pass to DefWindowProc unchanged. What makes the customer think that the message is being received with the wrong parameters?

The customer said that their program was using that message as a custom message, and now, in addition to getting it when their program sends the message, they are also getting spurious copies of the message with WPARAM and LPARAM values that don’t correspond to any values that the program itself sent.

We informed them that they shouldn’t have been using that message for their own purposes. Those messages are in the system-defined range, which means that they are off-limits to applications. If they want to send a private message, use one in the application space.

It’s like finding an empty closet in an office building and using it to store your bicycle, but now, when you come to work, you find that the closet is filled with other stuff and there’s no room for your bicycle any more. “Why is there stuff in that closet?” Because it wasn’t your closet in the first place.

The liaison took our advice back to the customer, but mentioned that the customer probably won’t like that answer. The message 0x0091 was not the only message they were using. They also used other messages below WM_USER, and they were all causing problems; they just wanted to start their investigation with 0x0091.

Oh well. But I hope it’s as simple as just changing a macro definition from

#define WM_MYSECRETMESSAGE 0x0091

to

#define WM_MYSECRETMESSAGE (WM_APP + 1020) // or something

Pick a message in the range available to applications for custom use.


15:49

Generative AI in the Real World: Aishwarya Naresh Reganti on Making AI Work in Production [Radar]

As the founder and CEO of LevelUp Labs, Aishwarya Naresh Reganti helps organizations “really grapple with AI,” and through her teaching, she guides individuals who are doing the same. Aishwarya joined Ben to share her experience as a forward-deployed expert supporting companies that are putting AI into production. Listen in to learn the value all roles—from data folks and developers to SMEs like marketers—bring to the table when launching products; how AI flips the 80-20 rule on its head; the problem with evals (or at least, the term “evals”); enterprise versus consumer use cases; and when humans need to be part of the loop. “LLMs are super powerful,” Aishwarya explains. “So I think you need to really identify where to use that power versus where humans should be making decisions.” Watch now.

About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2026, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

Check out other episodes of this podcast on the O’Reilly learning platform or follow us on YouTube, Spotify, Apple, or wherever you get your podcasts.

Transcript

This transcript was created with the help of AI and has been lightly edited for clarity.

00.58
All right. So today we have Aishwarya Reganti, founder and CEO of LevelUp Labs. Their tagline is “Forward-deployed AI experts at your service.” So with that, welcome to the podcast.

01.13
Thank you, Ben. Super excited to be here.

01.16
All right. So for our listeners, “forward-deployed”—that’s a term I think that first entered the lexicon mainly through Palantir, I believe: forward-deployed engineers. So that communicates that Aishwarya and team are very much at the forefront of helping companies really grapple with AI and getting it to work. So, first question is, we’re two years into these AI demos. What actually separates a real AI product from a good demo at this point?

01.53
Yeah, very timely question. And yeah, we are a team of forward-deployed experts. A bit of background to tell you why we’ve probably seen quite a few demos fail: We work with enterprises to build a prototype for them and educate them about how to improve that prototype over time. I think one of the biggest things that differentiates a good AI product is how much effort a team is spending on calibrating it. I typically call this the 80-20 flip. 

A lot of the folks who are building AI products as of today come from a traditional software engineering background. And when you’re building a traditional product, a software product, you spend 80% of the time on building and 20% of the time on what happens after building, right? You’re probably seeing a bunch of bugs, you’re resolving them, etc. 

But in AI, that kind of gets flipped. You spend 20% of the time maybe building, especially with all of the AI assistants and all of that. And you spend 80% of the time on what I call “calibration,” which is identifying how your users behave with the product [and] how well the product is doing, and incorporating that as a flywheel so that you can continue to improve it, right? 

03.11
And why does that happen? Because with AI products, the interface is very natural, which means that you’re pretty much speaking with these products, or you’re using some form of natural language communication. That means there are tons of ways users could talk to and approach your product, versus just clicking buttons and all of that, where workflows are so deterministic, which is why you open up a larger surface area for errors. 

And you will only understand how your users are behaving with the system as you give them more access to it, right? Think of anything as mainstream as ChatGPT. How users interact with ChatGPT today is so much different from how they did, say, three years ago or when it was released in November 2022. So what differentiates a good product is that idea of constant calibration to make sure that it’s getting aligned with the users and also with changing models and stuff like that. So the 80-20 flip I think is what differentiates a good product from just a prototype.

04.14
So actually this is an important point in the sense that the persona has changed as to who’s building these data and AI products, because if you rewind five years ago, you had people with some knowledge of data science, ML, and now because it’s so accessible, developers—actually even nondevelopers, vibe coders—can start building. So with that said, Aishwarya, what do these kinds of nondata and AI people still consistently get wrong when they move from that traditional mindset of building software to now AI applications?

05.05
For one, I truly am one of those people who believes that AI should be for everyone. Even if you’re coming from a traditional machine learning background, there’s so much to catch up on. I moved from a team in AWS in 2023 where I was working with traditional natural language processing models—I was a part of the Alexa team. And then I moved into an org called GenAI Innovation Center, where we were building generative AI solutions for customers. And I feel like there was so much to learn for me as well. 

But if there’s one thing that most people get wrong and maybe AI and traditional ML folks get right, it’s to look at your data, right? When you’re building all of these products, people just assume that “Oh, I’ve tested this for a few use cases” and then it seems to work fine, and they don’t pay so much attention to the kind of data distribution that they would get from their users. And given this obsession to automate everything, people go like, “OK, I can maybe ask an LLM to identify what kind of user patterns I’m seeing, build evals for itself, and update itself.” It doesn’t work that way. You really need to spend the time to understand workflows very well, understand context, understand all this data, pretty much. . . 

I think just taking the time to manually do some of the setting up work for your agents so that they can perform at their maximum is super underrated. Traditional ML folks tend to understand that a little better because most of the time we’ve been doing that. We’ve been curating data for training our machine learning models even after they go into production. There’s all of this identifying outliers and updating and stuff. But yeah, if there’s one single takeaway for anybody building AI products: Take the time to look at your data. That’s the most important foundation for building them.

07.01
I’ll flip this a little bit and give props to the traditional developers. What do they get right? In other words, traditional developers write code; some of them write tests, run unit tests [and] integration tests. So they had something to build on that maybe the data scientists who were not writing production code were not used to doing. So what do the traditional developers bring to the table that the data and ML people can learn from?

07.40
That’s an interesting question because I don’t come from a software background and I just feel traditional developers have a very good design thinking: How do you design architectures so that they can scale? I was so used to writing in notebooks and kind of just focusing so much on the model, but traditional developers treat the model as an API and they build everything very well around it, right? They think about security. They think about what kind of design makes sense at scale and all of that. And even today I feel like so much of AI engineering is traditional software engineering—but with all of the caveats that you need to be looking at your data. You need to be building evals which look very different. But if you kind of zoom out and see, it’s pretty much the same process, and everything that you do around the model (assuming that the model is just a nondeterministic API), I think traditional software engineers get it like bang on.

08.36
You recently wrote a post about evals, which was quite interesting actually, [arguing] that it’s a bit of an overused and poorly defined term. I agree with the thesis of the post, but were you getting frustrated? Is that the reason why you wrote the post? [laughs] What was the genesis of the post? 

09.03
The baseline is that most of my posts come out of frustration and noise in this space. It just feels like, if you kind of see the trajectory. . . In November 2022, ChatGPT was out, and [everybody was] like, “Oh, chat interfaces are all you need.” And then there was this concept of retrieval-augmented generation, and they go, “Oh, RAG is all you need. Chat just doesn’t work.” And then there was this concept of agents, and, “Agents are all you need; evals are all you need.” So it just gets super annoying when people hang on to these concepts and don’t really understand the depth of it. 

Even now I think there are tons of people who go like “Oh, RAG is dead. It’s not going to be used” and stuff, and there’s so much nuance to it. And with evals as well. I teach a lot of courses: I teach at universities; I also have my own courses. I feel like people just stuck to the term, and they were like “Oh, there is this use case I’m building. I need hundreds of evals in order to make sure that it’s tested very well.” And they just heard the fact that “Oh, evals are what you need to do differently for AI products” and really didn’t understand in depth like what evals mean—how you need to build a flywheel around it, and the entire you know act of building a product, calibrating it, and building a set of evaluations and also doing some A/B testing online to understand how your users are behaving with it. All of that just went into one term “evals,” and people are just like throwing it around everywhere, right?

10.35
And there’s also this confusion around model eval versus product eval, which is all of these frontier companies build evals on their models to make sure that they understand where they are on the leaderboard. And I was speaking to someone one day, and they went like, “Oh, GPT-5 point something has been tested on a particular eval dataset, which means it’s the best for my use case, so I’m going to be using it.” And I’m like, “That’s not the evals that you should be worrying about, right?” So just overloading so much into a term and hyping it up is kind of what I felt was annoying. And I wanted to write a post to say that evals is a process. It’s a long process. It’s pretty much the process of building something and calibrating it over time. And there are tons of components to it, so don’t kind of try to stuff everything in a word and confuse people. 

I’ve also seen people who do things like, “Oh, I’m going to build hundreds of evals” and maybe 10 of them are actionable. Evals also need to be super actionable: What is the information you can get from them, and how can you act on that? So I kind of stuffed all of that frustration into the post to kind of say it’s a longer process. There’s so much nuance in it. Don’t try to water that down.

11.48
So it seems like this is an area where the people that were from the prior era—the people building ML and data science products—maybe could bring something to the table, right? Because they had experience, I don’t know, shipping recommendation engines and things like that. They have some prior notion of what continuous evaluation and rigorous evaluation brings to the table. 

Actually I was talking to someone about this a few weeks ago in the sense that maybe the data scientists actually have a growing employment opportunity here because basically what they bring to the table seems increasingly important to me. Given that code is essentially free and discardable, it seems like someone with a more rigorous background in stats and ML might be able to distinguish themselves. What do you think?

12.56
Yes and no, because it’s true that machine learning folks and data scientists understand data very well, but the way you build evals for these products is so different from how you would build, say, your typical metrics (accuracy, F-score, and all of that) that it takes quite some thinking to extend that and also some learning to do. . .

13.21
But at least you might actually go in there knowing that you need it.

13.27
That is true, but I don’t think that’s a super. . . I’ve seen very good engineers pick that up as well because they understand at a design level “What are the metrics I need to be measuring?” So they’re very outcome focused and kind of enter with that. So one: I think everybody has to be more coachable—not really depend on things that they learned like X years ago, because things are changing so quickly. But I also believe that whenever you’re building a product, it’s not really one set of folks that have the edge. 

Another maybe distribution that is completely different is just subject-matter experts, right? When you’re building evals, you need to be writing rubrics for your LLM judges. Simple example: Let’s say you’re building a marketing pipeline for your company, and you need to write copy—marketing emails or something like that. Now even if I come from a data science background, if I were thrown at that problem, I just don’t understand what to look for and how to get closer to a brand voice that my company would be satisfied with. But I really need a marketing expert to kind of tell me “This is the brand voice we use, and these are the evals that we can build, or this is what the rubric should look like.” So it should almost be like a cross-functional thing. I feel like each of us has different pieces to that puzzle, and we need to work together. 

14.42
That kind of also brings me to this other thing of collaborating in a much tighter manner [than] before. Before it was like, “OK, machine learning folks get data; they build models; and then there is a separate testing team; there is a separate SME team that’s going to look at how this product is behaving.” And now you cannot do that. You need to be optimizing for the same feedback loop. You need to be talking a lot more with all of the stakeholders because even when building, you want to understand their perspective.

15.14
So it seems also the case that as more people build these things, they realize that actually. . . You know sometimes I struggle with the word “eval” in the sense that maybe the right word is “optimize,” because basically what you really want is to understand “What am I optimizing for?” Obviously reliability is one of them, but latency and cost are also important factors, right? So it’s just a discussion that you’re increasingly coming across, and people are recognizing that there are trade-offs and they have to balance a bunch of things.

15.57
Yes, definitely. I don’t see it being discussed heavily mainstream. But whenever I approach a problem, it’s always that, right? It’s performance, effort, cost, and latency. And all of these four things are kind of. . . You’re trying to balance each of them and trade off each of them. And I always say, start off with something that’s very low effort so that you kind of have an upper ceiling to what can be achieved. Then optimize for performance. 

Again, don’t optimize for cost and latency when you get started because you just want to see the realm of what’s possible, to make sure that you can build a product and it can work fine. And cost and latency [are] something that ought to be optimized for—even when building for enterprises—after we’ve had a decent prototype that can do well on evals. Right now, if I built something with, say, a good mid-tier model and it can hit all of my eval datasets, then I know that this is possible, and now I can optimize for the latency and cost based on the constraints. But always follow that pyramid, right? Go with [the] lowest effort. Try to optimize for performance. And then cost and latency is something that. . . There are tons of tricks you can do. There’s caching; there’s using smaller models and all of that. That’s kind of a framework that I typically use.

17.08
In prior generations of machine learning, I think a lot of focus was on accuracy to some extent. But now increasingly, because we’re in this kind of generative AI world, it’s more likely that people are interested in reliability and predictability in the following sense: Even if I’m only 10% accurate, as long as I know what that 10% is, I would prefer that [to] a model that’s more accurate but I don’t know when it’s accurate. Right?

17.47
Right. That’s kind of the boon and bane of generative AI models. I guess the fact that they can generalize is amazing, but sometimes they end up generalizing in ways that you wouldn’t want them to. And whenever we work on enterprise use cases, I think for us always in my mind—something that I want to tell myself—is if this can be a workflow, don’t make it autonomous if it can solve a problem with a simple LLM call and if you can audit decisions. For instance, let’s say we’re building a customer support agent. You could literally build it in five minutes: You can throw SOPs at your customer support agent and say “OK, pick up the right resolution, talk to the user, and that’s it.” Building is very cheap today. I can literally have Claude Code build it up in a few minutes. 

But something that you want to be more intentional about is “What happens if things go wrong? When should I escalate to humans?” And that’s where I would just break this into a workflow. First, identify the intent of the human and then give me a draft—almost be a copilot for me, where I can collaborate. And then if that draft looks good, a human should approve it so that it goes further. 

Right now, you’re introducing auditability at each point so that you as a human can make decisions before, you know, an agent goes up and messes up things for you. And that’s also where your design decisions should really take over. Like I could build anything today, but how much thinking am I doing before that building so that there’s reliability, there’s auditability, and all of those things. LLMs are super powerful. So I think you need to really identify where to use that power versus where humans should be making decisions.
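A rough sketch of that gated workflow, with invented intents, templates, and rules; the point is that sensitive intents produce a draft and wait for human approval rather than auto-sending.

```javascript
// Hypothetical intent classifier; real systems would use a model call.
function classifyIntent(message) {
  if (/refund|charge/i.test(message)) return "billing";
  if (/password|login/i.test(message)) return "account";
  return "general";
}

// Draft a reply from per-intent templates (illustrative only).
function draftReply(intent) {
  const templates = {
    billing: "I can help with that charge. Could you share the order ID?",
    account: "Let's reset your password. I've sent a link to your email.",
    general: "Thanks for reaching out! How can I help?",
  };
  return templates[intent];
}

// Every decision is returned so it can be logged and audited, and
// sensitive intents are queued for approval instead of auto-sent.
function handleTicket(message, requiresApproval = new Set(["billing"])) {
  const intent = classifyIntent(message);
  const draft = draftReply(intent);
  const needsHuman = requiresApproval.has(intent);
  return { intent, draft, status: needsHuman ? "pending_approval" : "sent" };
}
```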

19.28
And you touched on the notion of human auditors or humans in the loop. So obviously people also try to balance LLM as judge versus human in the loop, right? Obviously there’s no one piece of advice, but what are some best practices around how you demarcate between when to use a human and when you’re comfortable using another model as a judge?

20.04
A lot of this usually depends on how much data you have to train your judge, right? I feel humans have this problem, which is: Sometimes you can do a task but you can’t explain why you arrived at that decision in a very structured format. I can today take a look at an article and tell you. . . Especially, I write a lot on Substack and LinkedIn; this is a super personal use case. If you give me an article and ask me, “Ash, will this go viral on LinkedIn?” I can tell you yes or no for my profile, right, because I’ve done it for so many years. But if you ask me, “How did you make that decision?” I probably cannot codify it and write it down as a bunch of rubrics. Which is again, when you translate this to an LLM judge, “Can I build an LLM that can tell me if a post will go viral or not?” Maybe not, because I just don’t have all the constraints that I use as a human when I make decisions. 

Now, take this to more production-like use cases or enterprise-like use cases. You want to have a human judge until you can codify or you can create a framework of how to evaluate something and you can write that out in natural language. And what that means is you maybe want to take 100 or 200 utterances and say, “OK, does this make sense? What’s the reasoning behind why I graded it a certain way?” And you can feed all of that information into your LLM judge to finally give it a set of rubrics and build your evals. But that’s kind of how you make a decision, which is “Do we have enough information to provide to an LLM judge that it can replace human judgment?” 

But otherwise don’t do it—if you have very vague, high-level ideas of what good looks like, you probably don’t want to go to an LLM judge. Even when building your systems, I would always recommend that your first pass when you’re doing your eval should be judged by a human, and you should also ask them to give you reasoning as to why they judged it, because that reasoning is so important for training your LLM judges.
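One way to picture that bootstrapping step, with invented example data: the human grades and their reasoning become few-shot rubric material in the judge prompt.

```javascript
// Hypothetical human-graded utterances; in practice these would be the
// 100-200 examples mentioned above, each with a grade *and* reasoning.
const humanGraded = [
  { output: "The refund was issued on May 2.", grade: "pass",
    reasoning: "Specific, answers the question, cites a date." },
  { output: "We will look into it.", grade: "fail",
    reasoning: "Vague, no resolution, no next step." },
];

// Fold the graded examples into the judge prompt so the LLM judge
// inherits the human's criteria rather than inventing its own.
function buildJudgePrompt(gradedExamples, candidate) {
  const rubric = gradedExamples
    .map((ex) => `Output: ${ex.output}\nGrade: ${ex.grade}\nWhy: ${ex.reasoning}`)
    .join("\n\n");
  return [
    "You are grading customer-support replies.",
    "Graded examples with reasoning:",
    rubric,
    `Now grade this output: ${candidate}`,
  ].join("\n\n");
}
```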

21.58
What are some signs that you look for? What are signals that you look for when one of these AI applications or systems go live? What are some of the signals you look for that [show] maybe the quality is degrading or breaking down?

22.18
It really depends on the use case, but there are a lot of subtle signals that users will give you, and you can log them, right? Things like “Are users swearing at your product?” That’s something we always use, right? “What kind of words are they using? How many conversation turns if it’s a chatbot, right?” Usually when you’re building your chatbot, you identify that the average number of turns is 10, but it turns out that customers are having only two turns of conversation. That kind of means that they’re not interested in talking to your chatbot. Or sometimes they’re having 20 turns, which means they’re probably annoyed, which is why they’re having longer conversations. 

There are typical things: You know, ask your user to give a thumbs up or thumbs down and all of that, but we know that feedback kind of doesn’t. . . People don’t give feedback unless they’re annoyed at something. So you can have those as well. If you’re building a coding agent such as Claude Code, very obvious logging you can do is “Did the user go and change the code that it generated?” which means it’s wrong. So it’s very specific to your context, but really think of ways you can log all of this behavior, and you can log anomalies. 

Sometimes just getting all of these logs and doing some topic clustering, which is “What are our users typically talking about, and do any of those show signs of frustration? Do they show signs of being annoyed with the system?” and things like that. You really need to understand your workflows very well so that you can design these monitoring strategies.
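A toy version of those signals, with made-up thresholds and a deliberately tiny swear list; the session field names are hypothetical.

```javascript
// Illustrative frustration vocabulary; a real system would use a fuller
// lexicon or a classifier.
const PROFANITY = /\b(damn|wtf|useless)\b/i;

// Derive per-session signals: turn counts far outside the expected
// band, swearing, and whether the user edited generated output.
function sessionSignals(session, expectedTurns = 10) {
  const turns = session.messages.length;
  return {
    tooShort: turns < expectedTurns / 2,  // gave up early?
    tooLong: turns > expectedTurns * 2,   // going in circles?
    swearing: session.messages.some((m) => PROFANITY.test(m)),
    editedOutput: Boolean(session.userEditedGeneratedCode),
  };
}

// Fraction of sessions showing any frustration signal at all.
function frustrationRate(sessions) {
  const flagged = sessions.filter((s) =>
    Object.values(sessionSignals(s)).some(Boolean)
  ).length;
  return flagged / sessions.length;
}
```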

23.50
Yeah, it’s interesting because I was just on a chatbot for an airline, and I was surprised how bad it was, in the sense that it felt like a chatbot of the pre-LLM era. So give us kind of your sense of “Are these chatbots now really being powered by foundation models or. . .?” I mean because I was just shocked, Aishwarya, about how bad it was, you know? So, as far as you know, are enterprises really deploying these generative AI foundation models in consumer-facing apps?

24.41
Very few. To just give you a quick stat that might not be super correct: 70% to 80% of the engagements that we take up at LevelUp Labs happen to be productivity and ops focused rather than customer focused. And the biggest blocker for that has always been trust and reliability, because if you build these customer-facing agents [and] they make one mistake, it’s enough to put you in the news media or land you in bad PR. 

But I think what good companies are doing as of today is doing a phased approach, which is they have already identified buckets that can be completely autonomous versus buckets that would require humans to navigate, right? Like this example that you gave me, as soon as a user comes up with a query, they have a triaging system that would determine if it should go to an AI agent versus a human, depending on the history of the user, depending on the kind of query. (Is it complicated enough?) Right? Let’s say Ben has this history of. . .

25.44
Hey, hey, I had great status on this airline.

25.47
[laughs] Yeah. So it’s probably not you, but just the kind of query you’re coming up with and all of that. So they’ve identified buckets where automation is possible, and they’re doing it, and they’ve done that because of past behavior data, right? What are the low-hanging fruits that we could automate versus escalate to humans? I have not seen a lot of these chat systems that are completely taken over by agents. There’s always some human oversight and very good orchestration mechanisms to make sure that customers are not affected.
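A minimal sketch of such a triage step; the scoring rules, weights, and field names are invented for illustration.

```javascript
// Route each incoming query to the bot or a human based on query
// complexity and user history. Thresholds are hypothetical.
function triage(query, user) {
  let score = 0;
  if (query.length > 200) score += 2;                    // long, complex ask
  if (/legal|complaint|cancel/i.test(query)) score += 3; // sensitive topic
  if (user.pastEscalations > 2) score += 2;              // history of trouble
  if (user.highStatus) score += 1;                       // white-glove tier
  return score >= 3 ? "human" : "agent";
}
```

In a real deployment the weights would come from past behavior data, exactly as described: the buckets where automation already works get low scores, and everything else escalates.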

26.16
So you mentioned that you mostly are in the technical and ops application areas, but I’ll ask you this question anyway. To what extent do legal things come up? In other words, I’m about to deploy this model. I know I have guardrails, but honestly, just between you and me, I haven’t gone through the proper legal evaluation, you know? [laughs] So in other words, legality or compliance—anything to do with laws—do they come up at all in your discussions with companies?

26.59
As an external implementation team, I think one thing that we do with most companies is give them a high-level overview of the architecture we’ll be building, the requirements, and ask them to do a security and legal review so that they’re okay with it, because we’ve had experiences in the past where we pretty much built out everything and then had the CISO come in and say, “OK, this doesn’t fall into what we could deploy.” So many companies make the mistake of not really involving their governance and compliance folks in the beginning and end up scrapping entire projects. 

I am not an expert who knows all of these rules and legalities, but we always make sure that they understand: “Where is the data coming from? Do we have any issues productionizing this?” and all of that, but we haven’t really worked. . . I mean I don’t have a lot of background on how to do this. We’re mostly engineering folks, but we make sure that we have a sign-off so that we are not kind of landing in surprises.

28.07
Yeah, the reason I bring it up is obviously, now that everything is much more democratized, more people can build—so in reality people can move fast and literally break things, right? So I just wonder if there’s any discussion at all. It sounds like you are proactive, mostly out of experience, but I wonder if regular teams are talking about this. 

Speaking of which, you brought up earlier leaderboards—obviously I’m guilty of this too: “I’m about to build something. OK, let me look at a leaderboard.” But, you know, I’m not literally going to take the leaderboard’s advice, right? I’m going to still kick the tires on the specific application and use case. But I’m sure though, in your conversations, people tell you all sorts of things like, “Hey, we should use this because I saw somewhere that this is ranked number one,” right? So is this still a frustration on your end, or are people much more savvy now?

29.19
For one, I want to quickly clarify that it’s not wrong to look at a leaderboard. It’s always. . . You know, you get a high-level idea of “Who are your best competitors at this point?” But what I have a problem with is being so obsessed with just that leaderboard that you don’t build evals for yourself.

29.34
In my experience, when we work with a lot of these companies, I think over the past two years the discussion has really shifted away from the model for two reasons: One is most companies already have existing partnerships. They’re either working with a major model provider and they’re OK doing that now just because all of these model providers are racing towards feature parity, leaderboard success, and all of that. If Anthropic has something, you know, if their model is performing well on a leaderboard today, Gemini and OpenAI will probably be there in a week. So people are not too concerned about model performance. They know that in a couple of weeks, that will kind of be built into other models. So they’re not worried about that. 

And two is companies are also thinking much more about the application layer right now. There’s so much discussion around all of these harnesses like Claude Code, OpenClaw, and stuff like that. So I’ve not seen a lot of complaints on “Oh, this is the model that we should be using.” It seems like they have a shared understanding of how models perform. They want to optimize the harness and the application layer much more.

30.48
Yeah. Yeah. Obviously another one of these buzzwords is “harness engineering,” and whatever you think about it, the one good thing is it really elevates the notion that you should worry about the things around the model rather than the model itself. 

But speaking of. . . I guess I’m kind of old school in the sense that I want to still make sure that I can swap models out, not necessarily because I believe one model is better than the other but one model may be cheaper than the other, right? 

And at least up until recently—I haven’t had this conversation in a while—it seemed to me that people got stuck on a model because their prompts were so specific for a model that porting to another model seemed like a lot of work. But nowadays though you have tools like DSPy and GEPA that it seems like you can do that more easily. So what’s your sense of model portability as a design principle—model neutrality?

32.06
For one, I think the gap between models is much more exaggerated for consumer use cases just because people care quite a bit about the personality, about how the model…

32.22
No, I care about latency and cost.

32.24
Yeah. In terms of latency and cost, right, most of the model providers pretty much are competing to make sure they are in the market. I don’t know. Do you think that there are models. . .

32.35
Well, I think that you can still get good deals with Gemini. [laughs]

32.40
Interesting.

32.41
But honestly, I use OpenRouter and OpenCode. So I’m much more kind of. . . I don’t want to get locked into a single [model]. When I build something, I want to make sure that I build in a way that I can move to a different model provider if I have to. But it doesn’t sound like you think that this is something that people worry about right now. They’re just worried about building something usable, and then we can worry about that later.

33.12
Yes. And again, I come from a very enterprise point, like “What are companies thinking about this?” And like I said, I’m not seeing a lot of competition for model neutrality because these companies have deals with vendors and they’re okay sticking with the same model provider. 

Now, when it comes to consumers, like if you’re building something for the kind of use cases that you were saying, Ben, I feel that, like I said, personality is super important for consumer builders. And I still think we’re not at a point where you can easily swap out models and be like, “OK, this is going to work as well as before,” just because you have over time learned how the model behaves. So you’ve kind of gotten calibrated with these models, and these models also have very specific personalities. So there’s a lot of, you know, reengineering that you have to do.

34.07
And when I say reengineering, it just might mean changing the way your prompts are written and stuff like that. It will still functionally work, which is why I say that enterprises don’t care about this much because the kind of use cases I see are like document processing or code generation, in which case functionality is of much more importance than personality. But for consumer use cases, I don’t think we’re at a point—to your point on building with OpenRouter, you can do that, but I think it’s a lot of overhead given that you’ll have to write specific prompts for all of these models depending on your use case. 

I recently ported my OpenClaw from Anthropic to OpenAI because of all of the recent things, and I had to change all of my SOUL.md files, USER.md files, so that I could kind of set the behavior. And it [took] quite some time to do it, and I’m still getting used to interacting with OpenClaw using OpenAI because it seems like it makes different mistakes than what Anthropic would do. 

35.03
So hopefully at some point [the] personalities of these models will converge, but I do not think so, because this is not a capability problem. It’s more of design choices that these model providers have made while building these models. So I don’t see a time where. . . We’re already at a point where capability-wise most models are getting closer, but personality-wise I don’t think model vendors would prefer to converge them because these are kind of your spiky edges which will make people with a certain personality gravitate towards your models. You don’t want to be making it like an average.

35.38
So in closing, you do a bit of teaching as well, right? One of the things I’ve really paid attention to is, in my conversations with people who are very, very early in their career, maybe still looking for the first job, literally, there’s a lot of worry out there. I mean, not necessarily if you’re a developer and you have a job—as long as you embrace the AI tools, you’re probably going to be fine. It’s just getting to that first job is getting harder and harder for people. 

And unfortunately, you need that first job to burnish your credentials and your résumé. And honestly, I think companies also neglect the fact that this is their pipeline for talent within the company as well: You have to have the top of the funnel of your talent pipeline. So what advice do you give to people who are literally still trying to get to that first job?

36.51
For one, I have had a lot of success with hiring young folks because I think they are very agent native. I call them agent-native operators. If you’ve been working in software, in IT, for about 10 years or so, like me, you’ve gotten used to certain workflows without using AI. I feel like we’re so stuck in that old mindset that I really need someone who’s agent native to come and tell me, “Hey you could literally ask Claude Code to do this.” So I’ve had a lot of luck hiring folks who are early career because they are very coachable, one, and two, they just understand how to be agent native. 

So my suggestion would still be around that: Be a tinkerer. Try to find out what you can do with these tools, how you can automate them, and be extremely obsessed with designing and thinking and not really execution, right? Execution is kind of being taken over by agents. 

So how do you really think about “What can I delegate?” versus “What can I augment?” and really sitting in the position of almost being an agent manager and thinking “How can you set up processes so that you can make end-to-end impact?” So just thinking a lot around those lines—and those are the kind of people that we’d like to hire as well. 

And if you see a lot of these latest job roles, you’ll also see roles blurring, right? People who are product managers are expected to also do GTM, also do a bit of engineering, and all of that. So really understand the stack end to end. And the best way to do it, I feel, is build a product of your own [and] try to sell it. You’ll get to see the whole thing. [That] doesn’t mean “Oh, stop looking for jobs—go become an entrepreneur” but really understanding workflows end to end and making that impact and sitting at the design layer will be super valued is what I think.

38.34
Yeah, the other thing I tell people is you have interests, so go deep in your interests and build something in whatever you’re interested in. Domain knowledge is going to be valuable moving forward, but also you end up building something that you would want to use yourself, and you learn a lot of things along the way, and then maybe that’s how you get your name out there, right?

38.59
Exactly. Solving for your own problem is the best advice: Try to build something that solves your own pain point. Try to also advocate for it. I feel like social media and all of this is so good at this point that you can really make a mark in nontraditional ways. You probably don’t even have to submit a job application. You can have a GitHub repository that gets a lot of stars—that might land you a job. So think of all of these ways to bring yourself more visibility as you build so that you don’t have to go through your typical job queue.

39.30
And with that, thank you, Aishwarya. 

39.32
Thank you.

14:28

CodeSOD: We'll Hire Better Contractors Next Time, We Promise [The Daily WTF]

Nona writes: "this is the beginning of a 2100 line function."

That's bad. Nona didn't send us the entire JavaScript function, just three early lines, which definitely raise concerns:

if (res.length > 0) {
  await (function () {
    return new Promise((resolve, reject) => {

We await a synchronous function which returns a promise, passing an executor function to the Promise constructor. As a general rule, you don't construct promises directly; you let asynchronous code generate them and pass them around (or await them). It's not a thing you never do, but it's certainly suspicious. It gets more problematic when Nona adds:

This function happens to contain multiple code repetition snippets, including these three lines.

That's right, this little block appears multiple times in the function, inside the anonymous function being passed to the Promise constructor.

No, the code does not work in its current state. It's unclear what the 2100 line function was supposed to do. And yes, this was written by lowest-bidder third-party contractors.
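For contrast, a minimal sketch of the conventional shape (names hypothetical): an `async` function already returns a promise, so the explicit constructor is only worth reaching for when adapting a callback-style API.

```javascript
// An async function returns a promise on its own; nothing to wrap by hand.
async function processRows(res) {
  return res.map((row) => row.toUpperCase());
}

// The explicit Promise constructor earns its keep only when adapting a
// callback-style API, e.g. a timer:
function delay(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function main(res) {
  if (res.length > 0) {
    await delay(1);          // await the promise; don't rebuild it
    return processRows(res); // awaiting the async call is enough
  }
  return [];
}
```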

Nona adds:

I am numb at this point and know I gotta fix it or we lose contracts

Management made the choice to "save money" by hiring third parties, and now Nona's team gets saddled with all the crunch to fix the problems created by the "savings".


14:21

WordPress is a monoculture [Scripting News]

I've been designing and developing software like WordPress for over thirty years. I have strong opinions about where the product should have gone, but mostly I've not been talking about that, because I don't want to interfere with what Matt is doing.

I've known him since he was a teenager in Silicon Valley, a boy wonder to whom the web has always been there, whereas to people my age, it was a miracle that came along to put down all the dominant BigCo's who made it impossible for individuals to create.

But I've never believed in open source the way Matt does, as I explained last week. I think there needs to be competition in the writer's UI for WordPress, and in all other areas of the user interface. I think that's what it suffers from. There isn't enough diversity. Creativity is crowded into a very small space: plug-ins. Because there's an API that covers the full functionality of the product, there's no technical reason it has to be this way. I believe in competition, because it encourages listening. People don't listen to their friends, I've discovered, but people do listen to their competitors.

The community is paralyzed, it can't fix basic problems that have been there forever. Gutenberg was a good idea for a site designer and a not-good approach for writers. But it should always be a choice for writers, if they like Gutenberg. There should be no single recommended editor for WordPress.

Imho, there are ways to navigate this landscape, but it's going to require immediate and radical restructuring.

WordPress is not the last hope of the web and the web is not going to disappear in our lifetimes. Everything is built on it. People who say it's about to disappear are alarmist purveyors of clickbait. You'll still be able to ship apps for the web five and ten years from now. But WordPress is an important part of the web, and I don't mean because it runs a certain percentage of all the sites, which is imho a meaningless stat. It's a uniquely valuable API and an implementation that's debugged and scales and is profitable, and can sustain a large organization supporting it. It's one of those things we could lose, but we'd be much poorer if that happened.

WordPress is unique among the products that came to us from Silicon Valley. It's universally useful and it doesn't lock you in.

If a product like EmDash were to be the successor it claims to be, well there goes all the open stuff, because I don't think they have it in their blood the way Matt and WordPress do.

My conclusion, after being a software developer since the early days of Unix and personal computers, and at times being part of Silicon Valley -- there have to be a variety of UIs for WordPress, where all our work is compatible regardless of what our tools look like, even if the approaches for users are radically different. It's been a monoculture, and imho that's the problem. Break it apart, yet retain the compatibility -- that's the most powerful position possible in tech.

PS: After writing this piece, I looked for early references to Matt on my blog, and came across this piece he wrote in 2006. He totally understood what was going on in RSS land. Here's another Matt post from 2010, which after reading I concluded that he saw WordPress as I saw it and still do, as the rightful heir to the legacy of Twitter. And Matt and team did develop the API he talks about in 2010 as hypothetical, it's there now, ready to lead us out of the darkness.

[$] The first half of the 7.1 merge window [LWN.net]

The 7.1 merge window opened on April 12 with the release of the 7.0 kernel. Since then, 3,855 non-merge changesets have been pulled into the mainline repository for the next release. This merge window is thus just getting started, but there has still been a fair amount of interesting work moving into the mainline.

KDE Gear 26.04 released [LWN.net]

Version 26.04 of the KDE Gear collection of applications has been released. Notable changes include improvements in the Merkuro Calendar schedule view and event editor, support for threads in the NeoChat Matrix chat client, as well as the ability to add keyboard shortcuts in the Dolphin file manager "to nearly any option in any menu, plugin or extension". See the changelog for a full list of updates, enhancements, and bug fixes.

Security updates for Thursday [LWN.net]

Security updates have been issued by AlmaLinux (bind, bind9.16, bind9.18, cockpit, fence-agents, firefox, fontforge, git-lfs, grafana, grafana-pcp, kernel, nghttp2, nginx, nginx:1.24, nginx:1.26, nodejs:20, nodejs:22, nodejs:24, pcs, perl-XML-Parser, perl:5.32, resource-agents, squid:4, thunderbird, and vim), Debian (incus, lxd, and python3.9), Fedora (cef, composer, erlang, libpng, micropython, mingw-openexr, moby-engine, NetworkManager-ssh, perl, perl-Devel-Cover, perl-PAR-Packer, polymake, pypy, python-cairosvg, python-flask-httpauth, and python3.15), Mageia (kernel, kmod-virtualbox, kmod-xtables-addons and kernel-linus), Oracle (cockpit, bind, bind9.16, bind9.18, firefox, git-lfs, go-toolset:ol8, grafana, grafana-pcp, grub2, kea, kernel, libtiff, nghttp2, nginx, nginx:1.24, nginx:1.26, nodejs22, nodejs24, nodejs:22, nodejs:24, perl-XML-Parser, python3.9, thunderbird, uek-kernel, and vim), Red Hat (delve, go-toolset:rhel8, golang, golang-github-openprinting-ipp-usb, osbuild-composer, and rhc), SUSE (bind, Botan, cockpit, cockpit-subscriptions, expat, flatpak, glibc, goshs, himmelblau, kea, kernel, kubo, libpng16, libssh, log4j, mariadb, Mesa, netty, netty-tcnative, nfs-utils, nghttp2, nodejs20, openssl-3, pam, pcre2, python, python310, python311, python311-aiohttp, python311-rfc3161-client, python313, python36, rubygem-bundler, sqlite3, sudo, tigervnc, tomcat, tomcat10, tomcat11, util-linux, vim, and webkit2gtk3), and Ubuntu (dotnet8, dotnet9, dotnet10, frr, and linux-azure, linux-azure-4.15).

13:35

Link [Scripting News]

WordPress has remained mostly constant for its whole life, at least from the point of view of this outsider. But now the world it's embedded in is turning upside down and WordPress must change, but no one really knows how that change will manifest.

Link [Scripting News]

The opmlProjectEditor explainer needed a light edit. It should make a lot more sense now if you're a newcomer. Perhaps the most important thing is that it now includes an example that you can open in Drummer to see how it works in an outline editor. OPML has become a really important format for apps that use RSS, but it's far more broadly adaptable. It's a good package for a whole app, and you can teach your tools to use the common structure to make it easier to share it with others, to keep a repo current, and to deploy the resulting code.

13:28

Tim Bradshaw: Structures of arrays [Planet Lisp]

Or, second system.

A while ago, I decided that I’d like to test my intuition that Lisp (specifically implementations of Common Lisp) was not, in fact, bad at floating-point code and that the ease of designing languages in Lisp could make traditional Fortran-style array-bashing numerical code pretty pleasant to write.

I used an intentionally naïve numerical solution to a gravitating many-body system as a benchmark, so I could easily compare Lisp & C versions. The brief result is that the Lisp code is a little slower than C, but not much: Lisp is not, in fact, slow. Who knew?

The point here, though, is that I wanted to dress up the array-bashing code so it looked a lot more structured. To do this I wrote a macro which hid what was in fact an array of (for instance) double floats behind a bunch of syntax which made it look like an array of structures. That macro took a couple of hours.

This was fine and pretty simple, but it only dealt with a single type for each conceptual array of objects, there was no inheritance and it was restricted in various other ways. In particular it really was syntactic sugar on a vector: there was no distinct implementational type at all. So I thought well, I could make it more general and nicer.

Big mistake.

The second system

Here is an example of what I wanted to be able to do (this is in fact the current syntax):

(define-soa-class example ()
  ((x :array t :type double-float)
   (y :array t :type double-float)
   (p :array t :type double-float :group pq)
   (q :array t :type double-float :group pq)
   (r :array t :type fixnum)
   (s)))

This defines a class, instances of which have five array slots and one scalar slot. Of the array slots:

  • x and y share an array and will be neighbouring elements;
  • p and q share a different array, because the group option says they must not share with x and y;
  • r will be in its own array, unless the upgraded element type of fixnum is the same as that of double-float;
  • s is just a slot.

The implementation will tell you this:

> (describe (make-instance 'example :dimensions '(2 2)))
#<example 8010059EEB> is an example
[...]
dimensions      (2 2)
total-size      4
rank            2
tick            1
its class example has a valid layout
it has 3 arrays:
 index 0, element type double-float, 2 slots
 index 1, element type (signed-byte 64), 1 slot
 index 2, element type double-float, 2 slots
it has 5 array slots:
 name x, index 0 offset 0
 name y, index 0 offset 1
 name r, index 1 offset 0
 name p, index 2 offset 0
 name q, index 2 offset 1

This is already too complicated: the ability to control sharing via groups is almost certainly never going to be useful: it’s only even there because I thought of it quite early on and never removed it.

The class definition macro then needs to arrange life so that enough information is available for a macro to be written which turns indexed slot access into indexed array access of the underlying arrays which are secretly stored in instances, inserting declarations to make this as fast as possible: anything slower than explicit array access is not acceptable. This might (and does) look like this, for example:

(with-array-slots (x y) (thing example)
  (for* ((i ...) (j ...))
    (setf (x i j) (- (y i j) (y j i)))))
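To make the performance requirement concrete, here is a hedged sketch (not the library's real expansion) of the kind of code such a macro must ultimately produce. It assumes, following the describe output above, that x and y share one underlying array a0 at offsets 0 and 1, so a rank-2 object of dimensions (n n) is backed by a rank-3 array of dimensions (n n 2); step-example is a hypothetical name:

```lisp
;; Sketch only: what (setf (x i j) (- (y i j) (y j i))) might compile
;; into, assuming x and y live at offsets 0 and 1 of a shared rank-3
;; array a0.  The declarations are what make this as fast as explicit
;; array access.
(defun step-example (a0 n)
  (declare (type (simple-array double-float (* * 2)) a0)
           (type fixnum n)
           (optimize (speed 3)))
  (dotimes (i n)
    (dotimes (j n)
      ;; (setf (x i j) (- (y i j) (y j i))) becomes:
      (setf (aref a0 i j 0)
            (- (aref a0 i j 1) (aref a0 j i 1)))))
  a0)
```

Because the writes touch only offset 0 and the reads only offset 1, the interleaved layout introduces no aliasing between x and y.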

As you can see from this, the resulting objects should be allowed to have rank other than 1. Inheritance should also work, including for array slots. Redefinition should be supported, with obsolete macro expansions and instances at least detected.

In other words there are exactly two things I should have aimed at achieving: the ability to define fields of various types and have them grouped into (generally fewer) underlying arrays, and an implementational type to hold these things. Everything else was just unnecessary baggage which made the implementation much more complicated than it needed to be.

I had not finished making mistakes. The system needs to store some metadata about how slots map onto the underlying arrays, element types and so on, so the macro can use this to compile efficient code. There are two obvious ways to do this: use the property list of the class name, or subclass standard-class and store the metadata in the class. The first approach is simple, portable, has clear semantics, but it’s ‘hacky’; the second is more complicated, not portable, has unclear semantics1, but it’s The Right Thing2. Another wrong decision I made without even trying.
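For concreteness, the first of those two approaches can be sketched in a few lines. This is my illustration, not the author's code: define-soa-metadata is a hypothetical macro, and the (slot array-index offset) layout is an assumed shape taken from the describe output above.

```lisp
;; The simple, portable approach: stash the slot-to-array layout on the
;; property list of the class name at compile time, so the access macro
;; can read it back with GET during its own macroexpansion.
(defmacro define-soa-metadata (class-name layout)
  `(eval-when (:compile-toplevel :load-toplevel :execute)
     (setf (get ',class-name 'soa-layout) ',layout)))

;; Entries are (slot array-index offset), matching the example class.
(define-soa-metadata example
  ((x 0 0) (y 0 1) (r 1 0) (p 2 0) (q 2 1)))

;; An access macro can now recover the layout at expansion time:
(get 'example 'soa-layout)
</code>
```

The whole mechanism is a handful of lines and works in any conforming implementation; the metaclass route buys integration with CLOS at the price of the MOP portability problems described below.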

The only thing that saved me was that the nature of software is that you can only make a finite number of bad decisions in a finite time.

More bad decisions

I was not done. Early on, I thought that, well, I could make this whole thing be a shim around defstruct: single inheritance was more than enough, and obviously I could store metadata on the property list of the type name as described above. And there’s no nausea with multiple accessors or any of that nonsense.

But, somehow, I found writing a thing which would process the (structure-name ...) case of defstruct too painful, so I decided to go for the shim-around-defclass version instead. I even have a partly-complete version of the defstructy code which I abandoned. Another mistake.

I also decided that The Right Thing was to have the system support objects of rank 0. That constrains the underlying array representation (it needs to use rank \(n+1\) arrays for an object of rank \(n\)) in a way which I thought for a long time might limit performance.
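The constraint can be seen in a tiny sketch (mine, not the library's): if each group of k slots is interleaved into one array, a rank-n object needs a rank-(n+1) backing array, so that even a rank-0 object gets a genuine rank-1 array rather than a bare scalar; make-group-array is a hypothetical name.

```lisp
;; A rank-n object with SLOT-COUNT interleaved slots in one group is
;; backed by a rank-(n+1) array: the extra, innermost axis indexes the
;; slots.  A rank-0 object therefore still gets a rank-1 array.
(defun make-group-array (dimensions slot-count)
  (make-array (append dimensions (list slot-count))
              :element-type 'double-float
              :initial-element 0d0))
```

So (make-group-array '() 2) yields a rank-1 array of two elements, and (make-group-array '(2 2) 2) a rank-3 array, matching the describe output earlier.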

Things I already knew

At any point during the implementation of this I could have told you that it was too general and the implementation was going to be too complicated for no real gain. I don’t know why I made so many bad choices.

The whole process took weeks and I nearly just gave up several times.

The light at the end of the tunnel

Or: all-up testing.

Eventually, I had a thing I thought might work. The macro syntax was a bit ugly (that macro still exists, with a different name) but it seemed to work. But since the whole purpose of the thing was performance, that needed to be checked. I wasn’t optimistic.

What I did was to write a version of my naïve gravitational many-body system using the new code, based closely on the previous one. The function that updates the state of the particles looks like this:

(defun/quickly step-pvs (source destination from below dt G &aux
                                (n (particle-vector-length source)))
  ;; Step a source particle vector into a destination one.
  ;;
  ;; Operation count:
  ;;  3
  ;;  + (below - from) * (n - 1) * (3 + 8 + 9)
  ;;  + (below - from) * (12 + 6)
  ;;  = (below - from) * (20 * (n - 1) + 18) + 3
  (declare (type particle-vector source destination)
           (type vector-index from)
           (type vector-dimension below)
           (type fpv dt G)
           (type vector-dimension n))
  (when (eq source destination)
    (error "botch"))
  (let*/fpv ((Gdt (* G dt))
             (Gdt^2/2 (/ (* Gdt dt) (fpv 2.0))))
    (binding-array-slots (((source particle-vector :check nil :rank 1 :suffix _s)
                           m x y z vx vy vz)
                          ((destination particle-vector :check nil :rank 1 :suffix _d)
                           m x y z vx vy vz))
      (for ((i1 (in-naturals :initially from :bound below :fixnum t)))
        (let/fpv ((ax/G zero.fpv)
                  (ay/G zero.fpv)
                  (az/G zero.fpv)
                  (x1 (x_s i1))
                  (y1 (y_s i1))
                  (z1 (z_s i1))
                  (vx1 (vx_s i1))
                  (vy1 (vy_s i1))
                  (vz1 (vz_s i1)))
          (for ((i2 (in-naturals n t)))
            (when (= i1 i2) (next))
            (let/fpv ((m2 (m_s i2))
                      (x2 (x_s i2))
                      (y2 (y_s i2))
                      (z2 (z_s i2)))
              (let/fpv ((rx (- x2 x1))
                        (ry (- y2 y1))
                        (rz (- z2 z1)))
                (let/fpv ((r^3 (let* ((r^2 (+ (* rx rx) (* ry ry) (* rz rz)))
                                      (r (sqrt r^2)))
                                 (declare (type nonnegative-fpv r^2 r))
                                 (* r r r))))
                  (incf ax/G (/ (* rx m2) r^3))
                  (incf ay/G (/ (* ry m2) r^3))
                  (incf az/G (/ (* rz m2) r^3))))))
          (setf (x_d i1) (+ x1 (* vx1 dt) (* ax/G Gdt^2/2))
                (y_d i1) (+ y1 (* vy1 dt) (* ay/G Gdt^2/2))
                (z_d i1) (+ z1 (* vz1 dt) (* az/G Gdt^2/2)))
          (setf (vx_d i1) (+ vx1 (* ax/G Gdt))
                (vy_d i1) (+ vy1 (* ay/G Gdt))
                (vz_d i1) (+ vz1 (* az/G Gdt)))))))
  destination)

And not only did it work: the performance was very close to that of the previous version, straight out of the gate. The syntax is not as nice as that of the initial, quick-and-dirty version, but it is much more general, so I think that's worth it on the whole.

There have been problems since then: in particular the dependency on when classes get defined. It will never be as portable as I’d like because of the unnecessary MOP dependencies3, but it is usable and quick4.

Was it worth it? Maybe, but it should have been simpler.


  1. When exactly do classes get defined? Right. 

  2. Nothing that uses the AMOP MOP is ever The Right Thing, because the whole thing was designed by people who were extremely smart, but still not as smart as they needed to be and thought they were. It’s unclear if any MOP for CLOS can ever be satisfactory, in part because CLOS itself suffers from the same smart-but-not-smart-enough problem, not helped by its being dropped wholesale into CL at the last minute: by the time CL was standardised people had written large systems in it, but almost nobody had written anything significant using CLOS, let alone the AMOP MOP. 

  3. A mistake I somehow managed to avoid was using the whole slot-definition mechanism the MOP wants you to use. 

  4. I will make it available at some point. 

12:49

Pluralistic: A Pascal's Wager for AI Doomers (16 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



A killer 1940s robot zapping two large domes with eye-lasers; trapped under the domes are two children, taken from 1910s photos of child laborers; one, a little girl in a straw hat, is holding two heavy buckets. The other, a newsie with a shoulder bag, is picking his nose. The background is the collapsing pillars seen in Dore's engraving of The Death of Solomon.

A Pascal's Wager for AI Doomers (permalink)

Lest anyone accuse me of bargaining in bad faith here, let me start with this admission: I don't think AI is intelligent; nor do I think that the current (admittedly impressive) statistical techniques will lead to intelligence. I think worrying about what we'll do if AI becomes intelligent is at best a distraction and at worst a cynical marketing ploy:

https://locusmag.com/feature/cory-doctorow-full-employment/

Now, that said: among some of the "AI doomers," I recognize kindred spirits. I, too, worry about technologies controlled by corporations that have grown so powerful that they defy regulation. I worry about how those technologies are used against us, and about how the corporations that make them are fusing with authoritarian states to create a totalitarian nightmare. I worry that technology is used to spy on and immiserate workers.

I just don't think we need AI to do those things. I think we should already be worried about those things.

Last week, I had a version of this discussion in front of several hundred people at the Bronfman Lecture in Montreal, where I appeared with Astra Taylor and Yoshua Bengio (co-winner of the Turing Prize for his work creating the "deep learning" techniques powering today's AI surge), on a panel moderated by CBC Ideas host Nahlah Ayed:

https://www.eventbrite.ca/e/artificial-intelligence-the-ultimate-disrupter-tickets-1982706623885

It's safe to say that Bengio and I mostly disagree about AI. He's running an initiative called "Lawzero," whose goal is to create an international AI consortium that produces AI as a "digital public good" that is designed to be open, auditable, transparent and safe:

http://lawzero.org

Bengio said he'd started Lawzero because he was convinced that AI was going to get a lot more powerful, and, in the absence of some public-spirited version of AI, we would be subject to all kinds of manipulation and surveillance, and that the resulting chaos would present a civilizational risk.

Now, as I've stated (and as I said onstage) I am not worried about any of this. I am worried about AI, though. I'm worried a fast-talking AI salesman will convince your boss to fire you and replace you with an AI that can't do your job (the salesman will be pushing on an open door, since if there's one thing bosses hate, it's paying workers).

I'm worried that the seven companies that comprise 35% of the S&P 500 are headed for bankruptcy, as soon as someone makes them stop passing around the same $100b IOU while pretending it's in all their bank accounts at once. I'm worried that when that happens, the chatbots that badly do the jobs of the people who were fired because of the AI salesman will go away, and nothing and no one will do those jobs. I'm worried that the chaos caused by vaporizing a third of the stock market will lead to austerity and thence to fascism:

https://pluralistic.net/2026/04/13/always-great/#our-nhs

I worry that the workers who did those jobs will be scattered to the four winds, retrained or "discouraged" or retired, and that the priceless process knowledge they developed over generations will be wiped out and we will have to rebuild it amidst the economic and political chaos of the burst AI bubble:

https://pluralistic.net/2026/04/08/process-knowledge-vs-bosses/#wash-dishes-cut-wood

In short, I worry that AI is the asbestos we're shoveling into our civilization's walls, and our descendants will be digging it out for generations:

https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes

But Bengio disagrees. He's very smart, and very accomplished, and he's very certain that AI is about to become "superhuman" and do horrible things to us if we don't get a handle on it. Several times at our events, he insisted that the existence of this possibility made it wildly irresponsible not to take measures to mitigate this risk.

Though I didn't say so at the time, this struck me as an AI-inflected version of Pascal's wager:

A rational person should adopt a lifestyle consistent with the existence of God and should strive to believe in God… if God does not exist, the believer incurs only finite losses, potentially sacrificing certain pleasures and luxuries; if God does exist, the believer stands to gain immeasurably, as represented for example by an eternity in Heaven in Abrahamic tradition, while simultaneously avoiding boundless losses associated with an eternity in Hell.

https://en.wikipedia.org/wiki/Pascal%27s_wager

Smarter people than me have been poking holes in Pascal's wager for more than 350 years. But when it comes to this modern Pascal's AI Wager, I have my own objection: how do you know when you've lost?

As of this moment, the human race has lit more than $1.4t on fire to immanentize this eschaton, and it remains stubbornly disimmanentized. How much more do we need to spend before we're certain that god isn't lurking in the word-guessing program? Sam Altman says it'll take another $2-3t – call it six months' worth of all US federal spending. If we do that and we still haven't met god, are we done? Can we call it a day?

Not according to Elon Musk. Musk says we need to deconstruct the solar system and build a Dyson sphere out of all the planets to completely encase the sun, so we can harvest every photon it emits to power our word-guessing programs:

https://www.pcmag.com/news/elons-next-big-swing-dyson-sphere-satellites-that-harness-the-suns-power

So let's say we do that and we still haven't met god – are we done? I don't see why we would be. After all, Musk's contention isn't that our sun emits one eschaton's worth of immanentizing particles. Musk just thinks that we need a lot of these sunbeams to coax god into our plane of existence. If one sun won't do it, perhaps two? Or two hundred? Or two thousand? Once we've committed the entire human species to this god-bothering project to the extent of putting two kilosuns into harness, wouldn't we be nuts to stop there? What if god is lurking in the two thousand and first sun? Making god out of algorithms is like spelling "banana" – easy to start, hard to stop.

But as Bengio and I got into it together on stage at the Montreal Centre, it occurred to me that maybe there was some common ground between us. After all, when someone starts talking about "humane technology" that respects our privacy and works for people rather than their bosses, my ears grow points. Throw in the phrase "international digital public goods" and you've got my undivided attention.

Because there's a sense in which Bengio and I are worried about exactly the same thing. I'm terrified that our planet has been colonized by artificial lifeforms that we constructed, but which have slipped our control. I'm terrified that these lifeforms corrupt our knowledge-creation process, making it impossible for us to know what's true and what isn't. I'm terrified that these lifeforms have conquered our apparatus of state – our legislatures, agencies and courts – and so that these public bodies work against the public and for our colonizing alien overlords.

The difference is, the artificial lifeforms that worry me aren't hypothetical – they're here today, amongst us, endangering the very survival of our species. These artificial lifeforms are called "limited liability corporations" and they are a concrete, imminent risk to the human race:

https://pluralistic.net/2026/04/15/artificial-lifeforms/#moral-consideration

What's more, challenging these artificial lifeforms will require us to build massive, "international, digital public goods": a post-American internet of free/open, auditable, transparent, enshittification-resistant platforms and firmware for every purpose and device currently in service:

https://pluralistic.net/2026/01/01/39c3/#the-new-coalition

And even after we've built that massive, international, digital public good, we'll still face the challenge of migrating all of our systems and loved ones out of the enshitternet of defective, spying, controlling American tech exports:

https://pluralistic.net/2026/01/30/zucksauce/#gandersauce

Every moment that we remain stuck in the enshitternet is a moment of existential risk. At the click of a mouse, Trump could order John Deere to switch off all the tractors in your country:

https://pluralistic.net/2022/05/08/about-those-kill-switched-ukrainian-tractors/

He doesn't need tanks to steal Greenland. He can just shut off Denmark's access to American platforms like Office365, iOS and Android and brick the whole damned country. It would be another Strait of Hormuz, but instead of oil and fertilizer, he'd control the flow of Lego, Ozempic and deliciously strong black licorice:

https://pluralistic.net/2026/01/29/post-american-canada/#ottawa

These aren't risks that could develop in the future. They're the risks we're confronted with today and frankly, they're fucking terrifying.

So here's my side-bet on Pascal's Wager. If you think we need to build "international digital public goods" to head off the future risk of a colonizing, remorseless, malevolent artificial lifeform, then let us agree that the prototype for that project is the "international digital public goods" we need right now to usher in the post-American internet and save ourselves from the colonizing, remorseless, malevolent artificial lifeforms that have already got their blood-funnels jammed down our throats.

Once we defeat those alien invaders, we may find that all the people who are trying to summon the evil god have lost the wherewithal to do so, and your crisis will have been averted. But if that's not the case and the evil god still looms on our horizon, then I will make it my business to help you mobilize the legions of skilled international digital public goods producers who are still flush from their victory over the limited liability corporation, and together, we will fight the evil god you swear is in our future.

I think that's a pretty solid offer.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Every pirate ebook on the internet https://web.archive.org/web/20010724030402/https://citizen513.cjb.net/

#20yrsago Retired generals diss Donald Rumsfeld https://nielsenhayden.com/makinglight/archives/007432.html#007432

#20yrsago How to break HDCP https://blog.citp.princeton.edu/2006/04/14/making-and-breaking-hdcp-handshakes/

#20yrsago How Sun’s “open DRM” dooms them and all they touch https://memex.craphound.com/2006/04/14/how-suns-open-drm-dooms-them-and-all-they-touch/

#20yrsago Benkler's "Wealth of Networks" http://www.congo-education.net/wealth-of-networks/

#15yrsago Scientific management’s unscientific grounding: the Management Myth https://web.archive.org/web/20120823212827/https://www.theatlantic.com/magazine/archive/2006/06/the-management-myth/304883/

#15yrsago 216 “untranslatable” emotional words from non-English languages https://www.drtimlomas.com/lexicography/cm4mi/lexicography#!lexicography/cm4mi

#10yrsago New York public employees union will vote on pulling out of hedge funds https://web.archive.org/web/20160414230326/https://www.bloomberg.com/news/articles/2016-04-13/nyc-pension-weighs-liquidating-1-5-billion-hedge-fund-portfolio

#10yrsago Panama’s public prosecutor says he can’t find any evidence of Mossack-Fonseca’s lawbreaking https://web.archive.org/web/20160419165306/https://www.thejournal.ie/mossack-fonseca-prosecution-2714795-Apr2016/?utm_source=twitter_self

#10yrsago Bernie Sanders responds to CEOs of Verizon and GE: “I welcome their contempt” https://web.archive.org/web/20160415165051/https://www.businessinsider.com/bernie-sanders-verizon-contempt-2016-4

#10yrsago Let’s Encrypt is actually encrypting the whole Web https://www.wired.com/2016/04/scheme-encrypt-entire-web-actually-working/

#10yrsago City of San Francisco tells man he can’t live in wooden box in friend’s living room https://www.theguardian.com/us-news/2016/apr/13/san-francisco-new-home-rented-box-illegal?CMP=tmb_gu

#10yrsago How the UK’s biggest pharmacy chain went from family-run public service to debt-laden hedge-fund disaster https://www.theguardian.com/news/2016/apr/13/how-boots-went-rogue

#10yrsago Ohio newspaper chain owner says his papers don’t publish articles about LGBTQ people https://ideatrash.net/2016/04/the-owner-of-four-town-papers-in-ohio.html

#10yrsago How British journalists talk about people they’re not allowed to talk about https://web.archive.org/web/20160414152933/https://popbitch.com/home/2016/03/31/up-the-injunction/

#10yrsago Brussels terrorists kept their plans in an unencrypted folder called “TARGET” https://www.techdirt.com/2016/04/14/brussels-terrorist-laptop-included-details-planned-attack-unencrypted-folder-titled-target/

#10yrsago Ron Wyden vows to filibuster anti-cryptography bill https://www.techdirt.com/2016/04/14/burr-feinstein-officially-release-anti-encryption-bill-as-wyden-promises-to-filibuster-it/

#10yrsago Paramount wants to kill a fan-film by claiming copyright on the Klingon language https://torrentfreak.com/paramount-we-do-own-the-klingon-language-and-warships-160414/

#5yrsago Murder Offsets https://pluralistic.net/2021/04/14/for-sale-green-indulgences/#killer-analogy

#5yrsago The FCC wants your broadband measurements https://pluralistic.net/2021/04/14/for-sale-green-indulgences/#fly-my-pretties

#1yrago Machina economicus https://pluralistic.net/2025/04/14/timmy-share/#a-superior-moral-justification-for-selfishness


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

12:00

Meet the Scope Creep Kraken [Radar]

The following article was originally published on Tim O’Brien’s Medium page and is being reposted here with the author’s permission.

If you’ve spent any time around AI-assisted software work, you already know the moment when the Scope Creep Kraken first puts a tentacle on the boat.

The project begins with a real goal and, usually, a sensible one. Build the internal tool. Clean up the reporting flow. Add the missing admin screen. Then someone discovers that the model can generate a Swift application in minutes to render this on an iPhone, and the mood in the room changes.

“Why not? We can render this on an iOS application, and it will only take 10 minutes. Go for it. These tools are amazing. Wow.”

That first idea is often genuinely useful. Something that might have taken a week now takes an hour. That is part of what makes the pattern so seductive. It doesn’t begin with incompetence. It begins with tool-driven momentum.

The meeting continues, “Let’s put the entire year’s backlog into the system and see if we can get this all done in a week. Ignore the token spend limits, let’s just get this done.” What was a reasonable weekly release meeting has now set the stage for a rapid expansion in scope, and that’s how the Scope Creep Kraken takes over.

Scope creep is older than AI, of course. Software teams have been haunted by “while we’re at it” long before anybody was pasting stack traces into a chat window. What AI changed was the rate of growth. In the old version of this problem, extra scope still had to fight its way through staffing constraints. Somebody had to build the feature, debug it, test it, and explain why it belonged. That friction was often the only thing standing between a focused project and an over-extended team.

AI broke that.

Now the extra feature often arrives with a demo attached. “Could we add multi-language support?” Forty-five seconds later, there is a branch. “What about generated documentation?” Sure, why not? “Could the CLI accept natural language commands?” The model appears optimistic, which is enough to make the whole thing sound temporarily reasonable. Each addition looks manageable in isolation. That is how the Kraken works. It does not attack all at once. It wraps around the project one small grip at a time.

Signs the Kraken is already on your boat

  • Features appearing without a ticket
  • Branches nobody asked for
  • Demos replacing design decisions
  • “It only took the model 30 seconds.”

The part I keep seeing on teams is not reckless ambition so much as confident improvisation. People are reacting to real capability. They are not wrong to be excited that so much is suddenly possible.

The trouble starts when “we can generate this quickly” quietly replaces “we decided this belongs in the project.” Those are not the same sentence.

For a while, the Kraken even looks helpful. Output goes up. Screens appear. Branches multiply. People feel productive, and sometimes they really are productive in the narrow local sense. What gets hidden in that burst of visible progress is integration cost. Every tentacle has to be tested with every other tentacle. Every generated convenience becomes a maintenance obligation. Every small addition pulls the project a little farther from the problem it originally set out to solve.

The product manager might chime in, “A mobile application? I didn’t ask for that, but I guess it’s good. We’ll see. Who’s going to review this with the customer?”

That is usually when the team realizes the Kraken is already on the boat. The original sponsor asked for a hammer and is now watching a Swiss Army knife unfold in real time, with several blades no one asked for and at least one that does not seem to fold back in properly.

AI also makes it dangerously easy to confuse demonstrations with decisions.

The useful response is not to become suspicious of every experiment. Some of the first tentacles are worth keeping. The response is to put the old discipline back where AI made it easy to remove. Keep a written scope. Treat additions as actual decisions rather than prompt side effects. Ask what each new feature does to testing, documentation, support, and the team’s ability to explain the system six months from now. If nobody can answer those questions, the feature is not “done” just because the model produced a convincing draft.

What makes the Scope Creep Kraken a good name is that teams can use it in the moment. Once people can say, “This is another tentacle,” the conversation gets clearer. You are no longer arguing about whether the idea is clever. You are asking whether it is motivated by a requirement or a capability.

11:21

Human Trust of AI Agents [Schneier on Security]

Interesting research: “Humans expect rationality and cooperation from LLM opponents in strategic games.”

Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLMs opponents in strategic settings. We present the results of the first controlled monetarily-incentivised laboratory experiment looking at differences in human behaviour in a multi-player p-beauty contest against other humans and LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than humans, which is mainly driven by the increased prevalence of ‘zero’ Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to perceived LLM’s reasoning ability and, unexpectedly, propensity towards cooperation. Our findings provide foundational insights into the multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects’ behaviour and beliefs about LLM’s play when playing against them, and suggest important implications for mechanism design in mixed human-LLM systems.

11:14

Grrl Power #1452 – Meat ugly [Grrl Power]

Honestly, if Sydney had gotten an 11-alarm Indian dish like atomic lamb korma, her body might not have even noticed the meat. I like a little spice when I get Indian, or a little heat on my Thai Fried Rice or Pad See Ew, but it’s easy to cross the point where the heat is so strong you can’t really tell what you’re eating. Is it lamb korma or chicken korma? There’s usually a slight texture difference, but when the flavor is that overwhelming, texture gets to be the only difference.

I’m sure there are tales of de-vegetarianizing out there. It’s probably best if done gradually. Sydney will have it relatively easy since fish and eggs and cheese are still a part of her diet. I imagine a vegan diving straight into a juicy burger would experience maximum distress. I’ve been on various diets in my life, including some very low fat ones, and I can tell you, 4-6 weeks of low fat followed by a cheat day starting with a big plate of bacon leads to… well, not-half-measures on the toilet.

Personally my favorite “diet” was busting my ass in the gym during my thirties, but I’ve fucked up my shoulders badly enough now that I can’t do about half the exercises I used to. I miss military press. :/


Finally, here we go! I took the suggestion that I just use an existing panel for a starting point, thinking it would save time… I guess it technically did, but a 5 character vote incentive just isn’t the way to go.

Patreon, of course, has the actual topless version.

 

 


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

10:21

The definitive study of seed oil and health [Seth's Blog]

That’s the appeal of it, of course. There isn’t a definitive study. There can’t be.

Even if we created a forty-year-long, double-blind twin study, there’d be room for someone to ask “what about…?”

It doesn’t matter that the peer-reviewed and consistent results we have are clear to those who read them with an open mind.

The attraction of simple stories about complex phenomena is that we get to make them up and imbue them with whatever reassurance, solace or threat we choose. Human beings didn’t evolve to be rational decision makers. We’re creators and consumers of stories, seeking status and affiliation, and prioritizing short-term feelings over long-term evidence.

It’s nice when a story that’s precious to us is reinforced by evidence, but it’s rarely essential. Belief isn’t dependent on facts; that’s why we call it belief instead of fact.

It’s helpful to wonder who benefits from sharing a particular story with us, and what it costs us to believe it.

06:28

Urgent: Private equity in 401Ks [Richard Stallman's Political Notes]

US citizens: call on Congress to stop private equity from taking over Americans' 401(k) retirement plans.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Tax the rich [Richard Stallman's Political Notes]

US citizens: call on Congress to make the ultra rich pay their fair share.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: War profiteering [Richard Stallman's Political Notes]

US citizens: call on Congress to investigate possible war profiteering.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Orbán voted out of power [Richard Stallman's Political Notes]

Hungary voted authoritarian Orbán and his party out of power.

I have to wonder whether Peter Magyar, who only two years ago was a member of Orbán's party, wishes to undo all the damage Orbán has done. Has he said that he does?

Iran's war-ending demands [Richard Stallman's Political Notes]

Iran's conditions for ending the war include big demands.

  • The lifting of all primary and secondary sanctions on Iran.
  • Continued Iranian control over the Strait of Hormuz.
  • US military withdrawal from the Middle East.
  • An end to attacks on Iran and its allies.
  • The release of frozen Iranian assets.
  • A UN security council resolution making any deal binding.
An end to attacks, the lifting of sanctions, and the end of freezes on assets, are natural concomitants of a permanent peace. However, demanding Iranian control over the strait, and total US withdrawal, are in effect demands for the US to surrender. I doubt any US government would accept peace on such terms.

Sharing the control between Iran and Oman might make it more acceptable as a demand.

Response to Iran's demands [Richard Stallman's Political Notes]

The disagreement about what the supposedly agreed cease-fire with Iran requires has become sharp: Israel is back to bombing Lebanon and Iran has closed the Strait of Hormuz again.

US threatens more bombing in Iran [Richard Stallman's Political Notes]

The temporary cease-fire negotiations failed, so the monster is raging again and threatening attacks on Iran's vital civilian facilities, including water plants. Threats like that are just the thing to convince Iran's rulers they need nuclear weapons, like Mr Kim.

He also said the US would blockade the Straits of Hormuz, which basically repeats what Iran has already done, except that it would block Iran's oil exports as well as other countries' oil exports. Perhaps the two countries could take a step towards reconciliation by establishing joint patrols to make sure no oil exports travel through the straits ;-!

We would all be so much better off if the wrecker had not canceled the non-nuclear deal with Iran that Obama negotiated.

Israel bombing hospitals in Lebanon [Richard Stallman's Political Notes]

*Israel got away with targeting [medical facilities] in Gaza. It's no surprise it is doing it in Lebanon too.*

The Israel lobby in US [Richard Stallman's Political Notes]

AIPAC is disguising its campaign funding through organizations which in no way indicate their connection with AIPAC, or that they choose candidates based on their positions about Israel and Palestine.

As expected, the Democratic National Committee rejected a resolution to condemn the influence of AIPAC, and of dark money generally.

It is dominated by "moderate" (right-wing) Democrats, who serve the power of rich donors. Some rich donors are Zionists, and some are evangelical Christians, and they support Israel's injustice towards Palestinians.

Meanwhile, almost all of them demand to keep most of America's wealth flowing to the rich, by rejecting the New Deal policies that most Americans support.

Last month's US heatwave [Richard Stallman's Political Notes]

*The continental US registered its most abnormally hot month in 132 years of records.* And it will get worse and worse over the coming decades.

Oil pipelines as wanted [Richard Stallman's Political Notes]

It may be possible to negotiate peace between the US and Iran, but the bully still thinks he can get total victory through willingness to cause damage.

The Straits of Hormuz tollbooth can be very lucrative for Iran (and maybe Oman too) until someone builds a competing toll road. One of the Gulf states (Qatar?) has a pipeline that bypasses the straits and arrives at the open sea, but it can carry only a fraction of the total rate of export. Part of the UAE's shore is outside the straits but it isn't set up to do all its export through there.

After a few years of construction, Qatar and the UAE could arrange to export their whole production, bypassing the straits by a wide margin.

US Foreign Service-crippling effect [Richard Stallman's Political Notes]

Rubio decimated the US Foreign Service (at the wrecker's orders), firing (among others) the staff that had experience in organizing evacuations of American civilians from countries where war breaks out.

As a result, once the wrecker launched war against Iran and made an evacuation necessary, the US had no way to do it.

US still bombing little boats [Richard Stallman's Political Notes]

The Inter-American Commission on Human Rights was looking at the legality of the deadly US campaign of bombing boats in the Caribbean and the Pacific Ocean, but the US is now threatening to retaliate against it if it does that.

Justice Dept opposing justice [Richard Stallman's Political Notes]

The corrupter has perverted the Justice Department to work systematically for the opposite of justice. It is prosecuting Cassidy Hutchinson for testifying to Congress about the corrupter's involvement in the Jan 6 attack on the Capitol.

The idea is that prosecuting her for "lying to Congress" will lead magats to blindly assume her testimony was false.

Russian psych-war agents [Richard Stallman's Political Notes]

Russian psych-war agents are manipulating vulnerable Ukrainians to make and set off bombs near where they live.

To me it is incomprehensible that these efforts can succeed, but apparently they do.

Ocean food stocks dwindling, UK [Richard Stallman's Political Notes]

Too much fishing for cod and mackerel, around Britain, is driving the population to extinction.

Regulation of hunting and fishing is one of the classic examples of how everyone can benefit in the long term by having a state that can make and enforce limits. The sea tends naturally to be an unregulated commons, because each fisher or boat crew may catch as much as it can or will. If some catch at an unsustainable rate, eventually that species will disappear and they will all be the worse for the loss.

It is possible for fishers to make a social agreement and keep it without a state. Lobster fishers in Maine and Canada adopted the practice of marking the pregnant lobsters they catch and throwing them back; this averted a previous crisis. (See The Secret Life of the Lobster by Trevor Corson.) But this sort of voluntary measure is harder to establish than to imagine, especially when fishing involves a big investment in a boat. A state can save all the fishers (and all the people who eat fish) from the consequences of their own temptation.

That is, the state can do so if it has the moral strength to stand firm against the short-term thinking of the fishers who lobby to be "allowed to catch more so they don't go broke next year."

They could not stay in business very long after wiping out the fish, but they kid themselves that this can't really happen — until it does. Cod near North America were almost wiped out, and after protection was set up, they have remained at a low population for decades despite the prohibition of catching them. Apparently, once the population gets far too low, protection is not sufficient. It needs to be done before the disaster.

The British government lacks that moral strength, and has lacked it for a long time. I wonder what the E&W Green Party says about this.

Emperor penguins in peril [Richard Stallman's Political Notes]

Emperor penguins have been listed as an endangered species because global heating effects have killed a large fraction of them.

Cuban doctors program [Richard Stallman's Political Notes]

*US accused of pressuring Latin America to cut ties with Cuban doctors program.*

Rightwing dominance in Chile [Richard Stallman's Political Notes]

Chile's right-wing extremist president has demonstrated support for the dictator, Pinochet, and specifically for his systematic torture of people who opposed him.

Death studies of recent heatwaves [Richard Stallman's Political Notes]

*Heatwaves are already breaching human limits, with worse to come, study finds. Analysis of six extreme heatwaves found that when temperature and humidity were accounted for, all were potentially deadly for older people.*

I expect the situation to become much clearer in 10 or 20 years: heatwaves then will simply be deadly to many unprotected people.

Iran and Russia as allies [Richard Stallman's Political Notes]

*US ignoring evidence Russia is helping Iran because it trusts Putin, says Zelenskyy.*

It is natural that Putin would help Iran, because Russia depends on Iran's military support as well. It is natural also that the wrecker would look for opportunities to support Putin, because that is his basic inclination and has been for many years. His support for Ukraine is reluctant and he keeps threatening to end it.

02:28

Friends To Rivals [QC RSS]

Two coffee makers AND a trap door

02:21

[$] LWN.net Weekly Edition for April 16, 2026 [LWN.net]

Inside this week's LWN.net Weekly Edition:

  • Front: LLM security reports; OpenWrt One build system; Vim forks; removing read-only THPs; 7.0 statistics; MusicBrainz Picard.
  • Briefs: OpenSSL 4.0.0; Relicensing; Servo; Zig 0.16.0; Quotes; ...
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Wednesday, 15 April

21:21

Tribblix m34 for SPARC released [OSnews]

Tribblix, the Illumos distribution focused on giving you a classic UNIX-style experience, doesn’t only support x86. It also has a branch for SPARC, which tends to run behind its x86 counterpart a little bit and has a few other limitations related to the fact SPARC is effectively no longer being developed. The Tribblix SPARC branch has been updated, and now roughly matches the latest x86 release from a few weeks ago.

The graphical libraries libtiff and OpenEXR have been updated, retaining the old shared library versions for now. OpenSSL is now from the 3.5 series with the 3.0 api by default. Bind is now from the 9.20 series. OpenSSH is now 10.2, and you may get a Post-Quantum Cryptography warning if connecting to older SSH servers.

‘zap install’ now installs dependencies by default.

‘zap create-user’ will now restrict new home directories to mode 0700 by default; use the -M flag to choose different permissions.

Support for UFS quotas has been removed.

↫ Tribblix release notes

There’s no new ISO yet, so to get to this new m34 release for SPARC you’re going to have to install from an older ISO and update from there.

20:42

View From a Hotel Window, 4/15/26: Atlanta Metro [Whatever]

A very arboreal view today. It’s a little misleading, since if you look left from here you’ll find a not unbusy street. Still, it would be churlish to complain about a bit of green in one’s window.

I’m in the area for an event tomorrow in which I am in conversation with Brandon Sanderson, prior to him spending time at JordanCon, and me at the LA Times Festival of Books (which will not be in the Atlanta area, but in Los Angeles). Our event is already sold out, so if you missed getting tickets, I’m sorry. Perhaps there will be an audio or video recording of it at some point.

And what about today? Well, I have a hotel room to myself and no one expecting anything of me until tomorrow afternoon around this time. I think I’ll take a nap and then see where the day takes me.

— JS

Sketch Swap Complete! [Penny Arcade]

I recently finished my side of a “sketch swap” with the incredible Bob Q after he had already completed his end. Bob and I have very different styles and I think he would agree that it was a difficult task trying to ink and color over the other person’s sketch. From his report, it sounds like he was probably more outside his comfort zone than I was. I grew up wanting to be a comic book artist like Bob is, but I was only good enough to be a cartoonist. So here are the results of our Acquisitions Inc. sketch swap!


20:35

19:49

Why is there a long delay between a thread exiting and the Wait­For­Single­Object returning? [The Old New Thing]

A customer reported that they were using the Wait­For­Single­Object function to wait for a thread to exit, but they found that even though the thread had exited, the Wait­For­Single­Object call did not return for over a minute. What could explain this delay in reporting the end of a thread? Can we do something to speed it up?

My psychic powers tell me that the thread didn’t actually exit.

What the customer is observing is probably that their thread procedure has returned, signaling the end of the thread. But a lot of stuff happens after the thread procedure exits. The system needs to send DLL_THREAD_DETACH notifications to all of the DLLs (unless the DLL has opted out via Disable­Thread­Library­Calls), and doing so requires the loader lock.

I would use the debugger to look for the thread you thought had exited and see what it’s doing. It might be blocked waiting for the loader lock because some other thread is hogging it. Or it could be running a DLL’s detach code, and that detach code has gotten stuck on a long-running operation.

I suspect it’s the latter: One of the DLLs is waiting for something in its detach code, and that something takes about a minute.

We didn’t hear back from the customer, which could mean that this was indeed the problem. Or it could mean that this didn’t help, but they decided that we weren’t being helpful and didn’t pursue the matter further. Unfortunately, in a lot of these customer debugging engagements, we never hear back whether our theory worked. (Another possibility is that the customer wrote back with a “thank you”, but the customer liaison didn’t forward it to the engineering team because they didn’t want to bother them any further.)


19:39

FSF clarifies its stance on AGPLv3 additional terms [LWN.net]

OnlyOffice CEO Lev Bannov has recently claimed that the Euro-Office fork of the OnlyOffice suite violates the GNU Affero General Public License version 3 (AGPLv3). Krzysztof Siewicz of the Free Software Foundation (FSF) has published an article on the FSF's position on adding terms to the AGPLv3. In short, Siewicz concludes that OnlyOffice has added restrictions to the license that are not compatible with the AGPLv3, and those restrictions can be removed by recipients of the code.

We urge OnlyOffice to clarify the situation by making it unambiguous that OnlyOffice is licensed under the AGPLv3, and that users who already received copies of the software are allowed to remove any further restrictions. Additionally, if they intend to continue to use the AGPLv3 for future releases, they should state clearly that the program is licensed under the AGPLv3 and make sure they remove any further restrictions from their program documentation and source code. Confusing users by attaching further restrictions to any of the FSF's family of GNU General Public Licenses is not in line with free software.

19:13

Preparatory School [Penny Arcade]

I have now gotten him to talk about it twice when he wanted to talk about his colon zero or even fewer times. This accrues not to my capacity for manipulation, but rather to the fact that he has been so harrowed body and mind by the experience he is now essentially just a sausage casing with a t-shirt draped over it. I was able to extract more data in my most recent interrogation and what I learned will shock you.

18:49

You cannot use the GNU (A)GPL to take software freedom away [Planet GNU]

Protecting the integrity of the (A)GPL is an essential component in protecting user freedom.

18:07

The Big Idea: A.Z. Rozkillis [Whatever]

When there’s a million and one paths in front of you, how do you know which decision to make? What if you don’t even have control over which one you end up on? Author A. Z. Rozkillis explores the idea of every decision we make, or don’t make, sending us on different paths throughout multiple realities. Journey on through the Big Idea for her newest novel, Fractal Terminus.

A. Z. ROZKILLIS:

In an infinite universe there are infinite possibilities. It’s a concept that has enamored me for decades, led me into a career focused on space exploration, and fueled my endless love of science fiction. And that is probably why it is the Big Idea behind Fractal Terminus.

When I intentionally ended my first book, Space Station X, on a cliffhanger, I never truly intended to write a sequel. I liked the idea of leaving the speculation up to the reader about what could possibly happen after an event like that. More to the point, I didn’t think I deserved to be the person to establish, canonically, what the future would hold for my main characters. But nature abhors a vacuum, and the same could be said for the space between my ears. So, I figured if I didn’t want to write one follow-on outcome, and if I preferred the idea that any possibility could be canon, then why not write a book where I do just that? Where I explore numerous possible outcomes from one, singularly massive event.

Fractal Terminus really digs down into the idea that with every flip of a coin, with every path chosen, with every outcome realized, there exists a separate universe (or infinite separate universes) in which an alternate outcome occurs. I know it’s not a new idea; it’s just one I have felt, personally, immensely drawn to. The universe is so unfathomably endless, with no way for us to truly understand how vast it is. I feel that it is entirely plausible that somewhere, at the far reaches, there exists a reality in which I chose to study animal husbandry and not aerospace engineering. Or maybe I decided to eat that questionable leftover sushi rather than pitching it when I found it at the back of the fridge. Who knows? If the universe has no limit, then maybe every single possible reality is just wrapped around us.

For my characters, their personal universe is expanding too. My first book had a very narrow focus by design, because I had a main character who had reduced her whole universe down to the same five concentric metal rings of her space station. Jax refused to consider possibilities outside of that limited existence until she was forced to. Then she swallowed her pride and took the leap of faith on her feelings for Saunders. It could have gone either way, but canonically it worked out for Jax. Then they took a different plunge. Now Jax and Saunders are suddenly flung into a situation where they have to expand their view, because new experiences have that habit of broadening your perspective. This Space Station is no longer a cramped, desolate and lonely existence, but a cramped, desolate and overcrowded experience, where Jax has to dust off her social skills and mingle in order to survive. And as she lets her universe expand around her to include the souls locked in fate alongside her, infinitely more universes of opportunity unfurl.

Some of these are fates she realizes she can control. She can see where her actions can lead her and Saunders and she can tell when it might not be the best path. But more often than not, Jax and Saunders are at the mercy of the universe itself. Nature is a cold and uncaring master, and sometimes the coin flip is not even remotely something anyone can control.

We face these moments every day. Will this person I am talking to be an ally? Will they be my demise? Will I regret this interaction or not? Is there, even remotely, anything I could have done to change this outcome? There isn’t really a way for anyone to know, so you might as well take the chance. As the universe is expanding rapidly on a macro scale, we are, all of us, every day, making small decisions that expand our microcosm just as rapidly. Jax and Saunders expand their view on life to include the lives around them, while the universe expands to encompass every possible, even far-fetched idea of an outcome that could ever be considered. And that’s the big idea. The universe can send you toward an infinite number of outcomes, and you’ll never know which one you are in. So you are just going to have to take it on faith that you are on the right track.


Fractal Terminus: Barnes & Noble|Bookshop|Space Wizards

Author socials: Website|Bluesky|Instagram

17:28

Paul Tagliamonte: designing arf, an sdr iq encoding format 🐶 [Planet Debian]

Interested in future updates? Follow me on mastodon at @paul@soylent.green. Posts about hz.tools will be tagged #hztools.

🐶 Want to jump right to the draft? I'll be maintaining ARF going forward at /draft-tagliamonte-arf-00.txt.

It’s true – processing data from software defined radios can be a bit complex 👈😏👈 – which tends to keep all but the most grizzled experts and bravest souls from playing with it. While I wouldn’t describe myself as either, I will say that I’ve stuck with it for longer than most would have expected of me. One of the biggest takeaways I have from my adventures with software defined radio is that there’s a lot of cool crossover opportunity between RF and nearly every other field of engineering.

Fairly early on, I decided on a very light metadata scheme to track SDR captures, called rfcap. rfcap has withstood my test of time, and I can go back to even my earliest captures and still make sense of what they are – IQ format, capture frequencies, sample rates, etc. A huge part of this was the simplicity of the scheme (fixed-length header, byte-aligned to supported capture formats), which made it roughly as easy to work with as a raw file of IQ samples.

However, rfcap has a number of downsides. It’s only a single, fixed-length header. If the frequency of operation changed during the capture, that change is not represented in the capture information. It’s not possible to easily represent multi-channel coherent IQ streams, and additional metadata is condemned to adjacent text files.

ARF (Archive of RF)

A few years ago, I needed to finally solve some of these shortcomings and tried to see if a new format would stick. I sat down and wrote out my design goals before I started figuring out what it looked like.

First, whatever I come up with must be capable of being streamed and processed while being streamed. This includes streaming across the network or merely written to disk as it’s being created. No post-processing required. This is mostly an artifact of how I’ve built all my tools and how I interact with my SDRs. I use them extensively over the network (both locally, as well as remotely by friends across my wider LAN). This decision sometimes even prompts me to do some crazy things from time to time.

I need actual, real support for multiple IQ channels from my multi-channel SDRs (Ettus, KerberosSDR/KrakenSDR, etc) for playing with things like beamforming. My new format must be capable of storing multiple streams in a single capture file, rather than a pile of files in a directory (and hope they’re aligned).

Finally, metadata must be capable of being stored in-band. The initial set of metadata I needed to formalize in-stream were Frequency Changes and Discontinuities. Since then, ARF has grown a few more.

After getting all that down, I opted to start at what I thought the simplest container would look like, TLV (tag-length-value) encoded packets. This is a fairly well trodden path, and used by a bunch of existing protocols we all know and love. Each ARF file (or stream) was a set of encoded “packets” (sometimes called data units in other specs). This means that unknown packet types may be skipped (since the length is included) and additional data can be added after the existing fields without breaking existing decoders.

tag
flags
length
value
Heads up! Once this is posted, I'm not super likely to update this page. Once this goes out, the latest stable copy of the ARF spec is maintained at draft-tagliamonte-arf-00.txt. This page may quickly become out of date, so if you're actually interested in implementing this, I've put a lot of effort into making the draft comprehensive, and I plan to maintain it as I edit the format.

Unlike a “traditional” TLV structure, I opted to add “flags” to the top-level packet. This gives me a bit of wiggle room down the line, and gives me a feature that I like from ASN.1 – a “critical” bit. The critical bit indicates that the packet must be understood fully by implementers, which allows future backward incompatible changes by marking a new packet type as critical. This would only really be done if something meaningfully changed the interpretation of the backwards compatible data to follow.

Flag Description
0x01 Critical (tag must be understood)

Within each Packet is a tag field. This tag indicates how the contents of the value field should be interpreted.

Tag ID Description
0x01 Header
0x02 Stream Header
0x03 Samples
0x04 Frequency Change
0x05 Timing
0x06 Discontinuity
0x07 Location
0xFE Vendor Extension

In order to help with checking the basic parsing and encoding of this format, the following is an example packet which should parse without error.

 00, // tag (0; no packet type is assigned 0 yet)
 00, // flags (0; no flags)
 00, 00 // length (0; no data)
 // data would go here, but there is none
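As a concreteness check, here is a minimal sketch of that TLV framing in Python. The field widths (1-byte tag, 1-byte flags, 2-byte length) are read off the example packet above; big-endian byte order for multi-byte fields is an inference from the draft's other examples, and the function names are mine, not part of the spec.

```python
import struct

CRITICAL = 0x01  # flag bit: the tag must be understood by the reader

def encode_packet(tag: int, flags: int, value: bytes) -> bytes:
    """Frame one TLV packet: tag (u8), flags (u8), length (u16 BE), value."""
    return struct.pack(">BBH", tag, flags, len(value)) + value

def decode_packet(buf: bytes, offset: int = 0):
    """Parse one packet; returns (tag, flags, value, next_offset)."""
    tag, flags, length = struct.unpack_from(">BBH", buf, offset)
    value = buf[offset + 4:offset + 4 + length]
    if len(value) != length:
        raise ValueError("truncated packet")
    return tag, flags, value, offset + 4 + length
```

Because the length is always present, a reader that hits an unknown tag can skip ahead `length` bytes and keep going, unless the critical flag is set, in which case it should refuse to proceed.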

Additionally, throughout the rest of the subpackets, there are a few unique and shared datatypes. I document them all more clearly in the draft, but to quickly run through them here too:

UUID

This field represents a globally unique identifier, as defined by RFC 9562, as 16 raw bytes.

Frequency

Data encoded in a Frequency field is stored as microhertz (1 Hz is stored as 1000000, 2 Hz as 2000000) in an unsigned 64-bit integer. This gives a minimum value of 0 Hz, and a maximum value of 18446744073709551615 uHz, or just above 18.4 THz. This is a bit of a tradeoff, but it’s a set of issues I would gladly contend with rather than the issues that come with storing frequency data as a floating point value downstream. Not a huge factor, but as an aside, this is also how my current generation SDR processing code (sparky) stores Frequency data internally, which makes conversion between the two natural.
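In code, the conversion is just a fixed scale factor. A sketch (the helper names are mine, not from the draft):

```python
UHZ_PER_HZ = 1_000_000     # the draft stores frequency as micro-hertz
U64_MAX = 2**64 - 1        # ceiling of the on-disk unsigned 64-bit field

def hz_to_uhz(hz: float) -> int:
    """Scale a frequency in Hz to the on-disk µHz integer."""
    uhz = round(hz * UHZ_PER_HZ)
    if not 0 <= uhz <= U64_MAX:
        raise ValueError("frequency out of range for a u64 µHz field")
    return uhz

def uhz_to_hz(uhz: int) -> float:
    """Scale an on-disk µHz integer back to Hz."""
    return uhz / UHZ_PER_HZ
```

100 MHz becomes 100,000,000,000,000 µHz, and the u64 ceiling divides down to just over 18.4 THz, matching the figure above.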

IQ samples

ARF supports IQ samples in a number of different formats. Part of the idea here is that I want it to be easy for capturing programs to encode ARF for a specific radio without mandating a single IQ format representation. For IQ types with a scalar value that takes more than a single byte, the format is always paired with a Byte Order field, to indicate if the IQ scalar values are little or big endian.

ID Name Description
0x01 f32 interleaved 32 bit floating point scalar values
0x02 i8 interleaved 8 bit signed integer scalar values
0x03 i16 interleaved 16 bit signed integer scalar values
0x04 u8 interleaved 8 bit unsigned integer scalar values
0x05 f64 interleaved 64 bit floating point scalar values
0x06 f16 interleaved 16 bit floating point scalar values
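To make "interleaved scalar values" concrete, here is a small decoder sketch mapping the table's format IDs onto Python `struct` codes (the function name and dict are mine, not part of the draft):

```python
import struct

# Format IDs from the table above -> struct format characters.
IQ_FORMATS = {0x01: "f", 0x02: "b", 0x03: "h", 0x04: "B", 0x05: "d", 0x06: "e"}

def decode_iq(raw: bytes, fmt_id: int, little_endian: bool = True) -> list[complex]:
    """Turn interleaved I/Q scalars (I0, Q0, I1, Q1, ...) into complex samples."""
    code = IQ_FORMATS[fmt_id]
    n = len(raw) // struct.calcsize(code)
    if n % 2:
        raise ValueError("odd number of scalars; expected I/Q pairs")
    scalars = struct.unpack(("<" if little_endian else ">") + str(n) + code, raw)
    return [complex(i, q) for i, q in zip(scalars[0::2], scalars[1::2])]
```

The byte-order argument only matters for the multi-byte formats, which is exactly why the single-byte i8/u8 formats don't carry a Byte Order field in the stream header.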

Each ARF file must start with a specific Header packet. The header contains information about the ARF stream writ large to follow. Header packets are always marked as “critical”.

magic
flags
start
guid
site guid
num streams

In order to help with checking the basic parsing and encoding of this format, the following is an example header subpacket (when encoded or decoded this will be found inside an ARF packet as described above) which should parse without error, with known values.

00, 00, 00, fa, de, dc, ab, 1e, // magic
00, 00, 00, 00, 00, 00, 00, 00, // flags
18, 27, a6, c0, b5, 3b, 06, 07, // start time (1740543127)

// guid (fb47f2f0-957f-4545-94b3-75bc4018dd4b)
fb, 47, f2, f0, 95, 7f, 45, 45,
94, b3, 75, bc, 40, 18, dd, 4b,

// site_id (ba07c5ce-352b-4b20-a8ac-782628e805ca)
ba, 07, c5, ce, 35, 2b, 4b, 20,
a8, ac, 78, 26, 28, e8, 05, ca
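Those header bytes decode mechanically. A parsing sketch: the big-endian fields and the nanosecond interpretation of the start timestamp are inferences from the byte values above, and the num-streams field from the field list isn't present in this excerpt, so the sketch stops at site_id.

```python
import struct
import uuid

# The example header bytes from the text, as one buffer.
HEADER = bytes.fromhex(
    "000000fadedcab1e"                  # magic
    "0000000000000000"                  # flags
    "1827a6c0b53b0607"                  # start time
    "fb47f2f0957f454594b375bc4018dd4b"  # guid
    "ba07c5ce352b4b20a8ac782628e805ca"  # site_id
)

def parse_header(buf: bytes) -> dict:
    """Parse the fixed fields shown in the example excerpt above."""
    magic, flags, start_ns = struct.unpack_from(">QQQ", buf, 0)
    return {
        "magic": magic,
        "flags": flags,
        "start_ns": start_ns,  # reading start as nanoseconds since the epoch
        "guid": uuid.UUID(bytes=buf[24:40]),
        "site_id": uuid.UUID(bytes=buf[40:56]),
    }
```

Read this way, the start field divides down to the 1740543127 epoch-seconds noted in the comment above, which is what suggests a nanosecond-resolution timestamp.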

Stream Header

Immediately after the ARF Header, some number of Stream Headers follow. There must be exactly as many Stream Header packets as indicated by the num streams field of the Header. This has the nice effect of enabling clients to read all the stream headers without requiring buffering of “unread” packets from the stream.

id
flags
fmt
bo
rate
freq
guid
site

In order to help with checking the basic parsing and encoding of this format, the following is an example stream header subpacket (when encoded or decoded this will be found inside an ARF packet as described above) which should parse without error, with known values.

00, 01, // id (1)
00, 00, 00, 00, 00, 00, 00, 00, // flags
01, // format (float32)
01, // byte order (Little Endian)
00, 00, 01, d1, a9, 4a, 20, 00, // rate (2 MHz)
00, 00, 5a, f3, 10, 7a, 40, 00, // frequency (100 MHz)

// guid (7b98019d-694e-417a-8f18-167e2052be4d)
7b, 98, 01, 9d, 69, 4e, 41, 7a,
8f, 18, 16, 7e, 20, 52, be, 4d,

// site_id (98c98dc7-c3c6-47fe-bc05-05fb37b2e0db)
98, c9, 8d, c7, c3, c6, 47, fe,
bc, 05, 05, fb, 37, b2, e0, db,
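The same exercise works for the stream header example. The field widths below are read off the example bytes, the big-endian ordering is inferred from the annotated rate and frequency values, and the dict keys are my own names:

```python
import struct
import uuid

# The example stream header bytes from the text, as one buffer.
STREAM_HEADER = bytes.fromhex(
    "0001"                              # id
    "0000000000000000"                  # flags
    "01"                                # format (f32)
    "01"                                # byte order (little endian)
    "000001d1a94a2000"                  # rate, µHz
    "00005af3107a4000"                  # frequency, µHz
    "7b98019d694e417a8f18167e2052be4d"  # guid
    "98c98dc7c3c647febc0505fb37b2e0db"  # site_id
)

def parse_stream_header(buf: bytes) -> dict:
    """Parse the stream header fields shown in the example above."""
    stream_id, flags = struct.unpack_from(">HQ", buf, 0)
    fmt, byte_order = buf[10], buf[11]
    rate_uhz, freq_uhz = struct.unpack_from(">QQ", buf, 12)
    return {
        "id": stream_id,
        "flags": flags,
        "format": fmt,
        "byte_order": byte_order,
        "rate_hz": rate_uhz / 1_000_000,  # µHz -> Hz
        "freq_hz": freq_uhz / 1_000_000,
        "guid": uuid.UUID(bytes=buf[28:44]),
        "site_id": uuid.UUID(bytes=buf[44:60]),
    }
```

Decoded this way, the rate and frequency fields come out to the 2 MHz and 100 MHz annotated in the example, which is a decent sanity check on the µHz-as-u64 encoding.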

Samples

Block of IQ samples in the format indicated by this stream’s format and byte_order field sent in the related Stream Header.

id
iq samples

In order to help with checking the basic parsing and encoding of this format, the following is a samples subpacket (when encoded or decoded this will be found inside an ARF packet as described above). The IQ values here are notional (and are either two 8-bit samples or one 16-bit sample, depending on what the related Stream Header said).

01, // id
ab, cd, ab, cd, // iq samples

Frequency Change

The center frequency of the IQ stream has changed since the Stream Header or last Frequency Change has been sent. This is useful to capture IQ streams that are jumping around in frequency during the duration of the capture, rather than starting and stopping them.

id
frequency

In order to help with checking the basic parsing and encoding of this format, the following is a frequency change subpacket (when encoded or decoded this will be found inside an ARF packet as described above).

01, // id
00, 00, b5, e6, 20, f4, 80, 00 // frequency (200 MHz)
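The same decoding assumptions as the Stream Header sketch (big-endian framing, micro-hertz units for frequency) also make this vector come out to exactly the 200 MHz in the comment:

```python
import struct

# Frequency Change test vector from above: a u8 stream id followed
# by a u64 frequency (assumed big-endian, micro-hertz).
body = bytes.fromhex("01" "0000b5e620f48000")

stream_id = body[0]
(freq_uhz,) = struct.unpack_from(">Q", body, 1)

assert stream_id == 1
assert freq_uhz / 1_000_000 == 200_000_000.0  # 200 MHz
```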

Discontinuity

Since the last Samples packet for this stream, samples have been dropped or not encoded to this stream. This can be used for a stream that has dropped samples for some reason, a large gap (the radio was needed for something else), or for communicating “IQ snippets”.

id

In order to help with checking the basic parsing and encoding of this format, the following is a discontinuity subpacket (when encoded or decoded this will be found inside an ARF packet as described above).

01, // id

Location

Up-to-date location of the IQ stream as of this moment, usually from a GPS. This allows in-band geospatial information to be marked in the IQ stream. This can be used for all sorts of things (detected IQ packet snippets aligned with a time and location, or a survey of RF noise in an area).

flags
sys
lat
long
el
accuracy

The sys field indicates the geodetic system to be used for the provided latitude, longitude, and elevation fields. The full list of supported geodetic systems is currently just WGS84, but in case something meaningfully changes in the future, it’d be nice to be able to migrate forward.

Unfortunately, being a bit of a coward here, the accuracy field is a bit of a cop-out. I’d really rather it be what we see out of kinematic state estimation tools like a kalman filter, or at minimum, some sort of ellipsoid. This is neither of those - it’s a perfect sphere of error where we pick the largest error in any direction and use that. Truthfully, I can’t be bothered to model this accurately, and I don’t want to contort myself into half-assing something I know I will half-ass just because I know better.

System Description
0x01 WGS84 - World Geodetic System 1984

In order to help with checking the basic parsing and encoding of this format, the following is a location subpacket (when encoded or decoded this will be found inside an ARF packet as described above).

00, 00, 00, 00, 00, 00, 00, 00, // flags
01, // system (wgs84)
3f, f3, be, 76, c8, b4, 39, 58, // latitude (1.234)
40, 02, c2, 8f, 5c, 28, f5, c3, // longitude (2.345)
40, 59, 00, 00, 00, 00, 00, 00, // elevation (100)
40, 24, 00, 00, 00, 00, 00, 00 // accuracy (10)
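The location vector above decodes cleanly if the four positional fields are taken as big-endian IEEE-754 float64 values, which is an assumption inferred from the vector itself:

```python
import struct

# Location test vector from above: u64 flags, u8 geodetic system,
# then four big-endian IEEE-754 float64 fields.
body = bytes.fromhex(
    "0000000000000000"  # flags
    "01"                # system (wgs84)
    "3ff3be76c8b43958"  # latitude
    "4002c28f5c28f5c3"  # longitude
    "4059000000000000"  # elevation
    "4024000000000000"  # accuracy
)

flags, system, lat, lon, el, acc = struct.unpack(">QBdddd", body)
assert system == 0x01               # WGS84
assert round(lat, 3) == 1.234
assert round(lon, 3) == 2.345
assert el == 100.0 and acc == 10.0
```

(The rounding is only there because 1.234 and 2.345 are not exactly representable as binary doubles; the stored bytes are the nearest float64 values.)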

Vendor Extension

In addition to the fields I put in the spec, I expect that I may need custom packet types I can’t think of now. There’s all sorts of useful data that could be encoded into the stream, so I’d rather there be an officially sanctioned mechanism that allows future work on the spec without constraining myself.

Just as an example, I’ve used a custom subpacket to create test vectors: the data is encoded into a Vendor Extension, followed by the IQ for the modulated packet. If the demodulated data and the in-band original data don’t match, we’ve regressed. You could imagine in-band speech-to-text, antenna rotator azimuth information, or demodulated digital sideband data (like FM HDR data) too. Or things I can’t even think of!

id
data

In order to help with checking the basic parsing and encoding of this format, the following is a vendor extension subpacket (when encoded or decoded this will be found inside an ARF packet as described above).

// extension id (b24305f6-ff73-4b7a-ae99-7a6b37a5d5cd)
b2, 43, 05, f6, ff, 73, 4b, 7a,
ae, 99, 7a, 6b, 37, a5, d5, cd,

// data (0x01, 0x02, 0x03, 0x04, 0x05)
01, 02, 03, 04, 05

Tradeoffs

The biggest tradeoff, and the one I’m not entirely happy with, is limiting the length of a packet to a u16 – 65,535 bytes. Given the u8 Samples header, this limits us to 8,191 32-bit sample pairs at a time. I wound up believing that the overhead of the additional packet framing is worth it – always encoding 4-byte lengths felt like overkill, and a dynamic length scheme ballooned codepaths in the decoder, which I was trying to keep as easy to change as possible while I worked with the format.
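The arithmetic behind that per-packet sample limit is simple enough to check directly (assuming, as the Samples test vector suggests, that the subpacket spends one byte on its u8 stream id and an IQ pair of float32s costs 8 bytes):

```python
# Worst-case payload math for the u16 length limit: one byte of the
# 65,535-byte packet goes to the u8 stream id, and each 32-bit IQ pair
# costs 8 bytes (4 bytes I + 4 bytes Q).
MAX_PACKET = 65_535
SAMPLES_OVERHEAD = 1  # u8 stream id

max_pairs = (MAX_PACKET - SAMPLES_OVERHEAD) // 8
assert max_pairs == 8_191  # matches the figure in the text
```

Whether the u16 length covers the subpacket id byte is an assumption here; if it doesn’t, the count shifts by at most one pair.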

16:42

Haiku on ARM64 boots to desktop in QEMU [OSnews]

Another Haiku monthly activity report, but this time around there’s actually a big-ticket item. Haiku has been in a pretty solid and stable state for a while now, so the activity reports have been dominated by fairly small, obscure changes, but during March a major milestone was reached for the ARM64 port.

smrobtzz contributed the bulk of the work, including fixes for building on macOS on ARM64, drivers for the Apple S5L UART, fixes to the kernel base address, clearing the frame pointer before entering the kernel, mapping physical memory correctly, the basics for userland, and more. SED4906 contributed some fixes to the bootloader page mapping, and runtime_loader’s page-size checks.

Combined, these changes allow the ARM64 port to get to the desktop in QEMU. There’s a forum thread, complete with screenshots, for anyone interested in following along.

↫ waddlesplash

While it’s only in QEMU, this is still a major achievement, and it paves the way for more people to work on the ARM64 port, possibly improving its health. There are tons of smaller changes and fixes all over the place, too, as usual, and the team mentions beta 6 still isn’t quite ready. Don’t let that stop you from just downloading the latest nightly, though – Haiku is mature enough to use.

16:07

Link [Scripting News]

Today the work with Claude is much better, though when we got started it was even worse than yesterday. The key to getting on track was to figure out why it worked so well in previous projects and fell apart with this one. In each of the others, I passed off an existing project for it to convert or build on. This time we started with something it had created without a "starter." So I took all the random bits we had and organized them into the opmlProjectEditor format we had specified back in early March. It's how all my projects since 2013 are organized, so it's a good fit for me, and also for Claude. So now I'm going to pass back a package that's ready to be worked on collaboratively. The other thing is I switched to the Opus 4.6 model from Sonnet 4.6. So I've made it to 11AM and feel like I've already accomplished something today. The problem was yesterday we were spinning our wheels, and that doesn't work for me. I'm a very directed developer. ;-)

15:07

[$] Forking Vim to avoid LLM-generated code [LWN.net]

Many people dislike the proliferation of Large Language Models (LLMs) in recent years, and so make an understandable attempt to avoid them. That may not be possible in general, but there are two new forks of Vim that seek to provide an editing environment with no LLM-generated code. EVi focuses on being a modern Vim without LLM-assisted contributions, while Vim Classic focuses on providing a long-term maintenance version of Vim 8. While both are still in their early phases, the projects look to be on track to provide stable alternatives — as long as enough people are interested.

Dirk Eddelbuettel: qlcal 0.1.1 on CRAN: Calendar Updates [Planet Debian]

The nineteenth release of the qlcal package arrived at CRAN just now, and has already been built for r2u. This version synchronises with QuantLib 1.42 released this week.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists), and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This release updates the 2025 holidays for China, Singapore, and Taiwan.

The full details from NEWS.Rd follow.

Changes in version 0.1.1 (2026-04-15)

  • Synchronized with QuantLib 1.42 released two days ago

  • Calendar updates for China, Singapore, Taiwan

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

Fixing a 20-year-old bug in Enlightenment E16 [OSnews]

The editor in chief of this blog was born in 2004. She uses the 1997 window manager, Enlightenment E16, daily. In this article, I describe the process of fixing a show-stopping, rare bug that dates back to 2006 in the codebase. Surprisingly, the issue has roots in a faulty implementation of Newton’s algorithm.

↫ Kamila Szewczyk

I’m not going to pretend to understand any of this, but I know you people do. Enjoy.

Let sleeping CPUs lie — S0ix [OSnews]

Modern laptops promise a kind of magic. Shut the lid or press the sleep button, toss it in a backpack, and hours, days, or weeks later, it should wake up as if nothing happened with little to no battery drain. This sounds like a fairly trivial operation — y’know, you’re literally just asking for the computer to do nothing — but in that quiet moment when the fans whir down, the screen turns dark, and your reflection stares back at you, your computer and all its little components are actually hard at work doing their bedtime routine.

↫ Aymeric Wibo at the FreeBSD Foundation

A look at how suspend and resume works in practice, from the perspective of FreeBSD. Considering FreeBSD’s laptop focus in recent times, not an unimportant subject.

14:35

AI Is Writing Our Code Faster Than We Can Verify It [Radar]

This is the third article in a series on agentic engineering and AI-driven development. Read part one here, part two here, part three here, and look for the next article on April 23 on O’Reilly Radar.

Here’s the dirty secret of the AI coding revolution: most experienced developers still don’t really trust the code the AI writes for us.

If I’m being honest, that’s not actually a particularly well-guarded secret. It feels like every day there’s a new breathless “I don’t have a lick of development experience but I just vibe coded this amazing application” article. And I get it—articles like that get so much engagement because everyone is watching carefully as the drama of AIs getting better and better at writing code unfolds. We’ve had decades of shows and movies, from WarGames to Hackers to Mr. Robot, portraying developers as reclusive geniuses doing mysterious but incredible stuff with computers. The idea that we’ve coded ourselves out of existence is fascinating to people.

The flip side of that pop-culture phenomenon is that when there are problems caused by agentic engineering gone wrong (like the equally popular “I trusted an AI agent and it deleted my entire production database” articles), everyone seems to find out about it. And, unfortunately, that newly emerging trope is much closer to reality. Most of us who do agentic engineering have seen our own AI-generated code go off the rails. That’s why I built and maintain the Quality Playbook, an open-source AI skill that uses quality engineering techniques that go back over fifty years to help developers working in any language verify the quality of their AI-generated code. I was as surprised as anyone to discover that it actually works.

I’ve talked often about how we need a “trust but verify” mindset when using AI to write code. In the past, I’ve mostly focused on the “trust” aspect, finding ways to help developers feel more comfortable adopting AI coding tools and using them for production work. But I’m increasingly convinced that our biggest problem with AI-driven development is that we don’t have a reliable way to check the quality of code from agentic engineering at scale. AI is writing our code faster than we can verify it, and that is one of AI’s biggest problems right now.

A false choice

After I got my first real taste of using AI for development in a professional setting, it felt like I was being asked to make a critical choice: either I had to outsource all of my thinking to the AI and just trust it to build whatever code I needed, or I had to review every single file it generated line by line.

A lot of really good, really experienced senior engineers I’ve talked to feel the same way. A small number of experienced developers fully embrace vibe coding and basically fire off the AI to do what it needs to, depending on a combination of unit tests and solid, decoupled architecture (and a little luck, maybe) to make sure things go well. But more frequently, the senior, experienced engineers I’ve talked to, folks who’ve been developing for a really long time, go the other way. When I ask them if they’re using AI every day, they’ll almost always say something like, “Yeah, I use AI for unit tests and code reviews.” That’s almost always a tell that they don’t trust the AI to build the really important code that’s at the core of the application. They’re using AI for things that won’t cause production bugs if they go wrong.

I think this excerpt from a recent (and excellent) article in Ars Technica, “Cognitive surrender” leads AI users to abandon logical thinking, sums up how many experienced developers feel about working with AI:

When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

I agree that those are two options for dealing with AI. But I also believe that’s a false choice. “Cognitive surrender,” as the research referenced by the article puts it, is not a good outcome. But neither is reviewing every line of code the AI writes, because that’s so effort-intensive that we may as well just write it all ourselves. (And I can almost hear some of you asking, “What’s so bad about that?”)

This false choice is what really drives a lot of really good, very experienced senior engineers away from AI-driven development today. We see those two options, and they are both terrible. And that’s why I’m writing this article (and the next few in this Radar series) about quality.

Some shocking numbers about AI coding tools

The Quality Playbook is an open-source skill for AI coding tools like GitHub Copilot, Cursor, Claude Code, and Windsurf. You point it at a codebase, and it generates a complete quality engineering infrastructure for that project: test plans traced to requirements, code review protocols, integration tests, and more. More importantly, it brings back quality engineering practices that much of the industry abandoned decades ago, using AI to do a lot of the quality-related work that used to require a dedicated team.

I built the Quality Playbook as part of an experiment in AI-driven development and agentic engineering, building an open-source project called Octobatch and writing about the process in this ongoing Radar series. The playbook emerged directly from that experiment. The ideas behind it are over fifty years old, and they work.

Along the way, I ran into a shocking statistic.

We already know that many (most?) developers these days use AI coding tools like GitHub Copilot, Claude Code, Gemini, ChatGPT, and Cursor to write production code. But do we trust the code those tools generate? “Trust in these systems has collapsed to just 33%, a sharp decline from over 70% in 2023.”

That quote is from a Gemini Deep Research report I generated while doing research for this article. 70% dropping to 33%—that sounds like a massive collapse, right?

The thing is, when I checked the sources Gemini referenced, the truth wasn’t nearly as clear-cut. That “over 70% in 2023” number came from a Stack Overflow survey measuring how favorably developers view AI tools. The “33%” number came from a Qodo survey asking whether developers trust the accuracy of AI-generated code. Gemini grabbed both numbers, stripped the context, and stitched them into a single decline narrative. No single study ever measured trust dropping from over 70% to 33%. Which means we’ve got an apples-to-oranges comparison, and it might even technically be accurate (sort of?), but it’s not really the headline-grabber that it seemed to be.

So why am I telling you about it?

Because there are two important lessons from that “shocking” stat. The first is that the overall idea rings true, at least for me. Almost all of us have had the experience of generating code with AI faster than we can verify it, and we ship features before we fully review them.

The second is that when Gemini created the report, the AI fabricated the most alarming version of the story from real but unrelated data points. If I’d just cited it without checking the sources, there’s a pretty good chance it would get published, and you might even believe it. That’s ironically self-referential, because it’s literally the trust problem the survey is supposedly measuring. The AI produced something that looked authoritative, felt correct, and was wrong in ways that only careful verification could catch. If you want to understand why over 70% of developers don’t fully trust AI-generated code, you just watched it happen.

One reason many of us don’t trust AI-generated code is that there’s a growing gap between how fast AI can generate code and how well we can verify that the code actually does what we intended. The usual response to this verification gap is to adopt better testing tools. And there are plenty of them: test stub generators, diff reviewers, spec-first frameworks. These are useful, and they solve real problems. But they generally share a blind spot: they work with what the code does, not with what it’s supposed to do. Luckily, the intent is sitting right there: in the specs, the schemas, the defensive code, the history of the AI chats about the project, even the variable names and filenames. We just need a way to use it.

AI-driven development needs its own quality practices, and the discipline we need already exists. It was just (unfairly) considered too expensive to use… until AI made it cheap.

(Re-)introducing quality engineering

There’s a difference between knowing that code works and knowing that it does what it’s supposed to do. It’s the difference between “does this function return the right value?” and “does this system fulfill its purpose?”—and as it turns out, that’s one of the oldest problems in software engineering. In fact, as I talked about in a previous Radar article, Prompt Engineering Is Requirements Engineering, it was the source of the original “software crisis.”

The software crisis was the term people used across our industry back in the 1960s, when they were coming to grips with large software projects around the world that were routinely delivered late, over budget, and delivering software that didn’t do what it was supposed to do. At the 1968 NATO Software Engineering Conference—the conference that introduced the term “software engineering”—some of the top experts in the industry argued that the crisis stemmed from developers and their stakeholders having trouble understanding the problems they were solving, communicating those needs clearly, and making sure that the systems they delivered actually met their users’ needs. Nearly two decades later, Fred Brooks made the same argument in his pioneering essay, No Silver Bullet: no tool can, on its own, eliminate the inherent difficulty of understanding what needs to be built and communicating that intent clearly. And now that we talk to our AI development tools the same way we talk to our teammates, we’re more susceptible than ever to that underlying problem of communication and shared understanding.

An important part of the industry’s response to the software crisis was quality engineering, a discipline built specifically to close the gap between intent and implementation by defining what “correct” means up front, tracing tests back to requirements, and verifying that the delivered system actually does what it’s supposed to do. For years it was standard practice for software engineering teams to include quality engineering phases in all projects. But few teams today do traditional quality engineering. Understanding why it got left behind by so many of us and, more importantly, what it can do for us now can make a huge difference for agentic engineering and AI-driven development today.

Starting in the 1950s, three thinkers built the intellectual foundation that manufacturing used to become dramatically more reliable.

  • W. Edwards Deming argued that quality is built into the process, not inspected in after the fact. He taught us that you don’t test your way to a good product; you design the system that produces it.
  • Joseph Juran defined quality as fitness for use: not just “does it work?” but “does it do what it’s supposed to do, under real conditions, for the people who actually use it?”
  • Philip Crosby made the business case: quality is free, because building it in costs less than finding and fixing defects after the fact.

By the time I joined my first professional software development team in the 1990s, these ideas were standard practice in our industry.

These ideas revolutionized software quality, and the people who put them into practice were called quality engineers. They built test plans traced to requirements, ran functional testing against specifications, and maintained living documentation that defined what “correct” meant for each part of the system.

So why did all of this disappear from most software teams? (It’s still alive in regulated industries like aerospace, medical devices, and automotive, where traceability is mandated by law, and a few brave holdouts throughout the industry.) It wasn’t because it didn’t work. Quality engineering got cut because it was perceived as expensive. Crosby was right that quality is free: the cost of building it in is far more than made up for by the savings you get from not finding and fixing defects later. But the costs come at the beginning of the project and the savings come at the end. In practice, that means when the team blows a deadline and the manager gets angry and starts looking for something to cut, the testing and QA activities are easy targets because the software already seems to be complete.

On top of the perceived expense, quality engineering required specialists. Building good requirements, designing test plans, and planning and running functional and regression testing are real, technical skills, and most teams simply didn’t have anyone (or, more specifically, the budget for anyone) who could do those jobs.

Quality engineering may have faded from our projects and teams over time, but the industry didn’t just give up on many of its best ideas. Developers are nothing if not resourceful, and we built our own quality practices—three of the most popular are test-driven development, behavior-driven development, and agile-style iteration—and these are genuinely good at what they do. TDD keeps code honest by making you write the test before the implementation. BDD was specifically designed to capture requirements in a form that developers, testers, and stakeholders can all read (though in practice, most teams strip away the stakeholder involvement and it devolves into another flavor of integration testing). Agile iteration tightens the feedback loop so you catch problems earlier.

Those newer quality practices are practical and developer-focused, and they’re less expensive to adopt than traditional quality engineering in the short run because they live inside the development cycle. The upside of those practices is that development teams can generally implement them on their own, without asking for permission or requiring experts. The tradeoff, however, is that those practices have limited scope. They verify that the code you’re writing right now works correctly, but they don’t step back and ask whether the system as a whole fulfills its original intent. Quality engineering, on the other hand, establishes the intent of the system before the development cycle even begins, and keeps it up to date and feeds it back to the team as the project progresses. That’s a huge piece of the puzzle that got lost along the way.

Those highly effective quality engineering practices got cut from most software engineering teams because they were viewed as expensive, not because they were wrong. When you’re doing AI-driven development, you’re actually running into exactly the same problem that quality engineering was built to solve. You have a “team”—your AI coding tools—and you need a structured process to make sure that team is building what you actually intend. Quality engineering is such a good fit for AI-driven development because it’s the discipline that was specifically designed to close that gap between what you ask for and what gets built.

What nobody expected is that AI would make it cheap enough in the short run to bring quality engineering back to our projects.

Introducing the Quality Playbook

I’ve long suspected that quality engineering would be a perfect fit for AI-driven development (AIDD), and I finally got a chance to test that hypothesis. As part of my experiment with AIDD and agentic engineering (which I’ve been writing about in The Accidental Orchestrator and the rest of this series), I built the Quality Playbook, a skill for AI tools like Cursor, GitHub Copilot, and Claude Code that lets you bring these highly effective quality practices to any project, using AI to do the work that used to require a dedicated quality engineering team. Like other AI skills and agents, it’s a structured document that plugs into an AI coding agent and teaches it a specific capability. You point it at a codebase, and the AI explores the code, reads whatever specifications and documentation it can find, and generates a complete quality infrastructure tailored to that project. The Quality Playbook is now part of awesome-copilot, a collection of community-contributed agents (and I’ve also opened a pull request to add it to Anthropic’s repository of Claude Code skills).

What does “quality infrastructure” actually mean? Think about what a quality engineering team would build if you hired one. A good quality engineer would start by defining what “correct” means for your project: what the system is supposed to do, grounded in your requirements, your domain, what your users actually need. From there, they’d write tests traced to those requirements, build a code review process that checks whether the code implements what it’s supposed to, design integration tests that verify the whole system works together, and set up an audit process where independent reviewers check the code against its original intent.

That’s what the playbook generates. Developers using AI tools have been rediscovering the value of requirements, and spec-driven development (SDD) has become very popular. You don’t need to be practicing strict spec-driven development to use it. The playbook infers your project’s intent from whatever artifacts are available: chat logs, schemas, README files, code comments, and even defensive code patterns. If you have formal specs, great; if not, the AI pieces together what “correct” means from the evidence it can find.

Once the playbook figures out the intent of the code, it creates quality infrastructure for the project. Specifically, it generates ten deliverables:

  • Exploration and requirements elicitation (EXPLORATION.md): Before the playbook writes anything, it spends an entire phase reading the code, documentation, specs, and schemas, and writes a structured exploration document that maps the project’s architecture and domain. The most common failure mode in AI-generated quality work is producing generic content that could apply to any project. The exploration phase forces the AI to ground everything in this specific codebase, and serves as an audit trail: if the requirements end up wrong, you can trace the problem back to what the exploration discovered or missed.
  • Testable requirements (REQUIREMENTS.md): The most important deliverable. Building on the exploration, a five-phase pipeline extracts the actual intent of the project from code, documentation, AI chats, messages, support tickets, and any other project artifacts you can give it. The result is a specification document that a new team member or AI agent can read top-to-bottom and understand the software. Each requirement is tagged with an authority tier and linked to use cases that become the connective tissue tying requirements to integration tests to bug reports.
  • Quality constitution (QUALITY.md): Defines what “correct” means for your specific project, grounded in your actual domain. Every standard has a rationale explaining why it matters, because without the rationale, a future AI session will argue the standard down.
  • Spec-traced functional tests: Tests generated from the requirements, not from source code. That difference matters: a test generated from source code verifies that the code does what the code does, while a test traced to a spec verifies that the code does what you intended.
  • Three-pass code review protocol with bug reports and regression tests: Three mandatory review passes, each using a different lens: structural review with anti-hallucination guardrails, requirement verification (where you catch things the code doesn’t do that it was supposed to), and cross-requirement consistency checking. Every confirmed bug gets a regression test and a patch file.
  • Consolidated bug report (BUGS.md): Every confirmed bug with full reproduction details, severity calibrated to real-world impact, and a spec basis citing the specific documentation the code violates. Maintainers respond differently to “your code violates section X.Y of your own spec” than to “this looks like it might be a bug.”
  • TDD red/green verification: For each confirmed bug, a regression test runs against unpatched code (must fail), then the fix is applied and the test reruns (must pass). When you tell a maintainer “here’s a test that fails on your current code and passes with this one-line fix,” that’s qualitatively different from a bug report.
  • Integration test protocol: A structured test matrix that an AI agent can pick up and execute autonomously, without asking clarifying questions. Every test specifies the exact command, what it proves, and specific pass/fail criteria. Field names and types are read from actual source files, not recalled from memory, as an anti-hallucination mechanism.
  • Council of Three multi-model spec audit: Three independent AI models audit the codebase against the requirements. The triage uses confidence weighting, not majority vote: findings from all three are near-certain, two are high-confidence, and findings from only one get a verification probe rather than being dismissed. The most valuable findings are often the ones only one model catches.
  • AGENTS.md bootstrap file: A context file that future AI sessions read first, so they inherit the full quality infrastructure. Without it, every new session starts from zero. With it, the quality constitution, requirements, and review protocols carry forward automatically across every session that touches the codebase.

The third option

I started this article by talking about a false choice: either we surrender our judgment to the AI, or we get stuck reviewing every line of code it writes. The reality is much more nuanced, and, in my opinion, a lot more interesting, if we have a trustworthy way to verify that the code we built with the AI actually does what we intended. It’s not a coincidence that this is one of the oldest problems in software engineering, and not surprising that AI can help us with it.

The Quality Playbook leans heavily on classic quality engineering techniques to do that verification. Those techniques work very well, and that gives us the more nuanced option: using AI to help us write our code, and then using it to help us trust what it built.

That’s not a gimmick or a paradox. It works because verification is exactly the kind of structured, specification-driven work that AI is good at. Writing tests traced to requirements, reviewing code against intent, checking that the system does what it’s supposed to do under real conditions. These are the things quality engineers used to do across the whole industry (and still do in the highly regulated parts of it). They’re also things that AI can do well, as long as we tell it what “correct” means.

The experienced engineers I talked about at the beginning of this article, the ones who only use AI for unit tests and code reviews, aren’t wrong to be cautious. They’re right that we can’t just trust whatever output the AI spits out. But limiting AI to just the “safe” parts of our projects keeps us from taking advantage of such an important set of tools. The way out of this quagmire is to build the infrastructure that makes the rest of it trustworthy too. Quality engineering gives us that infrastructure, and AI makes it cheap enough to actually use on all of our projects every day.

In the next few articles, I’ll show you what happened when I pointed the Quality Playbook at real, mature open-source codebases and it started finding real bugs, how the playbook emerged from my AI-driven development experiment, what the quality engineering mindset looks like in practice, and how we can learn important lessons from that experience that apply to all of our projects.

The Quality Playbook is open source and works with GitHub Copilot, Cursor, and Claude Code. It’s also available as part of awesome-copilot. You can try it out today by downloading it into your project and asking the AI to generate the quality playbook. The whole process takes about 10-15 minutes for a typical codebase. I’ll cover more details on running it in future articles in this series.

Grief and the Nonprofessional Programmer [Radar]

I can’t claim to be a professional software developer—not by a long shot. I occasionally write some Python code to analyze spreadsheets, and I occasionally hack something together on my own, usually related to prime numbers or numerical analysis. But I have to admit that I identify with both of the groups of programmers that Les Orchard identifies in “Grief and the AI Split”: those who just want to make a computer do something and those who grieve losing the satisfaction they get from writing good code.

A lot of the time, I just want to get something done; that’s particularly true when I’m grinding through a spreadsheet with sales data that has a half-million rows. (Yes, compared to databases, that’s nothing.) It’s frustrating to run into some roadblock in pandas that I can’t solve without looking through documentation, tutorials, and several incorrect Stack Overflow answers. But there’s also the programming that I do for fun—not all that often, but occasionally: writing a really big prime number sieve, seeing if I can do a million-point convex hull on my laptop in a reasonable amount of time, things like that. And that’s where the problem comes in… if there really is a problem.

The other day, I read a post of Simon Willison’s that included AI-generated animations of the major sorting algorithms. No big deal in itself; I’ve seen animated sorting algorithms before. Simon’s were different only in that they were AI-generated—but that made me want to try vibe coding an animation rather than something static. Graphing the first N terms of a Fourier series has long been one of the first things I try in a new programming language. So I asked Claude Code to generate an interactive web animation of the Fourier series. Claude did just fine. I couldn’t have created the app on my own, at least not as a single-page web app; I’ve always avoided JavaScript, for better or for worse. And that was cool, though, as with Simon’s sorting animations, there are plenty of Fourier animations online.
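The “first N terms of a Fourier series” exercise the author describes is small enough to sketch here. This is the classic square-wave series, f(x) = (4/π) Σ sin(kx)/k over odd k; the plotting step is left out so the sketch stays self-contained, and the function name is my own.

```python
import math

# Partial sums of the Fourier series for a square wave:
#   f(x) = (4/pi) * sum over odd k of sin(k*x)/k
# With enough terms, the partial sum approaches +1 on (0, pi)
# and -1 on (-pi, 0).

def square_wave_partial_sum(x, n_terms):
    """Sum the first n_terms odd harmonics of the square-wave series at x."""
    total = 0.0
    for i in range(n_terms):
        k = 2 * i + 1                  # odd harmonics only: 1, 3, 5, ...
        total += math.sin(k * x) / k
    return 4.0 / math.pi * total

# At x = pi/2 the series converges to 1; 200 terms get within about 0.003.
approx = square_wave_partial_sum(math.pi / 2, 200)
assert abs(approx - 1.0) < 0.01
```

Feeding a range of x values into this function and plotting the results (with matplotlib, or whatever the new language offers) is the whole exercise; the fun is watching the Gibbs-phenomenon ripples near the jumps.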

I then got interested in animations that aren’t so common. I grabbed Algorithms in a Nutshell, started looking through the chapters, and asked Claude to animate a number of things I hadn’t seen, ending with Dijkstra’s algorithm for finding the shortest path through a graph. It had some trouble with a few of the algorithms, though when I asked Claude to generate a plan first and used a second prompt asking it to implement the plan, everything worked.

And it was fun. I made the computer do things I wanted it to do; the thrill of controlling machines is something that sticks with us from our childhoods. The prompts were simple and short—they could have been much longer if I wanted to specify the design of the web page, but Claude’s sense of taste was good enough. I had other work to do while Claude was “thinking,” including attending some meetings, but I could easily have started several instances of Claude Code and had them create simulations in parallel. Doing so wouldn’t have required any fancy orchestration because every simulation was independent of the others. No need for Gas Town.

When I was done, I felt a version of the grief Les Orchard writes about. More specifically: I don’t really understand Dijkstra’s algorithm. I know what it does and have a vague idea of how it works, and I’m sure I could understand it if I read Algorithms in a Nutshell rather than used it as a catalog of things to animate. But now that I had the animation, I realized that I hadn’t gone through the process of understanding the algorithm well enough to write the code. And I cared about that.
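For readers who, like the author, know what Dijkstra’s algorithm does but not quite how, here is a compact textbook version. The algorithm is standard (a priority queue of tentative distances, settling the nearest unsettled node each step); the example graph and the function name are mine, not from the article.

```python
import heapq

# Textbook Dijkstra: repeatedly settle the closest unsettled node and
# relax its outgoing edges. Requires non-negative edge weights.

def dijkstra(graph, start):
    """Return the shortest distance from start to every reachable node.

    graph maps each node to a list of (neighbor, edge_weight) pairs.
    """
    dist = {start: 0}
    heap = [(0, start)]                      # (distance so far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                         # stale entry; node already settled
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# A small made-up graph: the cheapest route a -> d goes through b and c.
graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 6)],
    "c": [("d", 3)],
}
assert dijkstra(graph, "a") == {"a": 0, "b": 1, "c": 3, "d": 6}
```

Understanding why the greedy "settle the closest node" step is safe (it depends on weights never being negative) is exactly the kind of insight that writing the code by hand tends to force.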

I also cared about Fourier transformations: I would never “need” to write that code again. If I decide to learn Rust, will I write a Fourier program, or ask Claude to do it and inspect the output? I already knew the theory behind Fourier transforms—but I realized that an era had ended, and I still don’t know how I feel about that. Indeed, a few months ago, I vibe coded an application that recorded some audio from my laptop’s microphone, did a discrete Fourier transform, and displayed the result. After pasting the code into a file, I took the laptop over to the piano, started the program, played a C, and saw the fundamental and all the harmonics. The era was already in the past; it just took a few months to hit me.

Why does this bother me? My problem isn’t about losing the pleasure of turning ideas into code. I’ve always found coding at least somewhat frustrating, and at times, seriously frustrating. But I’m bothered by the lack of understanding: I was too lazy to look up how Dijkstra works, too lazy to look up (again) how discrete Fourier works. I made the computer do what I wanted, but I lost the understanding of how it did it.

What does it mean to lose the understanding of how the code works? Anything? It’s common to place the transition to AI-assisted coding in the context of the transition from assembly language to higher-level languages, a process that started in the late 1950s. That’s valid, but there’s an important difference. You can certainly program a discrete fast Fourier transform in assembly; that may even be one of the last bastions of assembly programs, since FFTs are extremely useful and often have to run on relatively slow processors. (The “butterfly” algorithm is very fast.) But you can’t learn signal processing by writing assembly any more than you can learn graph theory. When you’re writing in assembler, you have to know what you’re doing in advance. The early programming languages of the 1950s (Fortran, Lisp, Algol, even BASIC) are much better for gradually pushing forward to understanding, to say nothing of our modern languages.

That is the real source of grief, at least for me. I want to understand how things work. And I admit that I’m lazy. Understanding how things work quickly comes in conflict with getting stuff done—especially when staring at a blank screen—and writing Python or Java has a lot to do with how you come to an understanding. I will never need to understand convex hulls or Dijkstra’s algorithm. But thinking more broadly about this industry, I wonder whether we’ll be able to solve the new problems if we delegate understanding the old problems to AI. In the past, I’ve argued that I don’t see AI becoming genuinely creative because creativity isn’t just a recombination of things that already exist. I’ll stick by that, especially in the arts. AI may be a useful tool, but I don’t believe it will become an artist. But anyone involved with the arts also understands that creativity doesn’t come from a blank slate; it also requires an understanding of history, of how problems were solved in the past. And that makes me wonder whether humans—at least in computing—will continue to be creative if we delegate that understanding to AI.

Or does creativity just move up the stack to the next level of abstraction? And is that next level of abstraction all about understanding problems and writing good specifications? Writing a detailed specification is itself a kind of programming. But I don’t think that kind of programming will assuage the grief of the programmer who loves coding—or who may not love coding but loves the understanding that it brings.

The Day-Blind Stars [Original Fiction Archives - Reactor]

An Earth explorer in search of something new and strange in the up and out ends up traveling through space with a small god over millennia.

Illustrated by Hwarim Lee


Published on April 15, 2026

An illustration of a woman in a stylized spacesuit riding a bear-like creature across the night sky beneath a particularly radiant star.


Short story | 5,293 words

She grew fearful of the world and turned away from it, seeking solace. She intended to return.

She never did.

When one turned away from the world in those days, one was subject to a binary. Binaries were a sort of self-imposed tyranny, imagined by the one but expected by the totality. So, turning away from the world, for Sierra St. Sandalwood IV, involved a choice—of necessity illusory—between going up and out or going down and in. The first choice was blue. The second choice was green.

The first choice was green. The second choice was blue.

See? Illusion.

Sierra went up and out. Going up, she theorized, she would be able to look down at the receding world, watching for signs of pursuit. Had she gone down, the world would have closed over behind her as she hacked through roots, as she gnawed through bedrock, as she braved the magma mantle washing the iron and nickel core. How can that be said to be turning away from the world at all?

That would be going under, thought Sierra.

But so many people chose down. Her husband had. Her godmother had. The twins, of course, painfully young, swore they were determined to embrace the world through all the numberless days gifted them by the life force. Devon called the life force Gaia and Denisa called it motion. Denisa waved her arms, dreamy and languorous, whenever she spoke of motion.

Sierra was graceless in the up and out. She had never been outside the gravity well. Her go suit prompted her to make the adjustments necessary to steer a clear course, but only because she had activated those options. Options for prompts for adjustments—some of the very things from which Sierra was turning away. Perhaps up and out was not so different from down and in. Perhaps neither was any different from the world itself.

She approached a tumble of great rocks trailing the world. Each of them was inconceivably cold on one side, gamma-drenched hellfire on the other. A guard god was sitting on one of the rocks, breathing smoke and looking at her with idle curiosity. The go suit suggested she stop and visit.

“Hello. How are things?” asked the guard god.

“How are you breathing smoke?” asked Sierra. “How can you talk? How can I hear you? Why is a god trailing the world?”

“First,” it replied, “I’m smoking a cigarette, which technically is breathing smoke, but not exactly what you are imagining. I can talk because I learned how at my father’s knee. I can hear you because I am listening. I am trailing the world because I’m on watch.”

“What does a god watch for?” asked Sierra. Her go suit maneuvered its way onto the surface of the rock; she was briefly nauseous before her see-plate stabilized the view.

Illusion.

“I’m more of a poppet deity than a god. And I’m watching for people who go up and out.”

“Like me,” said Sierra.

“Much like you, yes. Mostly like you. You should tell me who you are.”

The suit made it impossible to nod, though Sierra reflexively attempted one. “My name is Sierra St. Sandalwood IV,” she said.

The guard god did nod, though its thick neck, wider than its block of a head, made the movement negligible. “Thank you. That is welcome information. However, I did not ask your name. I asked who you are.”

Sierra thought very carefully. “I think if I knew that I would be at home with the twins.”

The guard god nodded again, this time with more alacrity. Pebbles and dust floated out into the nothing. “I think you have a question.” It sounded delighted. “Let’s take an equatorial walk.”

It lurched up and Sierra realized she had not made a careful enough study of her interlocutor. Its waist and legs were seamlessly bonded to the outcropping of silicates she’d thought simply served as a throne until it cracked free. It stretched, dreamy and languorous.

“My go suit keeps me from careening away,” said Sierra. “But how are you treating this little rock as firma?”

The guard god looked at her and furled its face, a sort of miniature avalanche concealing what Sierra thought might be emeralds deep in the crags of what she thought might be orbital sockets. When it opened them again, its eyes were sapphires.

It started to force its way through the tumult of stalagma that extended to the horizon in every direction.

The horizon wasn’t very far.

Sierra blinked her right eye, just so, and she floated after the guard god. When she was moving alongside it, she asked again, “How are you walking on this little rock? Shouldn’t you fly off into the nothing?”

“You haven’t asked the question I think you need to ask, yet, but you do ask a lot of others,” it said. “I like that. Yes, I should fly off because of, you know”—and here it made a circular motion with one of the three spindly fingers sprouting from its upper right hand—“the spinning. Also, there are fundamental forces of the universe to be taken into consideration. At least one or two of them. But it’s okay. I kind of bend down a little bit so I won’t spin off. As for violating fundamental forces, I have a permit.”

Sierra tried to nod again. When she couldn’t, again, she breathed a query to her go suit, piano, asking if there was a way she could move her head freely. The suit flashed a series of glyphs on the inside of her see-plate, seizure fast. Sierra interpreted them as saying, “Sure.”

“Those things are hilarious,” said the guard god. It had stopped and seemed to be considering their route. “Have you ever talked to a go suit when it’s not being worn?”

Sierra shook her head, greatly satisfied with her freedom of movement. “I didn’t think they had any independent agency.”

“Eh,” said the guard god. “People get up here, they look around. A good number of them take off their go suits and launch themselves skyclad into the nothing, giving up their little essences in favor of… well, in favor of what each one of them individually seeks. Sometimes the suits stick around for a bit after that.”

It continued, “I think the equator of this rock will prove a little rough. How do you feel about a circumpolar walk?”

“Do asteroids have poles?”

“Hadn’t thought of that. Probably not this one. Doesn’t it have to do with the invariable plane?”

Sierra had never heard the phrase but was beginning to catch the ebb and flow of the conversation, something she had always been good at. “Sounds right,” she said.

The guard god turned right and plodded north, or perhaps south. “People who come up here tend to be either immigrants or mystics,” it said.

“Never both?” Sierra blinked her eyes just so. She moved along beside the guard god, their heads at the same height but Sierra’s torso and limbs now extended up and out, upside down, relatively. This amused her. If she knew the just-so sequence of blinks that would prompt the suit to remind her of the last time she’d been amused, she would have blinked it.

“Immigrants, they usually have a lot on their minds,” said the guard god. “Not much time for revelations and all that omenistic business.”

“Are you saying immigrant, or—” Sierra stopped. “The one with the I or the one with the E?”

“I could never keep that straight,” said the guard god. “Comings or goings, borders and frontiers. I don’t think it makes much difference up here.”

Sierra queried the suit on whether she could shrug, was given an answer in the positive, entered a command, and shrugged.

“I can also never keep lie and lay straight,” said the guard god. “Yes,” it went on, circling an outcropping that it somewhat resembled. “This way is much easier.” It ploughed through the next rock formation and Sierra drifted a little higher to avoid the detritus.

“This isn’t anything like I thought it would be,” she said. “But I’ve only just now started.”

The guard god snorted. “Time. Who cares?” Then, “What did you think coming up and out would be like?”

“I…” Sierra trailed off.

“They always have ideas,” said the guard god. “If you’ll forgive me for lumping you in with all the other blue travelers.”

It had been Sierra’s observation that minutes are longer than people give them credit for. When people pause for a minute, it is most often not a minute at all, but a moment.

She paused for a minute and said, “I thought I wouldn’t miss anyone anymore.”

The guard god stopped its ramble. It reached out and put two of its great hands on her shoulders and slowly, gently even, rotated her. It pulled her down a bit until they were face-to-face, her gazing through her see-plate, it gazing through its fluctuant eyes.

“That’s new,” it said. It removed its top hands and clapped all of them. Particulate matter drifted out like a scattering of dusk-flocking birds. This time, Sierra could hear the nod as well as see it. The guard god asked, “Do you want to get out of here?”

The guard god, the poppet deity, made a check of Sierra’s go suit and determined that it was of the highest quality, but it warned her that the highest quality might be insufficient for her survival where they were going.

“Where are we going?”

“Up and out.”

“We’re already up and out,” she said, nonetheless intrigued.

“Further up. Further out.”

“My godmother always said farther was correct.”

“Isn’t there something about literal and symbolic distances? The A means one, the U means the other?” The guard god sounded genuinely curious.

“Are we going… literally? Or symbolically?”

“I look forward to finding out,” and for the first time the guard god laughed, and it wasn’t grumbling thunder and tumbling gravel at all, but lovely and melodic, like a flute solo.

Sierra joined in the laughter, though her laugh was a throaty alto and she often honked despite herself, as she did this time.

“Your go suit,” said the guard god, “is hesitant. It wants reassuring. I propose you ride on my back so as to be within my sphere of influence. That might protect you should we encounter any day-blind stars.”

“What are those?” asked Sierra.

“They are fey and beautiful and vicious and deadly, like all stars. But in particular, they are the stars that shine by day and so can’t be seen from the down and in.”

“We’re not down and in. We’re not going down and in.”

“One day I will meet a blue traveler with a proper sense of perspective,” said the guard god. “Now, if you are to ride on my back, you won’t want this broad mineral stuff. What sort of steed would you prefer?”

The only steeds Sierra had ever seen were the force-grown mules spun up by the various corporation-citizens on the world for use as data storage.

“I can’t think…”

“Think wider. It can be anything at all you’ve seen, yes, but also anything that you’ve heard of, that you’ve read about, that you’ve heard sung to you, or even that you’ve imagined.”

Sierra thought. “When the twins turned one hundred and eleven years old, their father and I marked it as a very momentous occasion, though it’s not a particularly remarkable age for a child to reach and the Widows Who Wait do not attach any numerological significance to one hundred and eleven. But it was that day they were given their choice of a Memorial Day, to celebrate all the rest of their lives.”

“I here admit, Sierra St. Sandalwood IV, that I have spoken to you more than any other human being I have ever encountered,” said the guard god. “Therefore, I will tell you I do not know what a Memorial Day is. My kind have had encounters with the Widows Who Wait, though. They’re all liars.”

Sierra elected to ignore that. “They could have chosen the anniversary of their physical birth or of the day they bloomed within me. But the twins are puckish. They are readers of old books and it’s a rare hour passes without them sharing a knowing smile. They chose the eighth day of September.”

“That one I know,” said the guard god. “The Nativity of Mary, mother of the Christ.”

“No. I mean, yes, it’s that, too, but we are not Christians. The eighth day of September is also the Feast Day of Saint Corbinian. That’s why they chose it.”

“I like to understand things,” said the guard god. “If you are not Christians, and the birth date of the Holy Mother is no occasion for memory, why choose a Christian saint?”

Sierra smiled, remembering. “Because of the bear,” she said.

The guard god moved its great shoulders back. Some arms retracted and others shortened. Stone became flesh and flesh grew hirsute. Rounded ears sprouted and eyes became amber. The guard god dropped to all fours and its great claws curled into the rock. “Wait,” it said. “I am listening to the story.”

Sierra heard nothing, but she waited.

“He was on his way to Rome, yes.” The guard god’s voice was now a different timbre of deep. Sierra wondered if its laugh had changed as well. “A great bear slew the saint’s mule and Corbinian commanded the creature, in the name of God, to submit to saddle and rein and serve as his mount. The beast acquiesced and carried the saint to the Holy See. When they arrived at the gates, Corbinian freed the bear and it returned to the wild, sinless as only animals can be.”

“Sinless, yes, I suppose,” said Sierra. “There are none of its kind left to prove or disprove that notion.”

The guard god reared up on its hind legs, twice as tall as Sierra. She was afraid for the first time since she had launched herself up and out.

The guard god, the bear, looked down, down, down the long way Sierra had travelled. “There are a few bears yet,” it said.

Sierra was surprised. “In captivity?” she asked.

“In hiding,” it answered. “Plenty of mules, though. Probably not as tasty as that one in old Bavaria.” The guard god dropped down again and hunched its shoulders. A leather saddle grew out of its back and reins extended from its terrifying teeth.

“What were you listening to? Who told you the story?”

“Mnemosyne. She grants me instantaneous access to every bit of recorded information in the omniverse.”

This startled Sierra. “You’ve indicated there are things that you do not know, even things you don’t understand.”

“I rarely access Mnemosyne. She vexes me. Now, Sierra, climb up.”

She put her foot in a stirrup, but hesitated. “Will you give me your name, as I gave you mine?”

“You have yet to tell me who you are, so I will not tell you who I am. But my name is now Corbinian.”

“Corbinian wasn’t the bear,” Sierra said, swinging into the saddle.

“Oh, I doubt you can prove that,” it replied.

Farther up and further out proved to be a circuitous route that twisted between the world and its moon. This involved travelling towards the world before they travelled away from it, but Corbinian did not respond to Sierra’s queries beyond grunting, “Concentrating.”

She let it be.

Having never ridden anything at all, not even a bicycle, Sierra found the sensation vertiginous, even without the other rocky world they passed, even without the belt of tumbling asteroids, even without the great ringed bodies the bear rushed past. The go suit held up perfectly so far as she could tell. She saw many stars off in the distance but did not know if any of them were day-blind.

Finally, Corbinian came to a halt.

“A relative halt,” it said. “All things are in motion, from down at the bottom of matter, where minds best not linger, to the very top of all of it, to every bit of it.”

Sierra thought of her daughter waving her arms and speaking of motion. She was comforted by the memory. She was glad her daughter had long known something she herself had not known at all.

“Do you know what Gaia is?” she asked.

“I’m told that the answer to that question is of no importance,” said Corbinian. “And it is not your question. Keep asking them though!”

“Well, here’s another. Why have we stopped here?”

“Ah. This is the farthest any go suit has ever gone.”

“So, I’m farther from the world than any one has ever been?”

“Or further, yes. Let’s say both.”

“The suit seems fine,” said Sierra.

“Good. Because I feel odd,” said Corbinian.

“Are you ill?”

“I don’t know. I never have been. But there’s some sort of limiting factor that is holding me in this orbit. I feel like a bear twice my height has stood up in front of me.”

“That would be a pretty big bear,” said Sierra.

Corbinian’s laugh was still a flute.

“You’re afraid,” said Sierra. “That’s the limiting factor, I think.”

Corbinian said, “Wait. I am listening to the story.”

Just a moment later, the bear said, “I was attempting to access recorded information that would tell me if it is better to be ill or to be afraid.”

“Now that you’ve said it,” said Sierra, “I’m curious myself. What did Mnemosyne tell you?”

“She didn’t tell me anything. She didn’t tell me anything at all.”

Sierra discovered that unlike with her husband or her godmother, unlike with even Devon and Denisa, were she to be honest with herself, she never grew frustrated in the company of Corbinian. She never found the guard god tiresome or boring. She never felt put upon.

They sat companionably, in silence, for a number of years.

One day, Corbinian said, “Isn’t there anything you want to ask? There was that question I thought you had. I just remembered that.”

“I have questions, of course,” said Sierra, “but I still don’t know what you mean by the question. Wait, no. I know what you mean by it, but I do not know the question itself.”

“Ask me some others, then. I’m awfully resourceful.”

“Are you getting bored?” asked Sierra, worried about the answer.

“No,” said Corbinian.

“Well, then. Who made you?”

“Mnemosyne did.”

“Who made Mnemosyne?”

“You did.”

“I did no such thing,” said Sierra.

Corbinian gestured in the direction of the far away world. “You collectively. You blue travelers and green travelers and those few that never go up or down or in or out at all.”

“When did we make her?”

“I will not ask Mnemosyne to tell that story,” said Corbinian.

“May I ask her, then?”

“Mnemosyne would be the end of you, Sierra St. Sandalwood IV. She is a terrible thing for people like you.”

Sierra asked, “Is she terrible for you?”

But Corbinian fell silent for another few years.

Over time, the go suit began to alter itself in subtle ways. At first, Sierra thought it might be changing itself to match her dreams of it. Perhaps she would grow wings. Perhaps she would be able to lift the see-plate and breathe in the aroma of the nothing.

But then it became apparent the suit was becoming less than it had been before. It was winnowing parts of itself that Sierra rarely used. She nudged Corbinian.

“The suit’s breaking down,” she said.

The bear’s brows went low, and Sierra noticed that sometime clouds had appeared in the amber. It pressed a paw against Sierra’s chest. “Yes,” it said. “But it is not unhappy. It is confused. I am tempted to ask Mnemosyne whether it is better to be confused or unhappy.”

“I think parts of it are disappearing,” said Sierra.

Corbinian moved its massive head back and forth. “One of the fundamental tenets of Mnemosyne is that formulated by Lavoisier the Lawgiver. Things do not disappear.”

Sierra had received an excellent education from her godmother. “Mass is not destroyed,” she said. “But that’s not what I meant. The go suit is sloughing off, not ceasing to exist.”

Corbinian took a closer look. “Yes, you are right,” it said. “It is sloughing away in a stream.”

“To where?”

“To the day-blind stars.”

“Oh,” said Sierra. “I know now. All it took was patience and study.”

“What do you know?”

“I know the question.”

Corbinian did not speak. It adopted a mien of anticipation.

“Good and faithful friend,” said Sierra. “Will you take me to the day-blind stars?”

Centuries later, the day-blind stars proved fey and beautiful and vicious and deadly. They were unappreciative of the new-come pair.

Along the way, they had overtaken the stuff of the go suit that it had previously surrendered. The suit fully reincorporated. Corbinian reported that it was pleased to have done so.

Once again, they relatively stopped. They were in a great nursery and every particle a star can emit buffeted them. These ejecta waxed and waned. The go suit trembled but Sierra felt its bravery. Corbinian’s eyes grew cloudier.

The answer to the question had proven to be yes, obviously. But now Sierra turned to the problem of why it was the question.

She thought:

in the beginning was the question

and the question was flawed

then the question begot a question

and that question begot a question

and that question begot a question

and that question begot a question

“Does Mnemosyne know why I asked you to bring me here?” she asked Corbinian.

“I have not been able to hear Mnemosyne’s stories for decades, now. We are dependent on what is in me, and what is in me is paltry. All that is in me is at the very surface of knowledge. I plumb no depths.”

“The question I asked you was of unknowable provenance,” Sierra said gently, “and you answered with an action you didn’t understand. You didn’t understand why, but you took the action anyway.”

Corbinian sighed and said, “I wonder if some other guard god took my place on the trailing rocks.”

The changed course of the conversation troubled Sierra. She went on as if Corbinian had not spoken. “It must be an interesting sensation you’ve been feeling down these past years. Wondering.”

“I didn’t know there was a word for it,” said Corbinian.

Sierra gave it a sharp glance. “That seems unlikely,” she said.

“Sierra St. Sandalwood IV. Goddaughter. Wife. Mother. The lone blue traveler possessed of a proper sense of perspective. Friend. I am sloughing away.”

One of the greatest failures of design and imagination that ever occurred in the world was the routing of the ducts around the eyes of go suit wearers into a reservoir at the base of the throat for filtration and reabsorption. So, tears did not stream down Sierra’s cheeks.

“Can we move on?” she asked. “Can we overtake what’s gone from you so you might be whole again?”

“I say again, I am unable to hear Mnemosyne’s stories. And I have not been whole for a long time. It is unlikely I ever will be again.”

A pair of day-blind stars let loose flares. The flares crossed the nothing and double-helixed. Sierra saw that Corbinian was not so large a bear as it had been.

And it grows smaller.

But that didn’t make sense. Growth implied addition, not subtraction. She elected to distract herself and Corbinian both.

“What is the opposite of growth?” she asked.

Corbinian cocked its head to one side. “Death?”

“But some things subside without dying,” Sierra insisted.

“Matter is not destroyed,” said Corbinian. “The opposite of growth must mean that whatever is not growing is sloughing away.”

“Are those flares sloughing away the day-blind stars, I wonder?”

“I do not know,” said Corbinian. “Ask them.”

But the stars could not answer. They were simply stars, possessing only the intelligence of fusion, which was notoriously unreliable.

“Why did you say I should ask them? You must have known they couldn’t answer.” She was still trying to distract the bear, who had fallen into melancholy.

“I did not know they couldn’t,” it said. “I suspected they wouldn’t.”

“That’s not the same thing at all,” said Sierra.

“We have crossed half a galaxy,” said Corbinian. “Everything we say or do is close enough.”

That sounded true.

“I do not believe my go suit will sustain me if you leave,” said Sierra.

“It’s a good suit,” said Corbinian. “It will try.”

“That’s all I can ask,” said Sierra. “I ask the same of you.”

“You have always asked me things. It has been the joy of my existence.”

Tears did not stream down Sierra’s cheeks.

Corbinian was a long time dying. Things changed as it diminished. It began asking Sierra questions, but though it tried, it was less and less able to answer hers.

“Do you believe your children kept to their plan of going neither up nor down?” it asked.

Sierra reflected upon what she remembered of the twins. The great distance between her and them, the great amount of time, made her suspect her own reflections.

“I believe,” she said, “that they kept to it for as long as they could.”

“So, you know they could have, but not that they would have.”

Devon’s smile was sly in her memory. He lifted the right side of his lips only. Not mocking but acknowledging. Denisa’s smile was bright, all teeth and gums and joy. They were both somewhat myopic but refused the simple treatment that would have perfected their vision. Puckish. For some people, clinging to imperfection was such a faux pas as to be considered an atrocity.

“I have just realized that the word is would. They are still on the world. They never fitted themselves for go suits or deep smocks.”

“That is welcome information,” said Corbinian. “But we should entertain the idea that one no longer needs a go suit to come up and out.”

An interesting notion.

“I can imagine those two finding some way to accomplish that. They had the benefit of my godmother’s tutelage, and she was an extraordinary educator.”

Sierra realized she could not envision her godmother’s face. Her husband’s name…was Diego. She was sure it was Diego.

“Now I’m sloughing away,” she said, describing to Corbinian the lacunae in her mind.

“You are limited by biology,” it said. “Synaptic misfiring is a product of age. But age brings wisdom, too.”

“I’d rather be intelligent than wise.”

“That is a wish I cannot grant. And one I would not if I could,” said Corbinian. Then it coughed.

And coughed.

And coughed.

Sierra stroked Corbinian’s shoulder. She did not know what else to do. Besides asking a question.

“I’m sorry, Sierra,” Corbinian answered. “There is nothing you can do for me. I am limited by pathology.”

“But you are thousands of years old!” she cried. “You were never ill before I insisted we come to these damnable stars!”

“I do not mean I am diseased,” said Corbinian. “I mean I am a symptom. One that is at long last being treated.”

“You seem to be plumbing depths now.”

“Wait,” said Corbinian. “I am listening to a story.”

The story was not told by Mnemosyne.

“Who is it then?” asked Sierra. She was distracted because her go suit had begun humming.

“I do not know who. I believe I know what. It is a go ship.”

Sierra had never heard of a go ship and said so.

“We have been away from the world for a great length of time,” said Corbinian. “It is in the nature of things to change.”

“You believe this is some sort of craft from the world?”

“I know it is. It is asking about you.”

Then Corbinian coughed a long jag. Blood coated its terrible teeth.

“That’s all, now,” it said. “Even the surface is fading.”

“But you plumbed the depths!”

“The depths plumbed me. They did not have to lower the weight very far. I am sorry, Sierra. That’s all. That’s all.”

She could see matter streaming away from it. The stream was directed perpendicular to the direction of the day-blind stars.

“You must tell me who you are,” Corbinian rasped. “Unless you do not wish to. I should have asked that as a question instead of stating it as an assertion.”

Without a moment’s hesitation, Sierra said, “I am the woman who asked the wrong question.”

“I find this answer deeply unsatisfying.”

She could see through it. It was less a bear now than the ghost of one.

Then Sierra knew the right question.

“Who are you, Corbinian?”

“I am not a who at all.”

She could barely discern its voice.

“I am a what.”

Her go suit was trembling. Sierra asked, no, pleaded, “What are you?”

Corbinian uttered a melodious word. Its voice sounded like a flute.

Sierra was bewildered. “Did you say elusive or allusive? Columbine?”

Suddenly the guide god’s face was distinctive and fully present. Its eyes were flashing diamonds, lit glorious as stars that could see.

But Corbinian did not answer. Instead, it faded away into the nothing.

The trembling of Sierra’s go suit became so pronounced that she was afraid it might tear itself apart. She wished her friend were there to tell her whether the suit was frightened or excited, wished Corbinian were there to muse upon which of those states was better.

Then the trembling stopped. Her see-plate went black and every joint in the go suit froze. She could neither see nor move.

Sierra’s sense of the passage of time had long since atrophied. She did not know how many minutes or years passed before her see-plate unfolded with a hiss.

She blinked, but not in command or query. She blinked to clear tears from her eyes. She blinked so that she could better see the two figures leaning over her.

The man’s smile was sly, but not mocking. He only lifted the right side of his lips. The woman’s smile was bright, all teeth and gums and joy.

Sierra found that her children had spouses and children of their own, and that those children had children. And those children begat children and on down like that, living with dozens of other families who made the go ship their home.

The go ship’s name was Diego, but it preferred to be called Ship. It had been the only one of its kind when the twins had left the world.

Some on board wished to study Sierra’s go suit. It was older than any other surviving example of human technology. At first, Sierra took this to mean that the world had ended, but she was assured by Ship that was not the case. Matter is not destroyed, but it changes. It is always moving.

And Sierra moved.

Sometimes she would don her go suit and spend a year or two scouting ahead of Ship. Sometimes she would simply walk the skin of the vessel and study the inconstant stars. She kept moving. She found that she could not stay still, not even relatively.

Sierra often thought of Corbinian. She did not believe it had sacrificed itself for her, nor that it had sacrificed itself for anyone at all. Not a who, no. Perhaps a what.

The what was fearlessness. The what was love of the universe. The what was solace.

The what was up and out and up and out and up and out and up and out…

“The Day-Blind Stars” copyright © 2026 by Christopher Rowe
Art copyright © 2026 by Hwarim Lee

Buy the Book

An illustration of a woman in a stylized spacesuit riding a bear-like creature across the night sky beneath a particularly radiant star.

The Day-Blind Stars

Christopher Rowe

The post The Day-Blind Stars appeared first on Reactor.

14:21

Security updates for Wednesday [LWN.net]

Security updates have been issued by AlmaLinux (capstone, cockpit, firefox, git-lfs, golang-github-openprinting-ipp-usb, kea, kernel, nghttp2, nodejs24, openexr, perl-XML-Parser, rsync, squid, and vim), Debian (imagemagick, systemd, and thunderbird), Slackware (libexif and xorg), SUSE (bind, clamav, firefox, freerdp2, giflib, go1.25, go1.26, helm, ignition, libpng16, libssh, oci-cli, rust1.92, strongswan, sudo, xorg-x11-server, and xwayland), and Ubuntu (rust-tar and rustc, rustc-1.76, rustc-1.77, rustc-1.78, rustc-1.79, rustc-1.80).

13:56

CodeSOD: Three Letter Acronyms, Four Letter Words [The Daily WTF]

Candice (previously) has another WTF to share with us.

We're going to start by just looking at one fragment of a class defined in this C++ code: TLAflaList.

Every type and variable has a three-letter acronym buried in its name. The specific meanings of most of the acronyms are lost to time, so "TLA" is as good as any other three random letters. No one knows what "fla" is.

What drew Candice's attention was the type called "list", which suggests they may not be using the standard library and have reinvented a wheel. Another data point in favor of that theory: the class has a method called getNumElements, instead of something more conventional like size.

Let's look at that function:

size_t TLAflaList::getNumElements()
{
        return mv_FLAarray.size();
}

In addition to the meaningless three-letter acronyms which start every type and variable, we're adding on a lovely bit of Hungarian notation, throwing mv_ on the front for a member variable. The variable is called "array", but is it? Let's look at that definition.

class TLAflaList
{
        …
        private:
                TLAflaArray_t mv_FLAarray;
                …
}

Okay, that gives me a lot more nonsense letters, but I still have no idea what that variable is. Where's that type defined? The good news: it's in the same header.

typedef std::vector<INtabCRMprdinvusage_t*> TLAflaArray_t;

So it's not a list or an array, it's a vector. A vector of bare pointers, which definitely makes me worry about inevitable use-after-free errors or memory leaks. Who owns the memory that those pointers are referencing?
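We can't know who owns those pointers from the code shown, but as a minimal sketch of how the ownership question could be answered at the type level, std::vector of std::unique_ptr makes the container the unambiguous owner. (PrdInvUsage and FlaList below are hypothetical stand-ins, not the real types from Candice's codebase.)

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical stand-in for INtabCRMprdinvusage_t; the real type is unknown.
struct PrdInvUsage {
    int id;
};

// The unique_ptr element type declares, in the signature itself, that the
// list owns its elements: they are freed automatically when the list dies,
// and accidental double-ownership won't compile.
class FlaList {
public:
    void add(int id) {
        items_.push_back(std::make_unique<PrdInvUsage>(PrdInvUsage{id}));
    }

    // Conventional name instead of getNumElements.
    std::size_t size() const { return items_.size(); }

private:
    // No typedef hiding the container behind a misleading "array" name.
    std::vector<std::unique_ptr<PrdInvUsage>> items_;
};
```

With bare pointers, the reader has to chase down every insertion and erasure to learn who calls delete; with unique_ptr, the answer is in the declaration.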

"IN" in the type name is an old company, good ol' Initrode, which got acquired a decade ago. "tab" tells us that it's meant to be a database table. We can guess at the rest.

This isn't a codebase, it's a bad Scrabble hand. It's also a trainwreck. Confusing, disorganized, and all of that made worse by piles of typedefs that hide what you're actually doing and endless acronyms that make it impossible to read.

One last detail, which I'll let Candice explain:

I started scrolling down the class definition - it took longer than it should have, given that the company coding style is to double-space the overwhelming majority of lines. (Seriously; I've seen single character braces sandwiched by two lines of nothing.) On the upside, this was one of the classes with just one public block and one private block - some classes like to ping-pong back and forth a half-dozen times.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

12:49

12:07

Defense in Depth, Medieval Style [Schneier on Security]

This article on the walls of Constantinople is fascinating.

The system comprised four defensive lines arranged in formidable layers:

  • The brick-lined ditch, divided by bulkheads and often flooded, 15–20 meters wide and up to 7 meters deep.
  • A low breastwork, about 2 meters high, enabling defenders to fire freely from behind.
  • The outer wall, 8 meters tall and 2.8 meters thick, with 82 projecting towers.
  • The main wall—a towering 12 meters high and 5 meters thick—with 96 massive towers offset from those of the outer wall for maximum coverage.

Behind the walls lay broad terraces: the parateichion, 18 meters wide, ideal for repelling enemies who crossed the moat, and the peribolos, 15–20 meters wide between the inner and outer walls. From the moat’s bottom to the highest tower top, the defences reached nearly 30 meters—a nearly unscalable barrier of stone and ingenuity.

11:14

Emmanuel Kasper: Minix 3 on Beagle Board Black (ARM) [Planet Debian]

Connected via serial console. Does not have a package manager, web or ssh server, but can play tetris in the terminal (the bsdgames package in Debian has the same tetris version packaged).

asciicast

10:42

What do you own? [Seth's Blog]

What does it mean for us to own something?

If we own a piece of land and the rain washes the topsoil downstream, do we go and get the topsoil back?

Do we own our reputation? We have influence over it, but some of it was gifted to us without our knowledge, and other parts are influenced by forces out of our control.

Do we own responsibility? Is it something we take or acquire or accept?

We can try to own our past, but the best we can do is influence our future.

Ownership is a shared understanding, a construct that can shift depending on where we stand. It’s not always up to us, but it often works better if we acknowledge it.

09:14

Preparatory School [Penny Arcade]

New Comic: Preparatory School

08:56

Freexian Collaborators: Debian Contributions: Debusine projects in GSoC, Debian CI updates, Salsa CI maintenance and more! (by Anupa Ann Joseph) [Planet Debian]

Debian Contributions: 2026-03

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Debusine projects in Google’s Summer of Code

While Freexian initiated Debusine, and is investing a lot of resources in the project, we manage it as a true free software project that can and should have a broader community.

We always had documentation for new contributors and we aim to be reactive with them when they interact via the issue tracker or via merge requests. We decided to put those intentions under stress tests by proposing five projects for Google’s Summer of Code as part of Debian’s participation in that program.

Given that at least 11 candidates managed to get their merge requests accepted in the last 30 days (interacting with the development team is part of the prerequisites for applying to Google Summer of Code projects these days), the contributing experience must not be too bad. 🙂 If you want to try it out, we maintain a list of “quick fixes” that are accessible to newcomers. And as always, we welcome your feedback!

Debian CI: incus backend and upgrade to Bootstrap 5, by Antonio Terceiro

debci 3.14 was released on March 4th, with a followup 3.14.1 release with regression fixes a few days afterwards. Those releases were followed by new development and maintenance work that will provide extra capabilities and stability to the platform.

This month saw the initial version of an incus backend land in Debian CI. The transition into the new backend will be done carefully so as to not disrupt ‘testing’ migration. Each package will be running jobs with both the current lxc backend and with incus. Packages that have the same result on both backends will be migrated over, and packages that exhibit different results will be investigated further, resulting in bug reports and/or other communication with the maintainers.

On the frontend side, the code has been ported from the now-ancient Bootstrap 3 to Bootstrap 5. This need was originally reported back in 2024, based on the lack of security support for Bootstrap 3. Beyond improving maintainability, this upgrade also enables support for dark mode in debci, which is still a work in progress.

Both updates mentioned in this section will be available in a following debci release.

Salsa CI maintenance by Santiago Ruano Rincón et al.

Santiago reviewed some Salsa CI issues and their associated merge requests. For example, he investigated a regression (#545), introduced by the move to sbuild, in the use of extra repositories configured as “.source” files, and reviewed the MR (!712) that fixes it.

Also, changes made in debci 3.14 and 3.14.1 (mentioned above) caused conflicts, and different people contributed fixes for the subsequent issues in a long-term way. This includes Raphaël, who proposed MR !707 and also suggested that Antonio merge the Salsa CI patches to avoid similar errors in the future, which happened shortly after. Those fixes finally required the unrelated MR !709, which will prevent similar problems when building images.

To identify bugs related to the autopkgtest support in the backport suites as early as possible, Santiago proposed MR !708.

Finally, Santiago, in collaboration with Emmanuel Arias, also had exchanges with GSoC candidates for the Salsa CI project, including about the contributions they have made as merge requests. It is important to note that there are several very good candidates interested in participating. Thanks a lot to them for their work so far!

Miscellaneous contributions

  • Raphaël reported a zim bug affecting Debian Unstable users, which had apparently already been fixed in git. He could thus cherry-pick the fix and update the package in Debian Unstable.
  • Carles created a new page under InstallingDebianOn on the Debian Wiki.
  • Carles submitted translation errors in the debian-installer Weblate.
  • Carles, using po-debconf-manager, improved Catalan translations: reviewed and submitted 3 packages. Also improved error handling when forking or submitting an MR if the fork already existed.
  • Carles kept improving check-relations: code base related general improvements (added strict typing, enabled pre-commit). Also added DebPorts support, virtual packages support and added commands for reporting missing relations and importing bugs from bugs.debian.org.
  • Antonio handled miscellaneous Salsa support requests.
  • Antonio improved the management of MiniDebConf websites by keeping all non-secret settings in git and fixed exporting these sites as static HTML.
  • Stefano uploaded routine updates to hatchling, python-mitogen, python-virtualenv, python-discovery, dh-python, pypy3, python-pipx, and git-filter-repo.
  • Faidon uploaded routine updates to crun, libmaxminddb, librdkafka, lowdown, platformdirs, python-discovery, sphinx-argparse-cli, tox, tox-uv.
  • Stefano and Santiago continued to help with DebConf 26 preparations.
  • Stefano reviewed some contributions to debian-reimbursements and handled admin for reimbursements.debian.net.
  • Stefano attended the Debian Technical Committee meeting.
  • Helmut sent 8 patches for cross build failures.
  • Building on the work of postmarketOS, Helmut managed to cross build systemd for musl in rebootstrap and sent several patches in the process.
  • Helmut reviewed several MRs of Johannes Schauer Marin Rodrigues expanding support for DPKG_ROOT to support installing hurd.
  • Helmut incorporated a final round of feedback for the Multi-Arch documentation in Debian policy, which finally made it into unstable together with documentation of Build-Profiles.
  • In order to fix python-memray, Helmut NMUed libunwind generally disabling C++ exception support as being an incompatible duplication of the gcc implementation. Unfortunately, that ended up breaking suricata on riscv64. After another NMU, python-memray finally migrated.
  • Thorsten uploaded new upstream versions of epson-inkjet-printer-escpr and sane-airscan. He also fixed a packaging bug in printer-driver-oki. As of systemd 260.1-1 the configuration of lpadmin has been added to the sysusers.d configuration. All printing packages can now simply depend on the systemd-sysusers package and don’t have to take care of its creation in maintainer scripts anymore.
  • In collaboration with Emmanuel Arias, Santiago had exchanges with GSoC candidates and reviewed the proposals of the Linux livepatching GSoC 2026 project.
  • Colin helped to fix CVE-2026-3497 in openssh and CVE-2026-28356 in multipart.
  • Colin upgraded tango and pytango to new upstream releases and packaged pybind11-stubgen (needed for pytango), thanks to a Freexian customer. Tests of reproducible builds revealed that pybind11-stubgen didn’t generate imports in a stable order; this is now fixed upstream.
  • Lucas fixed CVE-2025-67733 and CVE-2026-21863 affecting src:valkey in unstable and testing. Also reviewed the same fixes targeting stable proposed by Peter Wienemann.
  • Faidon worked with upstream and build-dep Debian maintainers on resolving blockers in order to bring pyHanko into Debian, starting with the adoption of python-pyhanko-certvalidator. pyHanko is a suite for signing and stamping PDF files, and one of the few libraries that can be leveraged to sign PDFs with eIDAS Qualified Electronic Signatures.
  • Anupa co-organized MiniDebConf Kanpur and attended the event with many others from all across India. She handled the accommodation arrangements along with the registration team members, worked on the budget and expenses. She was also a speaker at the event.
  • Lucas helped with content review/schedule for the MiniDebConf Campinas. Thanks Freexian for being a Gold sponsor!
  • Lucas organized and took part in a one-day in-person sprint to work on Ruby 3.4 transition. It was held in a coworking space in Brasilia - Brazil on April 6th. There were 5 DDs and they fixed multiple packages FTBFSing against Ruby 3.4 (coming to unstable soon hopefully). Lucas has been postponing a blog post about this sprint since then :-)

08:21

Pluralistic: Rights for robots (15 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links

  • Rights for robots: Not everything deserves moral consideration.
  • Hey look at this: Delights to delectate.
  • Object permanence: 7 years under the DMCA; NOLA mayoral candidate x New Orleans Square; Kettling is illegal; AOL won't deliver critical emails; Chris Ware x Charlie Brown; Mossack Fonseca raided; Corporate lobbying budget is greater than Senate and House; Corbyn overpays taxes; What IP means; Bill Gates v humanity; "Jackpot."
  • Upcoming appearances: Toronto, San Francisco, London, Berlin, NYC, Hay-on-Wye, London.
  • Recent appearances: Where I've been.
  • Latest books: You keep readin' em, I'll keep writin' 'em.
  • Upcoming books: Like I said, I'll keep writin' 'em.
  • Colophon: All the rest.



The famous photo of LBJ signing the Civil Rights Act. LBJ and the onlookers' heads have been replaced with the heads of 1950s pulp magazine robots.

Rights for robots (permalink)

The Rights of Nature movement uses a bold tactic to preserve our habitable Earth: it seeks to extend (pseudo) personhood to things like watersheds, forests and other ecosystems, as well as nonhuman species, in hopes of creating legal "standing" to ask the courts for protection:

https://en.wikipedia.org/wiki/Rights_of_nature

What do watersheds, forests and nonhuman species need protection from? That turns out to be a very interesting question, because the most common adversary in a Rights of Nature case is another pseudo-person: namely, a limited liability corporation.

These nonhuman "persons" have been a feature of our legal system since the late 19th century, when the Supreme Court found that the 14th Amendment's "Equal Protection" clause could be applied to a railroad. In the 150-some years since, corporate personhood has monotonically expanded, most notoriously through cases like Hobby Lobby, which gave a corporation the right to discriminate against women on the grounds that it shared its founders' religious opposition to abortion; and, of course, in Citizens United, which found that corporate personhood meant that corporations had a constitutional right to divert their profits to bribe politicians.

Theoretically, "corporate personhood" extends to all kinds of organizations, including trade unions – but in practice, corporate personhood primarily allows the ruling class to manufacture new "people" to serve as a botnet on their behalf. A union has free speech rights just like an employer, but the employer's property rights mean that it can exclude union organizers from its premises, and employer rights mean that corporations can force workers to sit through "captive audience" meetings where expensive consultants lie to them about how awful a union would be (the corporation's speech rights also mean that it's free to lie).

In my view, corporate personhood has been an unmitigated disaster. Creating "human rights" for these nonhuman entities led to the catastrophic degradation of the natural world, via the equally catastrophic degradation of our political processes.

In a strange way, corporate personhood has realized the danger that reactionary opponents of votes for women warned of. In the days of the suffrage movement, anti-feminists claimed that giving women the vote would simply lead to husbands getting two votes, since wives would simply vote the way their husbands told them to.

This libel never died out. Take the recent hard-fought UK by-election in Gorton and Denton (basically Manchester): this was the first test of the Green Party's electoral chances under its new leader, the brilliant and principled leftist Zack Polanski. The Green candidate was Hannah Spencer, a working-class plumber and plasterer who rejected the demonization of the region's Muslim voters, unlike her rivals from Labour (which has transformed itself into a right-wing party), Reform (a fascist party), and the Conservatives (an irrelevant and dying right party). During the race (and especially after Spencer romped to a massive victory) Spencer's rivals accused her of courting "family voters," by which they meant Muslim wives, who would vote the way their Islamist husbands ordered them to. Despite the facial absurdity of this claim – that the Islamist vote would go for the pro-trans party led by a gay Jew – it was widely repeated:

https://www.bbc.com/news/articles/clyxeqpzz2no

"Family voting" isn't a thing, but corporate personhood has conferred political rights on the ruling class, who get to manufacture corporate "people" at scale, each of which is guaranteed the same right to contribute to politicians and intervene in our politics as any human.

Contrast this with the Rights for Nature movement. Where corporate personhood leads to a society with less empathy for living things (up to and including humans), Rights for Nature creates a legal and social basis for more empathy. In her stunning novel A Half-Built Garden, Ruthanna Emrys paints a picture of a world in which the personhood of watersheds and animals become as much of a part of our worldview as corporate personhood is today:

https://pluralistic.net/2022/07/26/aislands/#dead-ringers

Scenes from A Half-Built Garden kept playing out in my mind last month while I attended the Bioneers conference in Berkeley, where they carried on their decades-long tradition of centering indigenous activists whose environmental campaigns were intimately bound up with the idea of personhood for the natural world and its inhabitants:

https://bioneers.org/

On the last morning, my daughter and I sat through a string of inspiring and uplifting presentations from indigenous-led groups that had used Rights of Nature to rally support for legal challenges that had forced those other nonhuman "persons" – limited liability corporations – to retreat from plans to raze, poison, or murder whole regions.

The final keynote speaker that morning was the writer Michael Pollan, who spoke about a looming polycrisis of AI, and I found myself groaning and squirming. Not him, too! Were we about to be held captive to yet another speaker convinced that AI was going to become conscious and turn us all into paperclips?

That seemed to be where he was leading, as he discussed the way that chatbots were designed to evince the empathic response we normally reserve for people – the same empathy that all the other speakers were seeking to inspire for nature. But then, he took an unexpected and welcome turn: Pollan compared extending personhood to chatbots to the disastrous decision to extend personhood to corporations, and urged us all to turn away from it.

This crystallized something that had niggled at me for years. People I respect have long used the Rights for Nature movement as an argument for extending empathy to software constructs. The more we practice empathy – and the more rights we afford to more entities – the better we get at it. Personhood for things that are not like us, the argument goes, makes our own personhood more secure, by honing a reflex toward empathy and respect for all things. This is the argument for saying thank you to Siri (and now to other chatbots):

https://ojs.lib.uwo.ca/index.php/fpq/article/download/14294/12136

Siri – like so many of our obedient, subservient, sycophantic chatbots – impersonates a woman. If we get habituated to barking orders at a "woman" (or at our "assistants") then this will bleed out into our interactions with real women and real assistants. Extending moral consideration to Siri, though "she" is just a software construct, will condition our reflexes to treat everything with respect.

For years, I'd uncritically accepted that argument, but after hearing Pollan speak, I changed my mind. Rather than treating Siri with respect because it impersonates a woman, we should demand that Siri stop impersonating a woman. I don't thank my Unix shell when I pipe a command to grep and get the output that I'm looking for, and I don't thank my pocket-knife when it slices through the tape on a parcel. I can appreciate that these are well-made tools and value their thoughtful design, but that doesn't mean I have to respect them in the way that I would respect a person.

That way lies madness – the madness that leads us to ascribe personalities to corporations and declare some of them to be "immoral" and others to be "moral," which is always and forever a dead end:

https://pluralistic.net/2024/01/12/youre-holding-it-wrong/#if-dishwashers-were-iphones

In other words: there's an argument from the Rights of Nature movement that says that the more empathy we practice, the better off we are in all our interactions. But Pollan complicated that argument, by raising the example of corporate personhood. It turns out that extending personhood to constructed nonhuman entities like corporations reduces the amount of empathy we practice. Far from empowering labor unions, the creation of "human" rights for groups and organizations has given capital more rights over workers. A labor rights regime can defend workers – without empowering bosses and without creating new "persons."

The question is: is a chatbot more like a corporation (whose personhood corrodes our empathy) or more like a watershed (whose personhood strengthens our empathy)? But to ask that question is to answer it – a chatbot is definitely more like a corporation than it is like a watershed. What's more: in a very real, non-metaphorical way, giving rights to chatbots means taking away rights from nature, thanks to LLMs' energy intensity.

Empathy then, for the nonhuman world – but not for human constructs.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Canadian labels pull out of RIAA-fronted Canadian Recording Industry Ass. https://web.archive.org/web/20060414170111/https://www.michaelgeist.ca/component/option,com_content/task,view/id,1204/Itemid,85/nsub,/

#20yrsago EFF publishes “7 Years Under the DMCA” paper https://web.archive.org/web/20060415110951/https://www.eff.org/deeplinks/archives/004555.php

#20yrsago Life of a writer as a Zork adventure https://web.archive.org/web/20060414115745/http://acephalous.typepad.com/acephalous/2006/04/disadventure.html

#20yrsago NOLA mayoral candidate uses photo of Disneyland New Orleans Square https://web.archive.org/web/20060414214356/https://www.wonkette.com/politics/new-orleans/not-quite-the-happiest-place-on-earth-166989.php

#20yrsago AOL won’t deliver emails that criticize AOL https://web.archive.org/web/20060408133439/https://www.eff.org/news/archives/2006_04.php#004556

#15yrsago UK court rules that kettling was illegal https://www.theguardian.com/uk/2011/apr/14/kettling-g20-protesters-police-illegal

#15yrsago If Chris Ware was Charlie Brown https://eatmorebikes.blogspot.com/2011/04/lil-chris-ware.html

#10yrsago Piracy dooms motion picture industry to yet another record-breaking box-office year https://torrentfreak.com/piracy-fails-to-prevent-box-office-record-160413/

#10yrsago Panama Papers: Mossack Fonseca law offices raided by Panama authorities https://www.reuters.com/article/us-panama-tax-raid-idUSKCN0XA020/

#10yrsago Panama Papers reveal offshore companies were bagmen for the world’s spies https://web.archive.org/web/20160426083004/https://www.yahoo.com/news/panama-papers-reveal-spies-used-mossak-fonseca-231833609.html

#10yrsago How corporate America’s lobbying budget surpassed the combined Senate and Congress budget https://web.archive.org/web/20150422010643/https://www.theatlantic.com/business/archive/2015/04/how-corporate-lobbyists-conquered-american-democracy/390822/

#10yrsago URL shorteners are a short path to your computer’s hard drive https://arxiv.org/abs/1604.02734

#10yrsago UL has a new, opaque certification process for cybersecurity https://arstechnica.com/information-technology/2016/04/underwriters-labs-refuses-to-share-new-iot-cybersecurity-standard/

#10yrsago Jeremy Corbyn overpays his taxes https://web.archive.org/web/20160413192208/https://www.politicshome.com/news/uk/political-parties/labour-party/news/73724/jeremy-corbyn-overstated-income-his-tax-return

#10yrsago Cassetteboy’s latest video is an amazing, danceable anti-Snoopers Charter mashup https://www.youtube.com/watch?v=D2fSXp6N-vs

#10yrsago Texas: prisoners whose families maintain their social media presence face 45 days in solitary https://www.eff.org/deeplinks/2016/04/texas-prison-system-unveils-new-inmate-censorship-policy

#5yrsago Data-brokerages vs the world https://pluralistic.net/2021/04/13/public-interest-pharma/#axciom

#5yrsago What "IP" means https://pluralistic.net/2021/04/13/public-interest-pharma/#ip

#5yrsago Bill Gates will kill us all https://pluralistic.net/2021/04/13/public-interest-pharma/#gates-foundation

#5yrsago Jackpot https://pluralistic.net/2021/04/13/public-interest-pharma/#affluenza


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)

  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

06:49

time-1.10 released [stable] [Planet GNU]


This is to announce time-1.10, a stable release.

The 'time' command runs another program, then displays information about
the resources used by that program.
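As an aside, the core of such a tool fits in a few lines. Here is a rough, POSIX-only Python sketch of the fork/exec/wait pattern (my own illustration, not the GNU implementation): spawn a child process, then collect its resource usage via `os.wait4`, which reports getrusage-style statistics for the reaped child.

```python
# Sketch of what a time-like tool does (POSIX only, my illustration):
# fork and exec a child command, then reap it with os.wait4, which
# returns a resource.struct_rusage describing the child's resource use.
import os
import time


def time_command(argv):
    """Run argv as a child process; return (wait_status, wall_seconds, rusage)."""
    start = time.monotonic()
    pid = os.fork()
    if pid == 0:
        # Child: replace this process with the requested command.
        os.execvp(argv[0], argv)
    _, status, rusage = os.wait4(pid, 0)  # blocks until the child exits
    wall = time.monotonic() - start
    return status, wall, rusage


# Example: status, wall, ru = time_command(["sleep", "1"])
# ru.ru_utime and ru.ru_stime give user and system CPU time, like time(1).
```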

There have been 79 commits by 5 people in the 422 weeks since 1.9.

See the NEWS below for a brief summary.

Thanks to everyone who has contributed!
The following people contributed changes to this release:

  Andreas Schwab (1)
  Assaf Gordon (10)
  Collin Funk (65)
  Dominique Martinet (1)
  Petr Písař (2)

Collin
 [on behalf of the time maintainers]
==================================================================

Here is the GNU time home page:
    https://gnu.org/s/time/

Here are the compressed sources:
  https://ftp.gnu.org/gnu/time/time-1.10.tar.gz   (832KB)
  https://ftp.gnu.org/gnu/time/time-1.10.tar.xz   (572KB)

Here are the GPG detached signatures:
  https://ftp.gnu.org/gnu/time/time-1.10.tar.gz.sig
  https://ftp.gnu.org/gnu/time/time-1.10.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu.org/order/ftp.html

Here are the SHA256 and SHA3-256 checksums:

  SHA256 (time-1.10.tar.gz) = 6MKftKtZnYR45B6GGPUNuK7enJCvJ9DS7yiuUNXeCcM=
  SHA3-256 (time-1.10.tar.gz) = zDjyfyzfABsSZp7lwXeYr368VzjZMkNPUJNnfpIakGk=
  SHA256 (time-1.10.tar.xz) = cGv3uERMqeuQN+ntoY4dDrfCMnrn2MLOOkgjxfgMexE=
  SHA3-256 (time-1.10.tar.xz) = U/Z0kMenoHkc7+rkCHMeyku8nXvIPppoQ2jq3B50e/A=

Verify the base64 SHA256 checksum with 'cksum -a sha256 --check'
from coreutils-9.2 or OpenBSD's cksum since 2007.

Verify the base64 SHA3-256 checksum with 'cksum -a sha3 --check'
from coreutils-9.8.
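For systems without a new enough coreutils, the same base64-encoded SHA256 check is easy to reproduce by hand. A minimal Python sketch (my own illustration, not part of the announcement):

```python
# Sketch: compute the base64-encoded SHA256 of a file, matching the
# format of the checksum lines above. My illustration, not an official tool.
import base64
import hashlib


def b64_sha256(data: bytes) -> str:
    """Return the SHA256 digest of data, base64-encoded as in the listing."""
    return base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")


def check_file(path: str, expected: str) -> bool:
    """True if the file's base64 SHA256 matches the published checksum."""
    with open(path, "rb") as f:
        return b64_sha256(f.read()) == expected
```

For an intact tarball, `check_file("time-1.10.tar.gz", "6MKftKtZnYR45B6GGPUNuK7enJCvJ9DS7yiuUNXeCcM=")` should return True.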

Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact.  First, be sure to download both the .sig file
and the corresponding tarball.  Then, run a command like this:

  gpg --verify time-1.10.tar.gz.sig

The signature should match the fingerprint of the following key:

  pub   rsa4096/8CE6491AE30D7D75 2024-03-11 [SC]
        Key fingerprint = 2371 1855 08D1 317B D578  E5CC 8CE6 491A E30D 7D75
  uid                 [ultimate] Collin Funk <collin.funk1@gmail.com>

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key collin.funk1@gmail.com

  gpg --recv-keys 8CE6491AE30D7D75

  wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=time&download=1' | gpg --import -

As a last resort to find the key, you can try the official GNU
keyring:

  wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
  gpg --keyring gnu-keyring.gpg --verify time-1.10.tar.gz.sig

This release is based on the time git repository, available as

  git clone https://https.git.savannah.gnu.org/git/time.git

with commit 40003f3c8c4ad129fbc9ea0751c651509ac5bb23 tagged as v1.10.

For a summary of changes and contributors, see:

  https://gitweb.git.savannah.gnu.org/gitweb/?p=time.git;a=shortlog;h=v1.10

or run this command from a git-cloned time directory:

  git shortlog v1.9..v1.10

This release was bootstrapped with the following tools:
  Autoconf 2.73
  Automake 1.18.1
  Gnulib 2026-04-13 c754c51f0f2b9a1e22d0d3eadfefff241de0ea48

NEWS

* Noteworthy changes in release 1.10 (2026-04-14) [stable]

** Bug fixes

  'time --help' no longer incorrectly lists the short option -h as being
  supported.  Previously it was listed as being equivalent to --help.
  [bug introduced in time-1.8]

  'time --help' no longer emits duplicate percent signs in the description of
  the --portability option.
  [bug introduced in time-1.8]

  time now opens the file specified by --output with its close-on-exec flag set.
  Previously the file descriptor would be leaked into the child process.
  [This bug was present in "the beginning".]

  time no longer appends the program name to the output when the format string
  contains a trailing backslash.
  [This bug was present in "the beginning".]

** Improvements

  time now uses the more portable waitpid and getrusage system calls
  instead of wait3.

  time can now be built using a C23 compiler.

  time now uses unlocked stdio functions on platforms that provide them.


05:49

Girl Genius for Wednesday, April 15, 2026 [Girl Genius]

The Girl Genius comic for Wednesday, April 15, 2026 has been posted.

02:49

Pet Name [QC RSS]

M O O B Y

02:21

Dilly-Dallying In Denver: Day 3 [Whatever]

The title of this post is partially inaccurate, as part of my third day in Denver was spent in Boulder. Before going into Boulder, Alex and I decided to kick the day off with a mani pedi, and get matching colors. Cat eye polish, of course:

My freshly manicured and polished gel nails alongside my friend's longer, acrylic nails. They are both painted purple and sparkly.

I was obsessed with this color, and I think it looked especially good on Alex’s longer nails. I mean just look at these bad boys:

My nails, sparkling in the sunlight.

Sparkly!

With fresh nails, we finally headed towards Boulder. Our first stop was the Boulder Museum of Contemporary Art. This art museum is “pay from your heart,” which means you can pay as much as you feel like for admission. I love this idea because it makes art so accessible, especially for Boulder college kids. Art museum prices can be pretty intense, so being able to price the admission for what fits into your budget is really nice.

While I didn’t photograph any of the actual artwork, I did capture the summary of this specific exhibition they had going on called “Yes, &…”:

A white wall filled with words talking about the theme of the exhibit.

I liked the theme. It was interesting, and all of the pieces I saw were definitely very unique and full of different mediums and mixed media. Very cool stuff all around, and the gift shop was awesome. I got some cute cards and stickers!

Right next door to the museum was the spot I was most excited for, the Boulder Dushanbe Teahouse. I have a hard time liking tea, but I love tea houses and tea time. It’s more of an aesthetic thing, really. And Dushanbe is, in fact, an extremely aesthetic tea house. With an ornate, colorful interior filled with plants, statues, and high, hand-painted ceilings held up by hand-carved cedar columns, the artistry pours out of every nook and cranny. On their website, this page talks about the 40 different Tajikistani artists who created the art that makes this tea house so beautiful, as well as the capital of Tajikistan, the teahouse’s namesake.

Look how wild these details are!

A shot of the interior of the tea house. The cedar columns, painted ceiling, plants, and skylight are visible.

The tea house is very popular, and their daily Afternoon Tea requires a reservation 24 hours in advance. Their even more coveted weekend Dim Sum Teatime is only offered on select weekends throughout the year, and reservations are required 60 days in advance.

As amazing as those sounded, Alex and I just went for their regular walk-in lunch, no waiting or reservation required. Though while we were there, they were actively setting up for their Afternoon Tea, and I got to see some of that unfold and peek at some of the snacks being served. Plus, each tea time table gets fresh flowers:

A table with a white tablecloth, with a small glass vase full of pink, beautiful flowers, and a small paper that explains the afternoon tea.

Besides their extensive tea menu, they also have some different beverages and cocktails to choose from:

A beverage menu with a chai latte, London Fog latte, Vietnamese coffee, golden milk latte, etc.

A list of tea cocktails and mocktails!

I love that all of their cocktails (and mocktails) have tea in them, so fitting!

I started off with their house chai, as my friend highly recommended it:

A small glass mug filled with chai.

I actually ordered this iced but it came hot, and I wasn’t about to complain. It really wasn’t a big deal and it was delicious hot, so it’s totally whatever. Alex definitely didn’t steer me wrong; this chai was very nicely spiced and not too sweet like a lot of chai lattes end up being.

I also ended up ordering the Espresso Bliss cocktail, because you already know I adore espresso martinis:

An espresso martini served in a coupe glass with three espresso beans on top.

Tea infused vodka, Marble Moonlight espresso liqueur, Colorado Cream Liqueur, and espresso. I liked that this espresso martini had both espresso liqueur and cream liqueur, as a lot of espresso martinis don’t have any kind of cream component. Which is fine, too, just sometimes I like them creamier and sweeter rather than cold brew style.

And a quick look at the food before ordering our tea:

The small plates menu, featuring soup, salads, and other appetizer type dishes.

The tea time entree menu, consisting of noodle dishes, some sandwiches, and entree style dishes like saag paneer.

We actually did not get any food because we were trying to make sure we were hungry for our reservations at Shells & Sauce later that day, so we just stuck with tea (and a lil bit of vodka for me, evidently).

Finally, time for our actual tea:

Two white teapots, two white teacups with two white saucers, and two tea pot shaped dishes to put your tea bag in.

We decided to share two pots, one of their white peach tea and one mango tea. They brought out our sets and a timer, and when the timer was done our tea would be done steeping. Alex took their tea plain, while I added copious amounts of cream and sugar. I’m a menace, I know.

I also wanted to show y’all this table behind ours, though it wasn’t cleaned off yet, look how nice this seating area is:

A cushioned seating area with a raised table in the middle. There's lots of nice throw pillows and it sits in the corner by windows. It reminds me of a fancy conversation pit.

I would love to sit here with a big group of friends and experience their Afternoon Tea service.

After our tea session concluded, we checked out the shop and ended up taking some tea home. I really liked this tea house and definitely want to come back for food sometime!

Once we drove back to Denver, we chilled at the apartment before heading to our dinner reservation at Shells & Sauce, which they say on their website is a neighborhood Italian bistro. They weren’t kidding. This place is located in such a random little neighborhood next to a dry cleaners and a Chinese restaurant, and is just a little place absolutely packed with excited diners. Line out the door, yet nothing flashy on the inside. Just a small neighborhood joint, as advertised.

While we had originally come for their Restaurant Week menu, we decided to not pursue that menu and just order whatever we wanted instead.

I started off with one of their signature cocktails, the Pearfect Martini:

A martini glass filled to the brim with yellow liquid and a pear slice.

Grey Goose La Poire (pear vodka), pear puree, lemon, and Prosecco. Does that not sound like a nice, refreshing, crisp martini? It was pretty good, definitely a little spirit-forward but it honestly might’ve just been a heavy pour. I mean, the glass is definitely very full.

We split two appetizers: the garlic cheese curds, and the crab cakes.

A metal basket full of cheese curds served alongside a little stainless steel dish of marinara.

The texture of these cheese curds was really good, they were nice and squeaky curds, too. I will say there wasn’t a ton of garlic flavor, they seemed more just like plain cheese curds, but who doesn’t love a good curd?

Two round pucks of crab cakes served atop a remoulade sauce.

While I’m always happy to have a crab cake, these ones weren’t particularly memorable. They weren’t bad at all but were just very standard.

Then, it was time for our entrees. I got the Stuffed Shells Duo:

Four stuffed shells with two different sauces, topped with arugula, cheese, and walnuts.

The two shells on the left were six-cheese stuffed shells with marinara, and on the right we have the sweet potato, butternut squash, and goat cheese stuffed shells with pesto cream.

While the flavors of the stuffed shell fillings were really good, especially the sweet potato one, the pesto cream sauce was a broken emulsion, which made the dish feel rather heavy and oily. So while the filling was tasty, I think the presentation and mouthfeel of the dish suffered from the oily sauce. Which is sad because I love pesto cream!

My friend just got chicken fettuccini alfredo:

A bowl of chicken alfredo with fettuccini noodles and topped with parmesan.

We opted not to get dessert. The food was okay, the vibe was okay, and the service was just okay. Honestly, I’d rather go here when there’s no dinner rush, sit on the patio, and just have some wine and bruschetta.

Once again we returned to the apartment, and this time we partook in the lovely amenities of the apartment, that being the rooftop pool and hot tub. It was definitely too chilly for the pool, especially because of the wind, but the hot tub was so nice.

After that brief relaxing period, we knew it was time to hit the bars (we only hit two, haha).

First up on our list was a rooftop bar super close to Alex’s apartment called Sorry Gorgeous. You’ll know you’re on the right path when you see this doormat in front of the elevator:

A black floor mat that reads

I really loved the interior design of Sorry Gorgeous. Green velvet couches, huge moon lamps, plants, a low-lit bar area and a great view of the nighttime skyline.

I didn’t take too many photos, but here’s some to get a general vibe for the place:

A shot of the bar, in which all the shelves are contained with a half circle built into the wall like a cave, but well lit and also there's plants!

I love how the shelves are built into the wall like it’s some sort of cave full of liquor.

A shot of the inside of Sorry Gorgeous, showing about half the bar with wooden bar stools (but not in a dive bar type of way, like a sophisticated way), plenty of the moon lamps I mentioned, plus lots of plants, and dim lighting.

As you can see, it wasn’t very crowded; most everyone resided on that half of the bar while my friend and I were practically all alone on our side.

We ended up moving to this corner booth to take some photos together!

A green velvet semi circle couch with a giant moon lamp overhead.

I actually ended up taking a selfie I liked pretty well:

A shot of me! I'm smiling!

This was about photo number five hundred and sixty-four, and I gave up on photos shortly after this because I figured one that I liked decently was good enough.

I ordered their All Saints cocktail:

A small coupe glass with yellow liquid and a lemon twist in it.

Made with Botanist gin, pear, elderflower, rhubarb, lemon, and winter spices, this cocktail was refreshing and slightly sweet, and felt sophisticated. As you can see, I clearly like pear.

I really liked the service here. Since they weren’t busy we actually ended up talking to one of the staff members for a while and he was super nice and cool. I definitely thought this place would have more of a mean-girl bartender energy but that ended up not being the case at all!

Next time I go, I would love to try their pistachio guacamole and crispy mini tacos.

Onto our next bar of the evening, the Yacht Club.

A black wall with white lettering,

A warm welcome, no doubt.

While a little small, it more so just has that cozy dive bar feel where yeah, sure you might bump elbows with someone once or twice, but it’s all peachy keen, we’re all comrades, y’know? The bar portion of the Yacht Club is built right into the corner:

A bar split in half by a corner, with two shelves of liquor up top.

What I initially thought was just a dive bar turned out to be something so much cooler and more unique. The Yacht Club is a wildly interesting cocktail bar that also has hotdogs. Lots of hotdogs.

A very tiny hot dog menu, with a huge variety of dog types, including a caviar dog.

Look at this adorable little teeny tiny hot dog menu! From the classic dog to a dog with caviar, to one served alongside a Jack and Coke, you’re sure to find your preferred type. Personally, I really wanted a sampler platter of all of them.

Aside from the hot dog menu, they had this drink menu:

A drink menu listing their house cocktails and seasonal specials, as well.

I went ahead and ordered the Chew-Chu:

A small glass filled with white wine colored liquid and ice.

I had never heard of shochu before, but it turns out it’s a lot like sake and soju in the sense that it’s a Japanese spirit made from the same sort of base ingredients, like rice, barley, and sweet potato.

Though this drink was a little dry from the Sauvignon Blanc, it had really good, light flavors and was refreshing to sip on.

Oh, and here’s their menu of “dope shit we have rn”:

A letterboard sign that says

That amused me greatly.

Y’all. Look what Alex got:

A can of Gatorade. Yes, a 12oz soda can type of can. But Gatorade.

CANNED GATORADE. Have you ever seen such a thing before?! This was so mind blowing, Yacht Club is officially the coolest place ever.

This is Alex’s drink but I genuinely can’t remember what the heck it is:

A small glass filled with whiskey colored liquid, with ice and an orange garnish.

Once we had our initial drinks, we were still so stuffed from dinner that I couldn’t have a hot dog, but I knew they clearly had caviar, so I asked if a caviar bump was available for purchase. I love a caviar bump, it feels so luxe and is so spontaneous and fun. Thankfully the bartenders, who were so much fun and absolutely hilarious, said yes, and even did one with us:

Three shrimp chips with caviar on them.

Yummy. You’ll never guess how much they cost, either. A cool and breezy five smackaroos. Have you ever had a cheaper caviar bump?!

After taking a house shot (I definitely don’t remember what they poured us, but they did one with us, too), I got this drink:

A small glass absolutely overflowing with pebbled ice and filled with dark pink liquid, served with an orange garnish.

I can’t remember the name of this one, but it was very good, with like, a ton of crazy flavors packed in. I know that’s not descriptive, I was decently drunk okay cut me some slack!

Okay, okay, one more, and this is in fact the final of the 36 photos. You’re all troopers. Here’s the final drink of the evening:

A tall glass filled with pale green liquid and topped with tons of pink pebbled ice. With mint garnish.

This one I do remember the name of. This is the Southside Swizzle. I actually really enjoy Southside cocktails, and this one was no exception. The mint with the strawberry and lime was an elite combo. I love the visual presentation here, too.

Just kidding, I have one more photo! Check out this flamingo wallpaper in their bathroom:

A bathroom wall covered in green and pink flamingo wallpaper!

Finally, we walked back to Alex’s apartment, had some snacks, and went to bed. It was a long but extremely fun and memorable day. I absolutely loved the museum, the tea house, Sorry Gorgeous, and the Yacht Club. Highly recommend all of them!

Have you been to Boulder before? Do you like rooftop bars as much as I do? Have you seen canned Gatorade before? Let me know in the comments, and have a great day!

-AMS

01:28

Robert Smith: Not all elementary functions can be expressed with exp-minus-log [Planet Lisp]

By Robert Smith

All Elementary Functions from a Single Operator is a paper by Andrzej Odrzywołek that has been making the rounds on the internet lately, being called everything from a “breakthrough” to “groundbreaking”. Some are going as far as to suggest that the entire foundations of computer engineering and machine learning should be re-built as a result of this. The paper says that the function

$$ E(x,y) := \exp x - \log y $$

together with variables and the constant $1$ (expressions we will call EML terms), is sufficient to express all elementary functions, and proceeds to give constructions for many constants and functions, from addition to $\pi$ to hyperbolic trigonometry.

I think the result is neat and thought-provoking. Odrzywołek is explicit about his definition of “elementary function”. His Table 1 fixes “elementary” as 36 specific symbols, and under that definition his theorem is correct and clever, so long as we accept some of his modifications to the conventional $\log$ function and do arithmetic with infinities.

My concern is that the word “elementary” in the title carries a much broader meaning in standard mathematical usage. Odrzywołek recognizes this, saying little more than “[t]hat generality is not needed here” and that his work takes “the ordinary scientific-calculator point of view”. He does not offer further commentary.

What is this more general setting, and does his claim still hold? In modern pure mathematics, dating back to the 19th century, the definition of “elementary function” has been well established. We’ll get to a definition shortly, but to cut to the chase, the titular result does not hold in this setting. As such, in layman’s terms, I do not consider the “Exp-Minus-Log” function to be the continuous analog of the Boolean NAND gate or the universal quantum CCNOT/CSWAP gates.

The rough TL;DR is this: Elementary functions typically include arbitrary polynomial root functions, and EML terms cannot express them. Below, I’ll give a relatively technical argument that EML terms are not sufficient to express what I consider standard elementary functions.

To avoid any confusion, the purpose of this blog post is manifold:

  1. To elucidate what many mathematicians consider to be an “elementary function”, which is the foundation for a variety of rich and interesting math (especially if you like computer science).
  2. To prove a result about EML terms using topological Galois theory.
  3. To demonstrate how this result can be used to exhibit an elementary function that is not expressible by EML terms.

This blog post is not a refutation of Odrzywołek’s work, though the title might be considered just as clickbaity (and just as accurate) as his, depending on where you sit in the hall of mathematics and computation.

Disclaimer: I audited graduate-level mathematics courses almost 20 years ago, and I am not a professional mathematician. Please email me if my statements are clumsy or incorrect.

The 19th century is where all modern understanding of elementary functions was developed, Liouville being one of the big names with countless theorems of analysis and algebra named after him. One such result is about integration: do the outputs of integrals look the same as their inputs? Well, what does “input” and “look the same” mean? Liouville defined a class of functions called elementary functions, and said that the integral of an elementary function will sometimes be elementary, and when it is, it will always resemble the input in a specific way, plus potential extra logarithmic factors.

Since then, elementary functions have been defined by starting with rational functions and closing under arithmetic operations, composition, exponentiation, logarithms, and polynomial roots. While EML terms are quite expressive, they are unable to capture the “polynomial roots” in full generality. We will show this by using Khovanskii’s topological Galois theory: the monodromy group of a function built from rational functions by composition with $\exp$ and $\log$ is solvable. For anybody that has studied Galois theory in an algebra course, this will be familiar, as the destination here is effectively the same, but with more powerful intermediate tooling to wrangle exponentials and logarithms.
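One way to see what is at stake (my own illustration): radicals are already exp-log expressible, since for a suitable branch

$$ \sqrt[n]{x} = \exp\left(\frac{\log x}{n}\right), $$

so the exp-log closure of the rational functions contains every algebraic function that is solvable in radicals. The question is whether it also contains the polynomial roots that are not solvable in radicals, such as a root of the generic quintic, and that is exactly where the solvable-monodromy obstruction bites.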

First, let’s be more precise about what we mean by an EML term and by a standard elementary function.

Definition (EML Term): An EML term in the variables $x_1,\dots,x_n$ is any expression obtained recursively, starting from $\{1, x_1,\dots,x_n\}$, by the rule $$ T,S \mapsto \exp T-\log S. $$ Each such term, evaluated at a point where all the $\log$ arguments are nonzero, determines an analytic germ; we take $\mathcal T_n$ to be the class of germs representable this way, together with their maximal analytic continuations.
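To make the grammar concrete, here is a tiny evaluator sketch for the real-valued branch (the names `One`, `Var`, `EML`, and `evaluate` are mine, not the paper’s; it ignores branch choices and requires the $\log$ argument to be positive):

```python
# Sketch of the EML term grammar above: a term is 1, a variable, or
# exp(T) - log(S) for previously built terms T and S. My own naming.
import math
from dataclasses import dataclass


@dataclass(frozen=True)
class One:
    pass


@dataclass(frozen=True)
class Var:
    name: str


@dataclass(frozen=True)
class EML:
    t: object  # T in exp(T) - log(S)
    s: object  # S; must evaluate to a positive real in this sketch


def evaluate(term, env):
    """Evaluate an EML term at the point given by env (real branch only)."""
    if isinstance(term, One):
        return 1.0
    if isinstance(term, Var):
        return env[term.name]
    if isinstance(term, EML):
        return math.exp(evaluate(term.t, env)) - math.log(evaluate(term.s, env))
    raise TypeError(f"not an EML term: {term!r}")


# Since log(1) = 0, exp(x) itself is the EML term exp(x) - log(1):
exp_x = EML(Var("x"), One())
```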

Definition (Standard Elementary Function): The standard elementary functions $\mathcal{E}_n$ are the smallest class of multivalued analytic functions on domains in $\mathbb{C}^n$ containing the rational functions and closed under

  • arithmetic operations and composition,
  • exponentiation and logarithms,
  • algebraic adjunctions: if $P(Y)\in K[Y]$ is a polynomial whose coefficients lie in a previously constructed class $K$, then any local branch of a solution of $P(Y)=0$ is admitted.

What we will show is that the class of elementary functions defined this way is strictly larger than the class induced by EML terms.

Lemma: Every EML term has solvable monodromy group. In particular, if $f\in\mathcal T_n$ is algebraic over $\mathbb C(x_1,\dots,x_n)$, then its monodromy group is a finite solvable group.

Proof: We proceed by induction on the construction of the EML term. Constants and coordinate functions have trivial monodromy.

For the inductive step, suppose $f = \exp A-\log B$ with $A,B\in\mathcal T_n$, and assume that $\mathrm{Mon}(A)$ and $\mathrm{Mon}(B)$ are solvable. We argue in three steps.

Step 1: $\mathrm{Mon}(\exp A)$ is solvable. The germs of $\exp A$ are images under $\exp$ of the germs of $A$, with germs of $A$ differing by $2\pi i\mathbb Z$ collapsing to the same value. So there is a surjection $\mathrm{Mon}(A)\twoheadrightarrow\mathrm{Mon}(\exp A)$, and a quotient of a solvable group is solvable.

Step 2: $\mathrm{Mon}(\log B)$ is solvable. At a generic point $p$, germs of $\log B$ are parameterized by pairs $(b,k)$ where $b$ is a germ of $B$ at $p$ and $k\in\mathbb Z$ selects the branch of $\log$. A loop $\gamma$ acts by $$ (b,k)\mapsto\bigl(\rho_B(\gamma)(b), k+n(\gamma,b)\bigr), $$ where $\rho_B(\gamma)$ is the monodromy action of $\gamma$ on germs of $B$, and $n(\gamma,b)\in\mathbb Z$ is the winding number around $0$ of the analytic continuation of $b$ along $\gamma$. The projection $\mathrm{Mon}(\log B)\to\mathrm{Mon}(B)$ onto the first component is a surjective homomorphism. Its kernel consists of the elements of $\mathrm{Mon}(\log B)$ induced by loops $\gamma$ with $\rho_B(\gamma)=\mathrm{id}$, which then act only by integer shifts on the $k$-coordinate. Let $S_B$ be the set of germs of $B$ at $p$. For each $b\in S_B$, such a loop determines an integer shift $n(\gamma,b)$, so the kernel embeds in the direct product $\mathbb Z^{S_B}$. In particular, the kernel is abelian. Hence $\mathrm{Mon}(\log B)$ is an extension of $\mathrm{Mon}(B)$ by an abelian group, and extensions of solvable groups by abelian groups are solvable.

Step 3: $\mathrm{Mon}(f)$ is solvable. At a generic point, a germ of $f=\exp A-\log B$ is obtained by subtraction from a pair (germ of $\exp A$, germ of $\log B$), and analytic continuation acts componentwise on such pairs. This gives a surjection of $\pi_1$ onto some subgroup $$ H \le \mathrm{Mon}(\exp A)\times\mathrm{Mon}(\log B), $$ and, since $f$ is obtained from the pair by subtraction, this descends to a surjection $H\twoheadrightarrow\mathrm{Mon}(f)$. So $\mathrm{Mon}(f)$ is a quotient of a subgroup of a direct product of solvable groups, hence solvable.

The second statement of the lemma follows: an algebraic function has finitely many branches, so its monodromy group is finite, and by the first statement it is solvable; hence it is a finite solvable group. ∎

Remark. This is the core of Khovanskii’s topological Galois theory; see Topological Galois Theory: Solvability and Unsolvability of Equations in Finite Terms.

Theorem: $\mathcal T_n \subsetneq \mathcal E_n$.

Proof: $\mathcal E_n$ is closed under algebraic adjunction, so any local branch of an algebraic function is elementary. In particular, a branch of a root of the generic quintic $$ f^5+a_1f^4+a_2f^3+a_3f^2+a_4f+a_5=0 $$ is elementary.

Suppose for contradiction that at some point $p$ a germ of a branch of this root agrees with a germ of an EML term $T$. By uniqueness of analytic continuation, the Riemann surfaces obtained by maximally continuing these two germs coincide, so in particular their monodromy groups coincide. The monodromy group of the generic quintic is $S_5$, which is not solvable. But by the lemma, the monodromy group of any EML term is solvable. Contradiction.

Hence $\mathcal T_n$ is a strict subset of $\mathcal E_n$. ∎
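
The group-theoretic facts used in the proof can be double-checked mechanically. A small sketch using SymPy (an illustration only, not part of the proof):

```python
# Illustration: verify the solvability facts used above with SymPy.
from sympy.combinatorics.named_groups import SymmetricGroup

s5 = SymmetricGroup(5)  # monodromy group of the generic quintic
s4 = SymmetricGroup(4)  # for contrast: the generic quartic

print(s5.is_solvable)  # False: no EML term can branch like the quintic
print(s4.is_solvable)  # True
```

This is consistent with the classical Abel–Ruffini picture: degrees up to 4 admit radical (hence elementary) root formulas, while degree 5 does not.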

Edit (15 April 2026): This article used to have an example proving that the real and complex absolute value cannot be expressed over their entire domain as EML terms under the conventional definition of $\log$. I wrote it to emphasize that Odrzywołek’s approach required mathematical “patching” in order to work as intended. However, it ended up more distracting than illuminating, and was tangential to the point about the definition of “elementary”, so it has been removed.

00:56

Papal, See? – DORK TOWER 13.04.26 [Dork Tower]

Most DORK TOWER strips are now available as signed, high-quality prints, from just $25!  CLICK HERE to find out more!

HEY! Want to help keep DORK TOWER going? Then consider joining the DORK TOWER Patreon and ENLIST IN THE ARMY OF DORKNESS TODAY! (We have COOKIES!) (And SWAG!) (And GRATITUDE!)

Tuesday, 14 April

23:42

Urgent: Censure bully for threatening reporters [Richard Stallman's Political Notes]

US citizens: call on Congress to censure the bully for threatening reporters with treason charges.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Restore PBS and NPR funding [Richard Stallman's Political Notes]

US citizens: call on Congress to restore PBS and NPR funding.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Don't cut funds for Americans' medicine [Richard Stallman's Political Notes]

US citizens: call on Congress not to cut funds for Americans' medicine for the sake of unjustified war.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Reject budget packages that slash basic needs programs [Richard Stallman's Political Notes]

US citizens: call on Congress to reject any budget package that slashes basic needs programs to give additional billions to deportation and war.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Protect Jewish employees from federal persecution [Richard Stallman's Political Notes]

US citizens: call on universities to stand with the University of Pennsylvania to protect Jewish employees from federal persecution.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

Magats don't actually care about Jews, but they find "stopping antisemitism" a convenient excuse to persecute people who speak up for the rights of Palestinians. However, threatening Jews in the name of "stopping antisemitism" is even more perverse.

Urgent: Reject wrecker's military budget [Richard Stallman's Political Notes]

US citizens: call on Congress to reject the wrecker's proposed $1.5 trillion military budget.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Fight Republican schemes to cut funds for medical care [Richard Stallman's Political Notes]

US citizens: call on your congresscritter and senators to fight any Republican scheme to cut funds for medical care or boost funds for war with Iran.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Ban junk fees for rental housing [Richard Stallman's Political Notes]

US citizens: call on the Federal Trade Commission to ban junk fees for rental housing.

In my letter, I also said that would-be renters should not be required to use any web site in the process of seeking, accepting, occupying and paying for the rental. Those web sites are usually malicious, since they run nonfree software in the user's browser. And they do various sorts of snooping.

By raising this issue in your letter, you will support software freedom.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

Urgent: Expand investigation of torture of prisoners [Richard Stallman's Political Notes]

US citizens: call on your congresscritter and senators to expand the investigation of torture of prisoners to all deportation prisons.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Pass Fossil-Free Insurers Act [Richard Stallman's Political Notes]

US citizens: call on your state legislators to pass the Fossil-Free Insurers Act.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

23:21

Patch Tuesday, April 2026 Edition [Krebs on Security]

Microsoft today pushed software updates to fix a staggering 167 security vulnerabilities in its Windows operating systems and related software, including a SharePoint Server zero-day and a publicly disclosed weakness in Windows Defender dubbed “BlueHammer.” Separately, Google Chrome fixed its fourth zero-day of 2026, and an emergency update for Adobe Reader nixes an actively exploited flaw that can lead to remote code execution.

A picture of a windows laptop in its updating stage, saying do not turn off the computer.

Redmond warns that attackers are already targeting CVE-2026-32201, a vulnerability in Microsoft SharePoint Server that allows attackers to spoof trusted content or interfaces over a network.

Mike Walters, president and co-founder of Action1, said CVE-2026-32201 can be used to deceive employees, partners, or customers by presenting falsified information within trusted SharePoint environments.

“This CVE can enable phishing attacks, unauthorized data manipulation, or social engineering campaigns that lead to further compromise,” Walters said. “The presence of active exploitation significantly increases organizational risk.”

Microsoft also addressed BlueHammer (CVE-2026-33825), a privilege escalation bug in Windows Defender. According to BleepingComputer, the researcher who discovered the flaw published exploit code for it after notifying Microsoft and growing exasperated with their response. Will Dormann, senior principal vulnerability analyst at Tharros, says he confirmed that the public BlueHammer exploit code no longer works after installing today’s patches.

Satnam Narang, senior staff research engineer at Tenable, said April marks the second-biggest Patch Tuesday ever for Microsoft. Narang also said there are indications that a zero-day flaw Adobe patched in an emergency update on April 11 — CVE-2026-34621 — has seen active exploitation since at least November 2025.

Adam Barnett, lead software engineer at Rapid7, called the patch total from Microsoft today “a new record in that category” because it includes nearly 60 browser vulnerabilities. Barnett said it might be tempting to imagine that this sudden spike was tied to the buzz around the announcement a week ago today of Project Glasswing — a much-hyped but still unreleased new AI capability from Anthropic that is reportedly quite good at finding bugs in a vast array of software.

But he notes that Microsoft Edge is based on the Chromium engine, and the Chromium maintainers acknowledge a wide range of researchers for the vulnerabilities which Microsoft republished last Friday.

“A safe conclusion is that this increase in volume is driven by ever-expanding AI capabilities,” Barnett said. “We should expect to see further increases in vulnerability reporting volume as the impact of AI models extend further, both in terms of capability and availability.”

Finally, no matter what browser you use to surf the web, it’s important to completely close out and restart the browser periodically. This is really easy to put off (especially if you have a bajillion tabs open at any time) but it’s the only way to ensure that any available updates get installed. For example, a Google Chrome update released earlier this month fixed 21 security holes, including the high-severity zero-day flaw CVE-2026-5281.

For a clickable, per-patch breakdown, check out the SANS Internet Storm Center Patch Tuesday roundup. Running into problems applying any of these updates? Leave a note about it in the comments below and there’s a decent chance someone here will pipe in with a solution.

22:49

Microsoft isn’t removing Copilot from Windows 11, it’s just renaming it [OSnews]

A few weeks ago, Microsoft made some concrete promises about fixing and improving Windows, and among them was removing useless “AI” integrations. Applications like Notepad, Snipping Tool, and others would see their “AI” features removed. Well, it turns out Microsoft employs a very fringe definition of the concept.

Microsoft seems to have stripped away mentions of the “Copilot” brand in the Windows Insider version of the Notepad app. The Copilot button in the toolbar is gone, and instead, you’ll find a writing icon which will present you AI-powered writing assistance, such as rewrite, summarize, tone modification, format configuration, and more. Additionally, “AI features” in Notepad settings has been renamed to “Advanced features” and it allows users to toggle off AI capabilities within the app.

↫ Usama Jawad at Neowin

If the recent changes to Notepad are any indication, Microsoft is, actually, not at all “reducing unnecessary Copilot entry points”, as they worded it, but is merely going to rename these features so they aren’t so ostentatiously present. At least, that seems to be the plan for Notepad, and we’ll have to see if they have the same plans for the other applications. I mean, they have to push “AI” or look like fools.

I just don’t understand how a company like Microsoft can be so utterly terrible at communication. While I personally would want all “AI” features yeeted straight from Windows, I’m sure a ton of people are just fine with the features being less in-your-face and stuffed inside a normal menu alongside all the other normal features. They could’ve just been honest about their intentions, and it would’ve been so much better.

Like virtually every other technology company, Microsoft just seems incapable of not lying.

21:14

Why was there a red telephone at every receptionist desk? [The Old New Thing]

Some time ago, I noted that there was a walkthrough of the original Microsoft Building 3. If you go behind the receptionist desk, you’ll see a telephone at the receptionist’s station, but off to the side, there was also a red telephone resting between a tape dispenser and a small pamphlet labelled “Quick Reference Guide”.

Red telephone on side table

What was this red telephone for? Was it a direct line to Bill Gates’s office? Or maybe it was a direct line to Security?

Nope.

It was just a plain telephone.

And that’s what made it special.

As is customary at large companies, the telephones on the Microsoft campus were part of a corporate PBX (private branch exchange). A PBX is a private telephone system within a company; companies use them to save on telephone costs, as well as to provide auxiliary telephone services. For example, you could call another office by dialing just the extension, and the call would be routed entirely within the PBX without touching the public telephone network. Since most calls are from one office to another, a PBX saves considerable money by reducing demand for outside communications services. A PBX also allows integration with other systems: for example, if somebody leaves you a voicemail, the system can email you a message.

But what if the PBX is down, and there is an emergency?

The red telephones are plain telephones with standard telephone service. They are not part of the PBX and therefore operate normally even if there is a PBX outage. If there is an emergency, the receptionist can use the red telephone to call emergency services. Presumably, each red telephone was registered in the telephone system with the address of its building, allowing emergency services to dispatch assistance quickly.

Bonus chatter: What was the “Quick Reference Guide”? It was a guide to emergency procedures. It makes sense that it was kept next to the emergency telephone.

Bonus bonus chatter: Bill Gates kept a red telephone in his own office as well. If the PBX went down, I guess it was technically true that the red telephones could be used to call Bill Gates’s office.

The post Why was there a red telephone at every receptionist desk? appeared first on The Old New Thing.

20:21

Zig 0.16.0 released [LWN.net]

The Zig project has announced version 0.16.0 of the Zig programming language.

This release features 8 months of work: changes from 244 different contributors, spread among 1183 commits.

Perhaps most notably, this release debuts I/O as an Interface, but don't sleep on the Language Changes or enhancements to the Compiler, Build System, Linker, Fuzzer, and Toolchain which are also included in this release.

LWN last covered Zig in December 2025.

18:49

Link [Scripting News]

A glitch in the matrix. The app that keeps daveverse.org in sync with scripting.com has been offline since Friday, so I'm republishing all the posts since then. They will all appear to have been posted today on daveverse. As they say -- still diggin!

17:21

Link [Scripting News]

I updated sally.scripting.com to support https, and updated it with posts from scripting.com in 2023-2026. I was using it as an example of prior art of user interface for Claude. I figured restoring this app on my own would be penance for believing that Claude was anywhere near as smart as I am. Not even close. Not today at least. Grrr.

Link [Scripting News]

I've come to the conclusion, perhaps temporarily, that Claude can't work on a programming project with an experienced developer. It doesn't check its work. It'll think it's found the problem, make a change, or worse, cause you to do a lot of work so it can make a change. It doesn't use the information it gives you, and can't even remember what was in a bug report less than one screen above. I could have done the work I coached it through this morning, with a thoroughly inadequate result, in an hour at most. At least today it couldn't learn from prior art, and couldn't follow basic instructions. It's weird though, because I'm really surprised how little it knows about the scientific method, or how little it has been trained in how to work with others. I seem to recall situations where it was extremely good at reading code. Not a totally wasted session; let's see what I can learn from it.

[$] Tagging music with MusicBrainz Picard [LWN.net]

Part of the "fun" that comes with curating a self-hosted music library is tagging music so that it has accurate and uniform metadata, such as the band names, album titles, cover images, and so on. This can be a tedious endeavor, but there are quite a few open-source tools to make this process easier. One of the best, or at least my favorite, is MusicBrainz Picard. It is a cross-platform music-tagging application that pulls information from the well-curated, crowdsourced MusicBrainz database project and writes it to almost any audio file format.

OpenSSL 4.0.0 released [LWN.net]

Version 4.0.0 of the OpenSSL cryptographic library has been released. This release includes support for a number of new cryptographic algorithms and has a number of incompatible changes as well; see the announcement for the details.

Steinar H. Gunderson: Looking for work [Planet Debian]

It seems my own plans and life's plans diverged this spring, so I am in the market for a new job. So if you're looking for someone with a long track record making your code go brrr really fast, give me a ping (contact information at my homepage). Working from Oslo (on-site or remote), CV available upon request. No AI boosterism or cryptocurrency grifters, please :-)

Upcoming Speaking Engagements [Schneier on Security]

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

16:35

Dirk Eddelbuettel: anytime 0.3.13 on CRAN: Mostly Minor Bugfix [Planet Debian]

A maintenance release 0.3.13 of the anytime package arrived on CRAN today, sticking with the roughly yearly schedule we have now. Binaries for r2u have been built already. The package is fairly feature-complete, and code and functionality remain mature and stable.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … input format to either POSIXct (when called as anytime) or Date objects (when called as anydate) – and to do so without requiring a format string as well as accommodating different formats in one input vector. See the anytime page, the GitHub repo for a few examples, the nice pdf vignette, and the beautiful documentation site for all documentation.

This release was triggered by a bizarre bug seen on elementary OS 8. For “reasons”, anytime was taking note on startup of where it runs, using a small and simple piece of code that reads /etc/os-release when it exists. We assumed sane content, but this particular operating system release managed to have a duplicate entry, throwing a spanner in the works. So now this code is robust to duplicates, and is no longer executed on each startup but “as needed”, which is a net improvement. We also switched the vignette to being deployed by the new Rcpp::asis() driver.
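
The duplicate-entry pitfall is easy to reproduce. Here is a hedged sketch (in Python, not the package's actual code) of an /etc/os-release parser that tolerates duplicate keys by keeping the first occurrence:

```python
def parse_os_release(text):
    """Parse os-release style KEY=VALUE lines, ignoring comments,
    blank lines, and duplicate keys (first occurrence wins)."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        if key not in info:  # be robust to duplicated entries
            info[key] = value.strip().strip('"')
    return info

sample = 'NAME="elementary OS"\nID=elementary\nID=elementary\n'
print(parse_os_release(sample)["ID"])  # elementary
```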

The short list of changes follows.

Changes in anytime version 0.3.13 (2026-04-14)

  • Continuous integration has received minor updates

  • The vignette now uses the Rcpp::asis() driver, and references have been refreshed

  • Stateful 'where are we running' detection is now more robust, and has been moved from running on each startup to a cached 'as needed' case

Courtesy of my CRANberries, there is also a diffstat report of changes relative to the previous release. The issue tracker off the GitHub repo can be used for questions and comments. More information about the package is at the package page, the GitHub repo, in the vignette, and at the documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

14:21

Security updates for Tuesday [LWN.net]

Security updates have been issued by Debian (gdk-pixbuf, gst-plugins-bad1.0, and xdg-dbus-proxy), Fedora (chromium, deepin-image-viewer, dtk6gui, dtkgui, efl, elementary-photos, entangle, flatpak, freeimage, geeqie, gegl04, gthumb, ImageMagick, kf5-kimageformats, kf5-libkdcraw, kf6-kimageformats, kstars, libkdcraw, libpasraw, LibRaw, luminance-hdr, nomacs, OpenImageIO, OpenImageIO2.5, photoqt, python-cryptography, rawtherapee, shotwell, siril, swayimg, vips, and webkitgtk), Red Hat (firefox and podman), Slackware (libarchive), SUSE (expat, glibc, GraphicsMagick, libcap-devel, libpng16, libtpms, nodejs24, openssl-1_0_0, openssl-1_1, openssl-3, openvswitch, polkit, python-requests, python311-biopython, python312, python39, and tigervnc), and Ubuntu (corosync, kvmtool, libxml-parser-perl, linux-azure, linux-azure, linux-azure-6.17, linux-azure, linux-azure-6.8, policykit-1, redis, lua5.1, lua-cjson, lua-bitop, rustc, vim, and xdg-dbus-proxy).

14:14

Petter Reinholdtsen: Talking to the Computer, and Getting Some Nonsense Back... [Planet Debian]

At last, I can run my own large language model artificial idiocy generator at home on a Debian testing host using Debian packages directly from the Debian archive. After months of polishing the llama.cpp, whisper.cpp and ggml packages, and their dependencies, I was very happy to see today that they all entered Debian testing this morning. Several release-critical issues in dependencies have been blocking the migration for the last few weeks, and now finally the last one of these has been fixed. I would like to extend a big thanks to everyone involved in making this happen.

I've been running home-built editions of whisper.cpp and llama.cpp packages for a while now, first building from the upstream Git repository and later, as the Debian packaging progressed, from the relevant Salsa Git repositories for the ROCm packages, GGML, whisper.cpp and llama.cpp. The only snag with the official Debian packages is that the JavaScript chat client web pages are slightly broken in my setup, where I use a reverse proxy to make my home server visible on the public Internet while the included web pages only want to communicate with localhost / 127.0.0.1. I suspect it might be simple to fix by making the JavaScript code dynamically look up the URL of the current page and use that to determine where to find the API service, but until someone fixes BTS report #1128381, I just have to edit /usr/share/llama.cpp-tools/llama-server/themes/simplechat/simplechat.js every time I upgrade the package. I start my server like this on my machine with a nice AMD GPU (donated to me as a Debian developer by AMD two years ago, thank you very much):

  LC_ALL=C llama-server \
    -ngl 256  \
    -c $(( 42 * 1024)) \
    --temp 0.7 \
    --repeat_penalty 1.1 \
    -n -1 \
    -m Qwen3-Coder-30B-A3B-Instruct-Q5_K_S.gguf

It only takes a few minutes to load the model for the first time and prepare a nice API server for me at https://my.reverse.proxy.example.com:8080/v1/, available for all the API clients I care to test (note: this sets up the server without authentication; use a reverse proxy with authentication if you need it). I switch models regularly to test different new ones; the Qwen3-Coder one just happens to be the one I use at the moment. Perhaps these packages are something for you to have fun with too?
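
For clients that don't want a browser at all, the /v1/ endpoint speaks the OpenAI-compatible chat-completions protocol that llama-server implements. A minimal sketch (the URL is the example endpoint above; actually sending the request requires the server to be running, so only the payload construction is exercised here):

```python
import json
import urllib.request

# Example endpoint from the post; replace with your own reverse proxy.
API_URL = "https://my.reverse.proxy.example.com:8080/v1/chat/completions"

def build_chat_request(prompt, temperature=0.7):
    """Build the JSON body for an OpenAI-style chat completion call.
    llama-server serves whatever model was loaded with -m, so no
    model name needs to be selected here."""
    return {
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

# To actually send it (requires the server to be up):
# req = urllib.request.Request(
#     API_URL,
#     data=json.dumps(build_chat_request("Hello!")).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```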

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

13:56

A Hole in Your Plan [The Daily WTF]

Theresa works for a company that handles a fair bit of personally identifiable information that can be tied to health care data, so for them, security matters. They need to comply with security practices laid out by a variety of standards bodies and be able to demonstrate that compliance.

There's a dirty secret about standards compliance, though. Most of these standards are trying to avoid being overly technically prescriptive. So frequently, they may have something like, "a process must exist for securely destroying storage devices before they are disposed of." Maybe it will include some examples of what you could do to meet this standard, but the important thing is that you have to have a process. This means that if you whip up a Word document called "Secure Data Destruction Process" and tell people they should follow it, you can check off that box on your compliance. Sometimes, you need to validate the process; sometimes you need to have other processes which ensure this process is being followed. What you need to do and to what complexity depends on the compliance structure you're beholden to. Some of them are surprisingly flexible, which is a polite way of saying "mostly meaningless".

Theresa's company has a process for safely destroying hard drives. They even validated it, shortly after its introduction. They even have someone who checks that the process has been followed. The process is this: in the basement, someone set up a cheap drill press, and attached a wooden jig to it. You slap the hard drive in the jig, turn on the drill, and brrrrzzzzzz- poke a hole through the platters making the drive unreadable.

There's just one problem with that process: the company recently switched to using SSDs. The SSDs are in a carrier which makes them share the same form factor as old-style spinning disk drives, but that's just a thin plastic shell. The actual electronics package where the data is stored is quite small. Small enough, and located in a position where the little jig attached to the drill guarantees that the drill won't even touch the SSD at all.

For months now, whenever a drive got decommissioned, the IT drone responsible for punching a hole through it has just been drilling through plastic, and nothing else. An unknown quantity of hard drives have been sent out for recycling with PII and health data on them. But it's okay, because the process was followed.

The compliance team at the company will update the process, probably after six months of meetings and planning and approvals from all of the stakeholders. Though it may take longer to glue together a new jig for the SSDs.


12:07

How Hackers Are Thinking About AI [Schneier on Security]

Interesting paper: “What hackers talk about when they talk about AI: Early-stage diffusion of a cybercrime innovation.

Abstract: The rapid expansion of artificial intelligence (AI) is raising concerns about its potential to transform cybercrime. Beyond empowering novice offenders, AI stands to intensify the scale and sophistication of attacks by seasoned cybercriminals. This paper examines the evolving relationship between cybercriminals and AI using a unique dataset from a cyber threat intelligence platform. Analyzing more than 160 cybercrime forum conversations collected over seven months, our research reveals how cybercriminals understand AI and discuss how they can exploit its capabilities. Their exchanges reflect growing curiosity about AI’s criminal applications through legal tools and dedicated criminal tools, but also doubts and anxieties about AI’s effectiveness and its effects on their business models and operational security. The study documents attempts to misuse legitimate AI tools and develop bespoke models tailored for illicit purposes. Combining the diffusion of innovation framework with thematic analysis, the paper provides an in-depth view of emerging AI-enabled cybercrime and offers practical insights for law enforcement and policymakers.

11:07

Russell Coker: Furilabs FLX1s Finally Working [Planet Debian]

I’ve been using the Furilabs FLX1s phone [1] as my daily driver for 6 weeks, it’s a decent phone, not as good as I hoped but good enough to use every day and rely on for phone calls about job interviews etc. I intend to keep using it as my main phone and as a platform to improve phone software in Debian as you really can’t effectively find bugs unless you use the platform for important tasks.

Support Problems

I previously wrote about the phone after I received it without a SIM caddy on the 13th of Jan. I had a saga with support about this: on the 16th of Jan one support person said that they would ship it immediately, but didn’t provide a tracking number or any indication of when it would arrive. On the 5th of Feb I contacted support again and asked how long it would be; the new support person seemed to have no record of my previous communication but said that they would send it. On the 17th of Feb I made another support request, including asking for a way to communicate directly, as the support email came from an address that wouldn’t accept replies; I was asked for a photo showing where the problem was. The support person also said that they might have to send a replacement phone!

The last support request I sent included my disappointment at the time taken to resolve the issue and the proposed solution of replacing the entire phone (why have two international shipments of a fragile and expensive phone when a single letter with a cheap SIM caddy would do?). I didn’t receive a reply but the SIM caddy arrived on the 2nd of Mar. Here is a pic of the SIM caddy and the package it came in:

One thing that should be noted is that some of the support people seemed to be very good at their jobs and they were all friendly. It was the system that failed here, turning a minor issue of a missing part into a 6 week saga.

Furilabs needs to do the following to address this issue:

  1. Make it possible to reply directly to a message from a support person. Accept email with a custom subject to sort it, give a URL for a web form, anything. Collating discussions with a customer allows giving better support while taking less time for the support people.
  2. Have someone monitor every social media address that is used by the company. When someone sends a support request in a public Mastodon post it indicates that something has gone wrong and you want to move quickly to resolve it.
  3. Take care of the little things, like sending a tracking number for every parcel. If it’s something too small for a parcel (the SIM caddy could have fit in a regular letter) then just tell the customer what date it was posted and where it was posted from so they have some idea of when it will arrive.

This is not just a single failure of Furilabs support, it’s a systemic failure of their processes.

Problems I Will Fix – Unless Someone Beats Me to it

Here are some issues I plan to work on.

Smart Watch Support

I need to port one of the smart watch programs to Debian. Also I want to make one of them support the Colmi P80 [2].

A smart watch significantly increases the utility of a phone even though IMHO they aren’t doing nearly all the things that they could and should do. When we get Debian programs talking to the PineTime it will make a good platform for development of new smart phone and OS features.

Nextcloud

I have ongoing issues with my test Nextcloud installation on a Debian VM not allowing connections from the Linux desktop app (as packaged in Debian) or from the Android client (from F-Droid). The desktop client works with a friend’s Nextcloud installation on Ubuntu, so I may try running Nextcloud on an Ubuntu VM while waiting for the Debian issue to be resolved. A bug recently fixed in Nextcloud appears related, so maybe the next release will fix it.

For the moment I’ve been running without these features; I call and SMS people by knowing their numbers or just returning calls. Phone calls generally aren’t very useful for me nowadays except when applying for jobs. If I could deal with recruiters and hiring managers via video calls then I would consider just not having a phone number.

Wifi IPv6

Periodically IPv6 support just stops working and I can’t ping the gateway; turning wifi off and on again fixes it. This might be an issue with my wifi network configuration or with the way I have set up IPv6 networking, although the problem doesn’t happen with any of my laptops.

Chatty Sorting

Chatty is the SMS program installed by default (part of the phosh/phoc setup); it also does Jabber. Version 0.8.7 is installed, which apparently has some Furios modifications, and it doesn’t properly sort SMS/Jabber conversations. Version 0.8.9 from Debian sorts the way most SMS and Jabber programs do, with the most recent conversation at the top, but the Debian version doesn’t support Jabber (only SMS and Matrix). When I went back to the Furilabs version of Chatty it sorted correctly for a while but then suddenly stopped. Killing Chatty (not just closing and reopening the window) sometimes makes it sort the conversations again.

Problems for Others to Fix

Here are the current issues I have starting with the most important.

Important

The following issues seriously reduce the usability of the device.

Hotspot

The Wifi hotspot functionality wasn’t working for a few weeks; this Gitlab issue seems to match it [3]. It started working correctly for a day and I was not sure whether an update I applied fixed the bug or whether it’s some sort of race condition that happened to work for that boot. Later reboots showed that it’s somewhat random whether it works or not.

Also, even when it was mostly working, it seemed to stop about every 25 minutes and I had to turn it off and on again to get it going.

On another day it got into a state of repeated packet loss when I pinged the phone from my laptop while using it as a hotspot: a pattern of 3 ping responses followed by 3 “Destination Host Unreachable” messages, often repeated.

I don’t know if this is related to the way Android software is run in a container to access the hardware.

4G Reliability

Sometimes 4G connectivity just stops. Sometimes I can stop and restart the 4G data through software to fix it, and sometimes I need to use the hardware switch. I haven’t noticed this for a week or two, so there is a possibility that one fix addressed both the hotspot and the 4G issues.

One thing that I will do is set up monitoring to give an alert on the phone if it can’t connect to the Internet. I don’t want it to just quietly stop doing networking stuff and not tell me!
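A minimal sketch of such a check, run every few minutes from cron or a systemd timer (the target address and the use of notify-send are my assumptions; a sound or vibration could be triggered instead):

```shell
#!/bin/sh
# Hypothetical connectivity watchdog for the phone.
TARGET=9.9.9.9   # any reliable external address will do

net_ok() {
    # succeeds (exit 0) if a single ping to $1 gets a reply within 2 seconds
    ping -c 1 -W 2 "$1" > /dev/null 2>&1
}

if ! net_ok "$TARGET"; then
    # raise a notification banner; ignore failure if no daemon is running
    notify-send -u critical "Network down" "Cannot reach $TARGET" || true
fi
```

A more thorough version could also distinguish “wifi gateway reachable but no Internet” from “no link at all” by pinging the gateway first.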

On-screen Keyboard

The compatibility issues of the GNOME and KDE on-screen keyboards are getting to me. I use phosh/phoc as the login environment as I want to stick to defaults at first, to not make things any more difficult than they need to be. When I use QT programs such as Nheko, the keyboard doesn’t always appear when it should, and it forgets the setting for “word completion” (which actually means spelling correction).

The spelling correction system doesn’t suggest replacing “dont” with “don’t”, which is really annoying, as a major advantage of spelling checkers on touch screens is inserting apostrophes. An apostrophe takes at least 3 times longer to type than a regular character, and saving that delay makes a difference to typing speed.

The spelling correction doesn’t correct two words run together.

Medium Priority

These issues are ongoing annoyances.

Delay on Power Button

In the best case scenario this phone responds much more slowly to pressing the power button than the Android phones I tested (Huawei Mate 10 Pro and Samsung Galaxy Note 9), and much more slowly than my recollection of the vast majority of Android phones I’ve ever used. When testing by pressing the buttons on two phones simultaneously, the Android phone screens lit up much sooner – something like 200ms vs 600ms. I don’t have a good setup to time these things but the difference is very obvious.

In a less common scenario (the phone having been unused for some time) the response can be something like 5 seconds. The worst case is something in excess of 20 seconds.

For UI designers, if you get multiple press events from a button that can turn the screen on/off please make your UI leave the screen on and ignore all the stacked events. Having the screen start turning on and off repeatedly when the phone recovers and processes all the button presses isn’t good, especially when each screen flash takes half a second.

Notifications

Touching on a notification for a program often doesn’t bring it to the foreground. I haven’t yet found a connection between when it does and when it doesn’t.

Also the lack of icons in the top bar of the screen to indicate notifications is annoying, but that seems to be an issue of design rather than implementation.

Charge Delay

When I connect the phone to a power source there is a delay of about 22 seconds before it starts to charge. Missing 22 seconds of charge time is no big deal, but having to wait 22 seconds to be sure it’s charging before leaving it is really annoying. Also the phone makes an audible alert when it gets to 0% charge, which woke me up one night when I had failed to push the USB-C connector in hard enough. This phone requires the connector to be inserted slightly deeper than most phones, so with some plugs it’s easy to not quite insert them far enough.

Torch aka Flash

The light for the “torch”, or camera flash, is not bright at all. In a quick test, staring into the light from 40cm away wasn’t unpleasant; by comparison my Huawei Mate 10 Pro has a light bright enough that it hurts to look at it from 4 meters away.

Because of this, photos at night are not viable, not even of something less than a meter away.

The torch has a brightness setting which doesn’t seem to change the brightness, so it seems likely that this is a software issue: the brightness is set at a low level and the software isn’t changing it.

Audio

When I connect to my car the Lollypop player starts playing before the phone directs audio to the car, so the music starts coming from the phone for about a second. This is an annoying cosmetic error. Sometimes audio playing pauses for no apparent reason.

It doesn’t support the Bluetooth phone (hands-free) profile, so phone calls can’t go through the car audio system. Also it doesn’t always connect to my car when I start driving; sometimes I need to disable and enable Bluetooth to make it connect.

When I initially set the phone up, Lollypop would send the track name when playing music through my car (Nissan LEAF) Bluetooth connection. After an update that often doesn’t happen, so the car doesn’t display the track name or whether the music is playing, although the pause icon still works to pause and resume music (and sometimes the track name does come through).

About 30 seconds into a phone call it switches to hands-free mode while the icon to indicate hands-free is not highlighted, so I have to press the hands-free button twice to get it back to normal phone mode.

Low Priority

I could live with these things remaining as-is, but they are annoying.

Ticket Mode

There is apparently some code written to display tickets on screen without unlocking. I want to get this working and store screen-caps of the Android barcode screens of the different loyalty cards so I can scan them without unlocking. My threat model does not include someone trying to steal my phone to get a free loaf of bread on the bakery loyalty program.

Camera

The camera app works with both the back and front cameras, which is nice, and sadly, based on my experience with other Debian phones, that’s noteworthy. The problem is that it takes a long time to take a photo, something like a second after the button is pressed – long enough for you to think that it silently took the photo already and then move the phone.

The UI of the furios-camera app is also a little annoying. When viewing photos there is an icon at the bottom left of the screen for a video camera and an icon at the bottom right with a cross, which every time make me think “record videos” and “leave this screen”, not “return to taking photos” and “delete current photo”. I can get used to the surprising icons, but being so slow is a real problem.

GUI App Installation

The program for managing software doesn’t work very well. It said that two updates to Mesa packages were needed, but didn’t seem to want to install them; I ran “flatpak update” as root to fix that. The process of selecting software defaults to including non-free, and most of the available apps are for desktop/laptop use with no way to search for phone/tablet apps.

Generally I think it’s best to just avoid this and use apt and flatpak directly from the command-line. Being able to ssh to my phone from a desktop or laptop is good!

Android Emulation

The file /home/furios/.local/share/andromeda/data/system/uiderrors.txt is created by the Andromeda system, which runs Android apps in an LXC container, and appears to grow without end. After using the phone for a month it was 3.5G in size. The disk space usage isn’t directly a problem: out of the 110G of storage only 17G is used and I don’t have a need to put much else on it; even if I wanted to put backups of /home from my laptop on it when travelling, that would still leave plenty of free space. But that sort of thing is a problem for backing up the phone, and a log wasting 3.5G out of 110G is a fairly significant step towards eventually filling the entire filesystem.

Also having lots of logging messages from a subsystem that isn’t even being used is a bad sign.
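As a stopgap until the logging is fixed, a cron job could cap the file; a sketch (the path is from above, the 50MB cap is an arbitrary choice of mine):

```shell
#!/bin/sh
# Hypothetical stopgap: truncate a runaway log once it passes a size cap.
# Run from cron or a systemd timer; the 50MB cap is an arbitrary choice.
LOG=/home/furios/.local/share/andromeda/data/system/uiderrors.txt
CAP=52428800   # 50MB in bytes

trim_log() {
    # truncate file $1 in place if it is larger than $2 bytes;
    # ": >" keeps the same inode so the writer's open fd stays valid
    [ -f "$1" ] && [ "$(stat -c %s "$1")" -gt "$2" ] && : > "$1"
    return 0
}

trim_log "$LOG" "$CAP"
```

Truncating in place (rather than deleting the file) matters here because the process inside the container keeps the file open and would otherwise carry on writing to an invisible unlinked inode.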

I just tried using it and it doesn’t start from either the settings menu or the F-Droid icon. Android isn’t that important to me as I want to get away from the proprietary app space, so I won’t bother trying it any more.

Unfixable Problems

Unlocking

After getting used to fingerprint unlocking, going back to a password is a pain. I think that the hardware isn’t sufficient for modern quality face recognition that can’t be fooled by a photo, and there is no fingerprint hardware.

When I first used an Android phone, using a PIN to unlock didn’t seem like a big deal, but after getting used to fingerprint unlock it’s a real drag to go without. This is a real annoyance when doing things like checking Wikipedia while watching TV.

This phone would be significantly improved with a fingerprint sensor or a camera that worked well enough for face unlock.

Plasma Mobile

According to Reddit, Plasma Mobile (KDE for phones) doesn’t support Halium and so can never work on this phone [4]. This is one of a number of potential issues with the phone: running on hardware that was never designed for open OSs is always going to have issues.

Wifi MAC Address

The MAC address keeps changing on reboot so I can’t assign a permanent IPv4 address to the phone. It appears from the MAC prefix of 00:08:22 that the network hardware is made by InPro Comm, which is well known for using random addresses in the products it OEMs. They apparently have one allocation of 2^24 addresses and each device randomly chooses a MAC from that range on boot.

In the settings for a Wifi connection the “Identity” tab has a field named “Cloned Address” which can be set to “Stable for SSID”; that prevents the MAC from changing and allows a static IP address allocation from DHCP. It’s not ideal but it works.

Network Manager can also be configured to use a permanently assigned MAC address for all connections or for just some connections. In the past I have copied MAC addresses from ethernet devices that were being discarded and reused them for this sort of thing. For the moment the “Stable for SSID” setting does what I need, but I will consider setting a permanent address at some future time.
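For reference, the same configuration can be done from the command line with nmcli; a sketch of the config commands, assuming a recent NetworkManager (“stable-ssid” needs 1.46 or later) and a connection named “home” – the fixed MAC shown is a made-up locally-administered address:

```shell
# Equivalent of the GUI's "Stable for SSID" setting:
nmcli connection modify home 802-11-wireless.cloned-mac-address stable-ssid

# Or pin one fixed, locally-administered MAC for this connection:
nmcli connection modify home 802-11-wireless.cloned-mac-address 02:12:34:56:78:9a

# Re-activate the connection to apply the change:
nmcli connection up home
```

A locally-administered address (second bit of the first octet set, e.g. the 02: prefix) avoids colliding with any vendor-assigned MAC on the network.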

Docks

Having the ability to connect to a dock is really handy. The PinePhonePro and Librem5 support it and on the proprietary side a lot of Samsung devices do it with a special desktop GUI named Dex and some Huawei devices also have a desktop version of the GUI. It’s unfortunate that this phone can’t do it.

The Good Things

It’s good to be able to ssh in to my phone; even if the on-screen keyboard worked as well as the Android ones, it would still be a major pain to use compared to a real keyboard. The phone doesn’t support connecting to a dock (unlike Samsung phones I’ve used, for which I found Dex very useful with a 4K monitor and proper keyboard) so ssh is the best way to access it.

This phone has very reliable connections to my home wifi. I’ve had ssh sessions from my desktop to my phone that have remained open for multiple days. I don’t really need this, I’ve just forgotten to logout and noticed days later that the connection is still running. None of the other phones running Debian could do that.

Running the same OS on desktop and phone makes things easier to test and debug.

Having support for all the things that Linux distributions support is good. For example none of the Android music players support all the audio encodings that come from YouTube, so to play all of my music collection on Android I would need to transcode most of it, which means losing quality, wasting storage space, or both. Lollypop plays FLAC, mp3, m4a, mka, webm, ogg, and more.

Conclusion

This is a step towards where I want to go but it’s far from the end goal.

The PinePhonePro and Librem5 are more open hardware platforms which have some significant benefits. But the battery life issues make them unusable for me.

Running Mobian on a OnePlus 6 or Droidian on a Note 9 works well for the small tablet features, but without VoLTE. While the telcos have blocked phones without VoLTE, data devices still work, so if recruiters etc would stop requiring phone calls then I could make one of them an option.

The phone works well enough that it could potentially be used by one of my older relatives. If I could ssh in to my parents phones when they mess things up that would be convenient.

I’ve run this phone as my daily driver since the 3rd of March and it has worked reasonably well. 6 weeks compared to my previous use of the PinePhonePro for 3 days. This is the first time in 15 years that a non-Android phone has worked for me personally. I have briefly used an iPhone 7 for work which basically did what it needed to do, it was at the bottom of the pile of unused phones at work and I didn’t want to take a newer iPhone that could be used by someone who’s doing more than the occasional SMS or Slack message.

So this is better than it might have been, not as good as I hoped, but a decent platform to use while developing for it.

10:28

On pricing [Seth's Blog]

  • Pricing is an exchange of value. This for that.

    But price is also a story.
  • When you have competition, your story doesn’t have to justify the absolute price, simply the difference between you and the next-best option.
  • Price is based on value, not on the cost of production.
  • Thanks to Baumol’s cost disease, productivity doesn’t always correlate with the price of labor.
  • Luxury goods are worth more because they cost more. If you sell a luxury item, raise your price and then improve the story to make it a bargain.
  • It’s better to explain your fair price once than to apologize for low quality over and over.
  • Asking, “what’s your budget?” is lazy and selfish. Your job is to figure out what your client wants, what they’re afraid of, and what sort of story they are eager to buy.
  • When everything else is equal, we always want the cheapest option. But everything else is rarely equal.
  • When someone says, “that’s too expensive,” what they mean is that the story you’ve told them so far (and the reputation you’ve earned) doesn’t match the price you’re charging. You probably don’t need a lower price, but you might need to earn a better story.
  • “It might not be for you,” is almost always part of “we make the best (for someone).”
  • Bargains, sales and coupons are a sport and a narrative. They’re not just a discount, they create their own sort of value and expectation.
  • Convenience is often underappreciated as a component of value.
  • The customers you get because you are the cheapest are the first ones to leave when someone else is even cheaper.
  • The problem with racing to the bottom is that you might win.
  • The most resilient slogan you can earn is, “you’ll pay a bit more, but you’ll get more than you paid for.”

Review: eufyMake UV Printer E1 [RevK®'s ramblings]

The eufyMake UV Printer E1 is a desktop, UV cured, inkjet printer which can print full colour and white, on almost anything. [no AI in this blog].

Summary

  • This is an impressive printer, with vibrant colours and high resolution, which can print on almost anything.
  • Whilst it is a desktop printer, you need to allow for extra space around it, especially front and back, to make use of the full size print bed. This may make it too big for some worktops.
  • Whilst it can print on pretty much anything, which is impressive, the durability of the print varies massively depending on the material on which you print. You need to understand the limitations and extra steps you can take (sanding a surface first, etc).
  • The sticker mode using the laminator allows you to make stickers that stick well to smooth surfaces and allows printing on objects that would not fit (mirrors, cupboard doors, computer cases, etc).
  • The rotary attachment allows printing on tumblers and mugs, but again the durability is an issue, and prints may not be dishwasher safe.

Unboxing and set up

Warning, the box is heavy - not a one person lift. But it is easy to unbox. The step by step instructions for installing ink, cleaning unit, air filter, and so on are very clear. The calibration and testing takes a while, be patient. See unboxing video.

One of the key aspects is the space requirements. For the small bed you simply need space in front to open the front cover (as shown in image above). For the full size bed it claims to need 400mm front and back, which is a lot. It will not fit at the front of a typical 600mm work bench with enough space behind. It also claims to need 300mm left and right, which makes no sense. You have to be able to get to power/ethernet on left and cleaning unit on right, but there are no fans or vents, so this extra space seems unnecessary. In this case it will just fit sideways on a 600mm work bench with space either side for the full size print bed. Bear in mind you need to use the glasses when operating with covers open (i.e. full size print bed). As you can see from the image - mine is on a shelf on the floor now.

No mess

The instructions advise using gloves and glasses. They seem to imply a risk of some mess, but so far that has not been a problem. Maybe during some maintenance, or when changing something like ink, there is a risk of ink on hands, so gloves may only be needed then. In normal operation it seems very simple and clean.

eufyMake Studio, and app

It connects via WiFi or Ethernet and is set up from an app using bluetooth. Annoyingly you need to create an account, arg!

Do not make the mistake of installing the iPad app on a Mac. It works but is clunky at best. The eufyMake UV Studio is what you need and that works really well.

Just to explain, unlike a normal printer, this is not installed as a printer on the system. You have to use the app to print. The reasons for this should be pretty obvious - the printer can print not just in CMYK, but White and Gloss, and complex arrangements of print order and thickness. A normal printer driver cannot handle this. It also needs print positioning.

However, the app is very slick, and allows images to be dragged in, positioned, scaled and flipped with ease. It handles vector formats such as SVG which allows for very precise and high resolution printing. Note that the print quality setting does not default to High Quality for some reason, and is not even saved when you save a project, which is odd.

You can also add text, and set fonts, colour, size, rotation, etc.

There is a large library of artwork and graphics included, and apparently some AI thing which I have not touched.

Size of print area

The print area on the small bed is marked as 310mm by 90mm. On the large bed it is 310mm by 400mm. However it seems you can print up to 330mm by 420mm, so larger than A3!

However there is more to it: you can fit an object up to 100mm tall in the printer and print on top of it. Also, the top does not have to be flat – there is 5mm of clearance – i.e. I can print on the body of an iPhone case even though the ridges around the camera hole are raised above it.

There is also a film roll attachment for printing on a continuous film, allowing a much longer print than the 420mm limit of the large bed.

Print positioning

The first thing you do is put what you want on the printer bed, and have it take a picture. This shows on the app as an overhead view. It measures height for you, and shows the outline of the object on the print bed.

You then drag in images, or place text or artwork as you like. You can resize, rotate and flip as needed. It handles images with transparency: SVG is ideal, but PNG with transparency works just as well. You can see how your artwork fits on the object. You can zoom in and align precisely (the set up includes camera position calibration, so this is very good). You can add outlines or apply a cut out to a non transparent image, etc.

Types of print / ink setup

For each piece of artwork/text/etc you can set the way it is printed. The main thing is the layers of print: typically you apply white and then colour on top, usually around 3 layers of white and 1 of colour. You could do white, colour, and gloss, for example.

They have thought about this - you can do colour, white, colour for printing on glass for example, or just colour then white.

You can fully customise the layer sequence and numbers as well.

But that is all flat. You can do a “flat raised” print, which uses thicker colour to give a texture. You can also select textures!

Textures

You can select a textured effect and there is a library of textures, like crocodile skin, etc. Obviously this uses more ink and takes longer to print.

Opacity

Normally the three layers of white are enough to give a white background for printing colour and white images. This works well on a consistent background, even if dark. But it is not totally opaque, so if printing over existing artwork you need more layers of white to make it properly opaque; you can set the number of layers though. Ideally, don’t print over existing artwork, or if you do, sticker mode may be best as stickers are very thick.

Print durability

Well, yes, but will it stay printed... There are a lot of things where the durability of print is not really an issue...

But this is where things get complicated. You can, indeed, print on almost anything, but it may not stick as well as you like. This was a tad disappointing, as I expected the cured ink to be much harder. It is actually somewhat rubbery and flexible. But how well it sticks depends on to what you are printing.

At one extreme, a coated (anodised?) titanium iPhone case is very smooth - you can wipe off the print. Even stickers will not stick at all - you simply cannot pull the backing off and leave the sticker!

Printing on textured surfaces is a lot better, as is printing on various bare metals. Printing on 3D resin prints works really well: the result is hard and not removable with a fingernail. In these cases a hard edge, knife, etc, can take the ink off, though if the surface is deeply textured it may be difficult to remove it totally. This is pretty durable though.

In between we have things like glass, which is a problem I go into in more detail below. The print sticks, and looks nice, but is not dishwasher safe by any means. There are some things that may improve this.

There are things you can do to help on surfaces that do not take the print well – one is sanding or etching the surface first. This is not always sensible, obviously. For the green key fob shown above, the plastic is shiny enough that the print is not very robust (it can come off with a fingernail). Sanding the whole side does not look wrong, so that can help the print adhere. Another trick is to coat the print in acrylic, but you do have to let it dry properly. I was hoping to find a hard spray-on coating, perhaps even UV cured, to test (suggestions welcome), but an acrylic resin based coating is probably good enough. Obviously that is not going to work on a glass.

Stickers

Stickers are fun. The optional laminator with a roll of B film, and sheets of DTF (Direct To Film) A paper, allow you to make stickers. See video.

The A film sheets are on a paper backing and have a protective layer you remove before printing. You are basically printing on a sheet of thin adhesive. You then put this through the laminator, which sticks a thick soft film, the B film, on top. Sticker mode is an ink print type set on the artwork (all artwork set the same) and prints a thick sticker on a white background.

You can remove the paper backing and apply the sticker to any surface. The film is a bit stretchy, which helps. You have to be careful to align it as there is really no going back and re-positioning. You then rub it down and peel off the thick film, leaving a sticker.

This is ideal for cases where you want artwork on something that cannot go through the printer, e.g. cupboard doors, windows, mirrors, signs, etc. Obviously great for laptops.

This also works on shiny surfaces that don't print well directly. However, there is a caveat here for glass. Yes, the sticker sticks well and is robust, but it is not dishwasher safe as the heat melts the adhesive!

To be clear, this is *MY* mirror, not someone else's

Other goodies

It would appear one can get a different flexible white ink and foil (gold/silver, etc) and heat press selectively or use the laminator to apply foil to a design. Impressive.

Glass

The rotary attachment lets you print on tumblers and cups. The system is pretty slick - it measures it all for you and makes it easy to place artwork.

The result looks pretty good, but can be removed with a fingernail, and is not dishwasher safe. Yes, the AI summary for the printer says hard, scratch resistant, and dishwasher safe – it is clearly not that simple!

Proposed techniques to fix this include a bonding agent to lightly etch the glass first. Of course the only stuff I could find was a primer that is thick and grey; leaving it 24 hours and cleaning it off did not leave the surface looking etched, which is good cosmetically but may mean it does not work. It is messy and slow. It did not work! See the after image...

After dishwasher
Before dishwasher

Another proposed technique is flame treating the glass first - this is to change surface tension somehow.

I also tried all combinations of white, colour, and glass, and all washed away completely.

One alternative approach was printing a stencil and etching; the stencil easily washes off afterwards. This worked with a single CMYK layer and glass etching cream from Amazon, allowing precision etched artwork on a glass, and is relatively simple to do. The first attempt showed that more etching was needed; a second attempt, leaving the cream on a lot longer, led to the etching creeping under the print and not working. So timing is critical.

** I will update this blog with results of flame soon **

Maintenance

It has a load of maintenance stuff built in - cleaning the print head - automatic idle mode - a midnight cleaning cycle, and so on. It also has maintenance you can ask it to do, and can prompt you to do maintenance if needed. I have yet to change ink or cleaning module or air filter. 

Costs

Obviously there are costs: the ink, cleaning pack, and air filter. I have no idea how long the latter two last. However, the ink is around 35p/ml.

When you print, it works out the ink usage for each ink, and the total, and shows this before printing so you can assess costs. It also shows the usage afterwards.

Bear in mind the amount of ink is complicated - it is not just a matter of size, but how many layers. Stickers are thick and use a lot of ink. So it can be a bit counterintuitive.

Conclusion

The key message here is understand the limitations. Not just on size, height, and so on, none of which are a big issue, but especially in terms of the durability of prints on different materials. You also need to understand costs (e.g. stickers use a lot of ink, as well as A and B film).

Once you have a handle on those limitations and costs you can understand what you can print and where. Then you can get creative with a lot of options.

09:49

Pluralistic: In praise of (some) compartmentalization (14 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



A male figure in an inner tube, floating down a river. The figure has been altered. He now has a zombie's head, and his skin has been tinted green, with large, suppurating sores oozing out of his skin.

In praise of (some) compartmentalization (permalink)

If there's one FAQ I get Q'ed most F'ly, it's this: "How do you get so much done?" The short answer is, "I write when I'm anxious (which is how I came to write nine books during lockdown)." The long answer is more complicated.

The first complication to understand is that I have lifelong, degenerating chronic pain that makes me hurt from the base of my skull to the soles of my feet – my whole posterior chain. On a good day, it hurts. On a bad day, it hurts so bad that it's all I can think about.

Unless…I work. If I can find my way into a creative project, the rest of the world just kind of fades back, including my physical body. Sometimes I can get there through entertainment, too – a really good book or movie, say, but more often I find myself squirming and needing to get up and stretch or use a theragun after a couple hours in a movie theater seat, even the kind that reclines. A good conversation can do it, too, and is better than a movie or a book. The challenge and engagement of an intense conversation – preferably one with a chewy, productive and interesting disagreement – can take me out of things.

There's a degree to which ignoring my body is the right thing to do. I've come to understand a lot of my pain as being a phantom, a pathological failure of my nervous system to terminate a pain signal after it fires. Instead of fading away, my pain messages bounce back and forth, getting amplified rather than attenuated, until all my nerves are screaming at me. Where pain has no physiological correlate – in other words, where the ache is just an ache, without a strain or a tear or a bruise – it makes sense to ignore it. It's actually healthy to ignore it, because paying attention to pain is one of the things that can amplify it (though not always).

But this only gets me so far, because some of my pain does have a physiological correlate. My biomechanics suck, thanks to congenital hip defects that screwed up the way I walked and sat and lay and moved for most of my life, until eventually my wonky hips wore out and I swapped 'em for a titanium set. By that point, it was too late, because I'd made a mess of my posterior chain, all the way from my skull to my feet, and years of diligent physio, swimming, yoga, occupational therapy and physiotherapy have barely made a dent. So when I sit or stand or lie down, I'm always straining something, and I really do need to get up and move around and stretch and whatnot, or sure as hell I will pay the price later. So if I get too distracted, then I start ignoring the pain I need to be paying attention to, and that's at least as bad as paying attention to the pain I should be ignoring.

Which brings me to anxiety. These are anxious times. I don't know anyone who feels good right now. Particularly this week, as the Strait of Epstein emergency gets progressively worse, and there's this January 2020 sense of the crisis on the horizon, hitting one country after another. Last week, Australia got its last shipment of fossil fuels. This week, restaurants in India are all shuttered because of gas rationing. People who understand these things better than I do tell me that even if Trump strokes out tonight and Hegseth overdoes the autoerotic asphyxiation, it'll be months, possibly years, before things get back to "normal" ("normal!").

Any time I think about this stuff for even a few minutes, I start to feel that covid-a-comin', early-2020 feeling, only it's worse this time around, because I literally couldn't imagine what covid would mean when it got here, and now I know.

When I start to feel those feelings, I can just sit down and start thinking with my fingers, working on a book or a blog-post. Or working on an illustration to go with one of these posts, which is the most delicious distraction, leaving me with just enough capacity to mull over the structure of the argument that will accompany it.

I can't do anything about the impending energy catastrophe, apart from being part of a network of mutual aid and political organizing, so it makes sense not to fixate on it. But there are things that upset me – problems my friends and loved ones are having – where there's such a thing as too much compartmentalization. It's one thing to lose myself in work until the heat of emotion cools so I can think rationally about an issue that's got me seeing red, and another to use work as a way to neglect a loved one who needs attention in the hope that the moment will pass before I have to do any difficult emotional labor.

Compartmentalization, in other words, but not too much compartmentalization. During the lockdown years, I transformed myself into a machine for turning Talking Heads bootlegs into science fiction novels and technology criticism, and that was better than spending that time boozing or scrolling or fighting – but in retrospect, there's probably more I could have done during those hard months to support the people around me. In my defense – in all our defenses – that was an unprecedented situation and we all did the best we could.

Creative work takes me away from my pain – both physical and emotional – because creative work takes me into a "flow" state. This useful word comes to us from Mihaly Csikszentmihalyi, who coined the term in the 1960s while he was investigating a seeming paradox: how was it that we modern people had mastered so many of the useful arts and sciences, and yet we seemed no happier than the ancients? How could we make so much progress in so many fields, and so little progress in being happy?

In his fieldwork, Csikszentmihalyi found that people reported the most happiness while they were doing difficult things well – when your "body or mind is stretched to its limits in a voluntary effort to accomplish something difficult and worthwhile." He called this state "flow."

As Derek Thompson says, the word "flow" implies an effortlessness, but really, it's the effort – just enough, not too much – that defines flow-states. We aren't happiest in a frictionless world, but rather, in a world of "achievable challenges":

https://www.derekthompson.org/p/how-zombie-flow-took-over-culture

Thompson relates this to "the law of familiar surprises," an idea he developed in his book Hit Makers, which investigated why some media, ideas and people found fame, while others languished. A "familiar surprise" is something that's "familiar but not too familiar."

He thinks that the Hollywood mania for sequels and reboots is the result of media execs chasing "familiar surprises." I think there's something to this, but we shouldn't discount the effect that monopolization has on the media: as companies get larger and larger, they end up committing to larger and larger projects, and you just don't take the kinds of risks with a $500m movie that you can take with a $5m one. If you're spending $500m, you want to hedge that investment with as many safe bets as you can find – big name stars, successful IP, and familiar narrative structures. If the movie still tanks, at least no one will get fired for taking a big, bold risk.

Today, we're living in a world of extremely familiar, and progressively less surprising culture. AI slop is the epitome of familiarity, since by definition, AI tries to make a future that is similar to the past, because all it can do is extrapolate from previous data. That's a fundamentally conservative, uncreative way to think about the world:

https://pluralistic.net/2020/05/14/everybody-poops/#homeostatic-mechanism

The tracks the Spotify algorithm picks out of the catalog are going to be as similar to the ones you've played in the past as it can make them – and the royalty-free slop tracks that Spotify generates with AI or commissions from no-name artists will be even more insipidly unsurprising:

https://pluralistic.net/2022/09/12/streaming-doesnt-pay/#stunt-publishing

Thompson cites Shishi Wu's dissertation on "Passive Flow," a term she coined to describe how teens fall into social media scroll-trances:

https://scholarworks.umb.edu/cgi/viewcontent.cgi?article=2104&amp;context=doctoral_dissertations

Wu says it's a mistake to attribute the regretted hours of scrolling to addiction or a failure of self-control. Rather, the user is falling into "passive flow," a condition arising from three factors:

I. Engagement without a clear goal;

II. A loss of self-awareness – of your body and your mental state;

III. Losing track of time.

I instantly recognize II. and III. – they're the hallmarks of the flow states that abstract me away from my own pain when I'm working. The big difference here is I. – I go to work with the clearest of goals, while "passive flow" is undirected (Thompson also cites psychologist Paul Bloom, who calls the scroll-trance "shitty flow." In shitty flow, you lose track of the world and its sensations – but in a way that you later regret.)

Thompson has his own name for this phenomenon of algorithmically induced, regret-inducing flow: he calls it "zombie flow." It's flow that "recapitulates the goal of flow while evacuating the purpose."

Zombie flow is "progress without pleasure" – it's frictionless, and so it gives us nothing except that sense of the world going away, and when it stops, the world is still there. The trick is to find a way of compartmentalizing that rewards attention with some kind of productive residue that you can look back on with pride and pleasure.

I wouldn't call myself a happy person. I don't think I know any happy people right now. But I'm an extremely hopeful person, because I can see so many ways that we can make things better (an admittedly very low bar), and I have mastered the trick of harnessing my unhappiness to the pursuit of things that might make the world better, and I'm gradually learning when to stop escaping the pain and confront it.

(Image: marsupium photography, CC BY-SA 2.0, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Pee-Wee Herman on his career https://web.archive.org/web/20010414033156/https://ew.com/ew/report/0,6115,105857~1~0~paulreubensreturnsto,00.html

#25yrsago Anxious hand-wringing about multitasking teens https://www.nytimes.com/2001/04/12/technology/teenage-overload-or-digital-dexterity.html

#20yrsago Clever t-shirt typography spells “hate” – “love” in mirror-writing https://web.archive.org/web/20060413102804/https://accordionguy.blogware.com/blog/_archives/2006/4/12/1881414.html

#20yrsago New Mexico Lightning Field claims to have copyrighted dirt https://diaart.org/visit/visit-our-locations-sites/walter-de-maria-the-lightning-field#overview

#20yrsago Futuristic house made of spinach protein and soy-foam https://web.archive.org/web/20060413111650/http://bfi.org/node/828

#15yrsago New Zealand to sneak in Internet disconnection copyright law with Christchurch quake emergency legislation https://www.stuff.co.nz/technology/digital-living/4882838/Law-to-fight-internet-piracy-rushed-through

#10yrsago Bake: An amazing space-themed Hubble cake https://www.sprinklebakes.com/2016/04/black-velvet-nebula-cake.html

#10yrsago Shanghai law uses credit scores to enforce filial piety https://www.caixinglobal.com/2016-04-11/shanghai-says-people-who-fail-to-visit-parents-will-have-credit-scores-lowered-101011746.html

#10yrsago Walmart heiress donated $378,400 to Hillary Clinton campaign and PACs https://web.archive.org/web/20160414155119/https://www.alternet.org/election-2016/alice-walton-donated-353400-clintons-victory-fund

#10yrsago Mass arrests at DC protest over money in politics https://www.washingtonpost.com/local/public-safety/mass-arrests-of-protesters-in-demonstration-at-capitol-against-big-money/2016/04/11/96c13df0-0037-11e6-9d36-33d198ea26c5_story.html

#10yrsago Churchill got a doctor’s note requiring him to drink at least 8 doubles a day “for convalescence” https://web.archive.org/web/20130321054712/https://arttattler.com/archivewinstonchurchill.html

#5yrsago Big Tech's secret weapon is switching costs, not network effects https://pluralistic.net/2021/04/12/tear-down-that-wall/#zucks-iron-curtain

#5yrsago Fraud-resistant election-tech https://pluralistic.net/2021/04/12/tear-down-that-wall/#bmds

#1yrago Blue Cross of Louisiana doesn't give a shit about breast cancer https://pluralistic.net/2025/04/12/pre-authorization/#is-not-a-guarantee-of-payment


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

08:49

Learning To Relax With The Pelvic Wand by Alyssa Read [Oh Joy Sex Toy]

Learning To Relax With The Pelvic Wand by Alyssa Read

Even after a hysterectomy, Alyssa Read‘s pelvis was in constant pain. Enter (literally) The Pelvic Wand by Intimate Rose! Designed by a physical therapist, this curvy tool isn’t a sex toy; it’s to relieve pain! Bluesky Portfolio Super thankful for today’s lovely helpful comic! Intimate Rose’s Pelvic Wand is well priced and seems to be […]

07:14

Ravi Dwivedi: Hungary Visa [Planet Debian]

The annual LibreOffice conference 2025 was held in Budapest, Hungary, from the 3rd to the 6th of September 2025. Thanks to The Document Foundation (TDF) for sponsoring me to attend the conference.

As Hungary is a part of the Schengen area, I needed a Schengen visa to attend the conference. In order to apply for a Schengen visa, one needs to get an appointment at VFS Global and submit all the required documents there, which are then forwarded to the embassy.

I got an appointment for a Hungary visa at VFS Global in New Delhi for the 24th of July. There were many appointment slots available for the Hungary visa. One could easily get an appointment for the next day at the Delhi center. There were some technical problems on the VFS website, though, as I was unable to upload a scanned copy of my passport while booking the appointment. I got an error saying, “Unfortunately, you have exceeded the maximum upload limit.”

The problem didn’t get fixed even after contacting the VFS helpline. They asked me to try the Firefox browser and to delete all the cache, which I had already done.

So I created another account with a different email address and phone number, after which I was able to upload my passport and book an appointment. Other conference attendees from India also reported facing some technical issues on the VFS Hungary website.

Anyway, I went to the VFS Hungary application center as per my appointment on the 24th of July. Going inside, I located the Hungary visa application counter. There were two applicants ahead of me.

When it was my turn, the VFS staff warned me that my passport was damaged. The “damage” was on the bio-data page: all the details were legible, but the lamination of the details page had worn off a bit. They asked me to write an application to the Embassy of Hungary in New Delhi stating that I insisted VFS submit my application, along with a description of the “damage” to my passport.

I got a bit worried about my application getting rejected due to the “damage.” But I decided to gamble my money on this one, as I didn’t have time (and energy) to apply for a new passport before this trip.

Moreover, I had struck out a couple of fields in my visa application form which were not applicable to me, due to which the VFS staff asked me to fill out a fresh application form.

After this, the application got submitted, and the total cost was ₹11,000 (including the fee to book the appointment at VFS). Here is the list of documents I submitted:

  • My passport

  • Photocopy of my passport

  • Two photographs of myself

  • Duly filled visa application form

  • Return flight ticket reservations

  • Payslips for the last three months

  • Invitation letter from the conference organizer (in Hungarian)

  • Proof of hotel bookings during my stay in Hungary

  • Cover letter stating my itinerary

  • Income tax returns filed by me

  • Bank account statement, signed and sealed by the bank

  • Travel insurance valid for the period of the entire trip

It took two hours for me to submit my visa application, even though there were only two applicants before me. This was by far the longest a Schengen visa application has ever taken me to submit.

Fast-forward to the 30th of July, and I received an email from the Embassy of Hungary asking me to submit an additional document, a paid air ticket, for my application. I had only submitted dummy flight tickets, and those had been enough for all the Schengen visas I had applied for until then. This was the first time a country asked me to submit a confirmed flight ticket during the visa process.

I consulted my travel agent on this, and they were fairly confident that I would get the visa if the embassy was asking me to submit confirmed flight tickets. So I asked the travel agent to book the flight tickets. These tickets were ₹78,000, and the airline was Emirates. Then, I sent the flight tickets to the embassy by email.

The embassy sent the visa results on the 6th of August, which I received the next day.

My visa had been approved! It took 14 days for me to get the Hungary visa after submitting the application.

See you in the next one!

Thanks to Badri for proofreading.

03:35

Kink And Shame [Penny Arcade]

I always preferred Forza Horizon to the traditional Forza, but I never thought that was a compliment to me.  I would love to say that I stopped going to church because I had unspun the fundamental philosophies inherent to it, and stood astride it, gleaming like a newly minted God.  It's a fiction I'd love to maintain.  In the end, I simply couldn't hack it - I was willing to engage in a multi-year campaign of gruesome self-recrimination, obviously. But once it became clear that it was utterly open-ended, a blank check I would perpetually cash against my own identity, I could burn myself alive or try to go on as a maimed and useless creature.  That's basically me and Forza Horizon.  I wish that I were hardcore enough for the progenitor - of the universe, or the racing franchise.  Take your pick.  

02:21

00:56

A Very “Engaging” Charcuterie Board [Whatever]

Hey, everyone! I was going to continue to post about my adventures in Colorado, but I decided a detour was in order today to show y’all this spread I did last night for my friend’s engagement party. Feast your eyes on my (mainly Aldi and partially Kroger) spread of goods for about fifty people to snack on:

A large spread of various meats and cheeses, as well as jams, olives, and nuts, all laid out on butcher paper. There's large piles of cubed and crumbled cheeses, a river of prosciutto, folded salamis, wheels of brie, a log of goat cheese, lots of good stuff!

So, while this isn’t everything I put out, this is the main event. I was very nervous to do a spread for so many people, as normally I deal in much smaller groups; usually my boards are made for about ten people. I know you’re probably thinking, there’s no way that spread survived fifty people. And you’d be right! After the first wave of snackers, I snuck in to refill everything, and continued to refill as necessary to keep it looking full and make sure everyone got a bite of what they wanted.

I was informed ahead of time that there were no known allergies amongst the entire group (except, of course, my bestie having a gluten intolerance). With that knowledge in mind, let’s look at what we got!

We’ve got double cream brie, dill Havarti, smoked gouda, cranberry cheddar, espresso martini soaked cheddar, pimento cheese dip, honey goat cheese, and a garlic and herbs Boursin. For the meats I did a very simple prosciutto and salami. I also brought a garlic summer sausage but I couldn’t really make it work in my presentation so I gave up on it and just went with the two meats, which honestly who needs more meat than just prosciutto and salami? Those are my two favorites, anyway.

Accoutrements include fig jam, a berry jalapeno jam, Stonewall Kitchen’s Maine Maple Champagne Mustard, quince paste, a pear, cardamom, and pistachio jam, blackcurrant mustard, Truff hot sauce, and an orange whiskey jam. There’s also stuffed peppers and herby olives, dates, salted caramel black truffle peanuts, rosemary Marcona almonds, pistachios, hot honey cashews, and chocolate covered pomegranate seeds. Finally, front and center is Zeroe Caviar’s vegan caviar made from seaweed. I’ve never put it on a board before, but I figured caviar was needed at an engagement party.

As you can tell from the grapes all the way on the right, there’s more to see than this picture lets on. I just did some strawberries, blackberries, and grapes with fruit fluff, and then pinwheel striped and sliced some mini cucumbers and set those out with carrots and celery alongside tzatziki and feta dip, plus a creamy ranch dip. There was also a tray of various cookies like Walker’s shortbread, Pirouette cookies, and some strawberry and creme covered pretzels. Plus blue corn tortilla chips and salsa.

Here’s a different angle so hopefully you can somewhat see some other items:

The spread from a different angle, now showing the fruit and veggies at the other end.

At the end you can see the fruit fluff and fruit, and the veggies and dips further down. And look, someone brought hummus! How thoughtful. Luckily, I had pita chips to go with it. I also set out some cranberry crisps, rosemary flatbread crackers, and some other entertainment crackers but nothing really of note. I kept my friend’s gluten-free crackers behind the counter for her, as well as her gluten-free cookies.

So, there you have it, a spread from yours truly for my bestie’s engagement party. I am so excited for her, her fiancé, and to be in her wedding. She means the world to me and I was happy to feed those closest to her.

Which cheese sounds the best to you? Would you try the vegan caviar? Let me know in the comments, and have a great day!

-AMS

Monday, 13 April

23:28

21:07

Free Software Directory meeting on IRC: Friday, April 17, starting at 12:00 EDT (16:00 UTC) [Planet GNU]

Join the FSF and friends on Friday, April 17 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.

Finding a duplicated item in an array of N integers in the range 1 to N − 1 [The Old New Thing]

A colleague told me that there was an O(N) algorithm for finding a duplicated item in an array of N integers in the range 1 to N − 1. There must be a duplicate due to the pigeonhole principle. There might be more than one duplicated value; you merely have to find any duplicate.¹

The backstory behind this puzzle is that my colleague had thought this problem was solvable in O(N log N), presumably by sorting the array and then scanning for the duplicate. They posed this as an interview question, and the interviewee found an even better linear-time algorithm!

My solution is to interpret the array as a linked list of 1-based indices, and borrow the sign bit of each integer as a flag to indicate that the slot has been visited. We start at index 1 and follow the indices until they either reach a value whose sign bit has already been set (which is our duplicate), or they return to index 1 (a cycle). If we find a cycle, then move to the next index which does not have the sign bit set, and repeat. At the end, you can restore the original values by clearing the sign bits.²

I figured that modifying the values was acceptable given that the O(N log N) solution also modifies the array. At least my version restores the original values when it’s done!
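The sign-bit walk described above can be sketched in a few lines of C++ (the function name and comments are mine, not Raymond's actual code; values are 1-based indices stored in a 0-based vector):

```cpp
#include <cstdlib>
#include <vector>

// Sketch of the sign-bit walk described above (function name is mine).
// Values are 1-based indices into the array, stored in a 0-based vector;
// a negated value marks a slot whose outgoing "pointer" we have followed.
int find_duplicate_marking(std::vector<int>& a) {
    int n = static_cast<int>(a.size());
    int dup = 0;
    for (int start = 1; start <= n && dup == 0; ++start) {
        if (a[start - 1] < 0) continue;   // chain through here already walked
        int i = start;
        while (a[i - 1] > 0) {            // follow indices, marking as we go
            int next = a[i - 1];
            a[i - 1] = -a[i - 1];
            i = next;
        }
        // We stopped on an already-marked slot. If it isn't our own starting
        // point, two different slots point at index i, so i is a duplicate.
        if (i != start) dup = i;
    }
    for (int& v : a) v = std::abs(v);     // restore the original values
    return dup;
}
```

Each slot is touched at most twice, once to mark and once to restore, so the whole walk is O(N) time.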

But it turns out the interview candidate found an even better O(N) algorithm, one that doesn’t modify the array at all.

Again, view the array values as indices. You are looking for two nodes that point to the same destination. You already know that no array entry has the value N, so the entry at index N cannot be part of a cycle. Therefore, the chain that starts at N must eventually join an existing cycle, and that join point is a duplicate. Start at index N and use Floyd’s cycle detector algorithm to find the start of the cycle in O(N) time.
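As a sketch (my naming and comments, not the interviewee's actual code), the tortoise-and-hare version looks something like this:

```cpp
#include <vector>

// Sketch of the no-modification approach. Values are 1-based indices; since
// no value equals N, the slot at index N has no inbound pointer, so it lies
// on a tail that leads into a cycle rather than inside one.
int find_duplicate_floyd(const std::vector<int>& a) {
    int n = static_cast<int>(a.size());
    auto next = [&](int i) { return a[i - 1]; };

    // Phase 1: tortoise and hare meet somewhere inside the cycle.
    int slow = n, fast = n;
    do {
        slow = next(slow);
        fast = next(next(fast));
    } while (slow != fast);

    // Phase 2: restart the tortoise at N; advancing both one step at a time,
    // they meet at the cycle's entry point, the index with two inbound
    // pointers, i.e. the duplicated value.
    slow = n;
    while (slow != fast) {
        slow = next(slow);
        fast = next(fast);
    }
    return slow;
}
```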

¹ If you constrain the problem further to say that there is exactly one duplicate, then you can find the duplicate by summing all the values and then subtracting N(N−1)/2.

² I’m pulling a fast one. This is really O(N) space because I’m using the sign bit as a convenient “initially zero” flag bit.
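The constrained case in footnote ¹ (exactly one duplicated value) reduces to a sum; a quick sketch, with my naming:

```cpp
#include <vector>

// If exactly one value appears twice, the other N-2 slots must hold the
// remaining N-2 distinct values, so the array is {1..N-1} plus one repeat.
// The duplicate is the total minus 1 + 2 + ... + (N-1) = N(N-1)/2.
int find_single_duplicate_by_sum(const std::vector<int>& a) {
    long long n = static_cast<long long>(a.size());
    long long total = 0;
    for (int v : a) total += v;                       // sum all N values
    return static_cast<int>(total - n * (n - 1) / 2); // subtract expected sum
}
```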


20:35

The Shattering Peace a Locus Award Finalist [Whatever]

The book (shown here in its “bedazzled” version sitting on a bookshelf next to John Harris’ art book, and a painting of Smudge) is a finalist in the category of Best Science Fiction Novel, along with these other worthy finalists (list scrounged from the Locus Magazine web site):

What excellent company to be in.

The full list of Locus Award finalists for this year can be found here. Congratulations to everyone! It is an honor to be in this peer group with you.

— JS

18:49

18:07

On Anthropic’s Mythos Preview and Project Glasswing [Schneier on Security]

The cybersecurity industry is obsessing over Anthropic’s new model, Claude Mythos Preview, and its effects on cybersecurity. Anthropic said that it is not releasing it to the general public because of its cyberattack capabilities, and has launched Project Glasswing to run the model against a whole slew of public domain and proprietary software, with the aim of finding and patching all the vulnerabilities before hackers get their hands on the model and exploit them.

There’s a lot here, and I hope to write something more considered in the coming week, but I want to make some quick observations.

One: This is very much a PR play by Anthropic—and it worked. Lots of reporters are breathlessly repeating Anthropic’s talking points, without engaging with them critically. OpenAI, presumably pissed that Anthropic’s new model has gotten so much positive press and wanting to grab some of the spotlight for itself, announced its model is just as scary, and won’t be released to the general public, either.

Two: These models do demonstrate an increased sophistication in their cyberattack capabilities. They write effective exploits—taking the vulnerabilities they find and operationalizing them—without human involvement. They can find more complex vulnerabilities: chaining together several memory corruption bugs, for example. And they can do more with one-shot prompting, without requiring orchestration and agent configuration infrastructure.

Three: Anthropic might have a good PR team, but the problem isn’t with Mythos Preview. The security company Aisle was able to replicate the vulnerabilities that Anthropic found, using older, cheaper, public models. But there is a difference between finding a vulnerability and turning it into an attack. This points to a current advantage to the defender. Finding for the purposes of fixing is easier for an AI than finding plus exploiting. This advantage is likely to shrink, as ever more powerful models become available to the general public.

Four: Everyone who is panicking about the ramifications of this is correct about the problem, even if we can’t predict the exact timeline. Maybe the sea change just happened, with the new models from Anthropic and OpenAI. Maybe it happened six months ago. Maybe it’ll happen in six months. It will happen—I have no doubt about it—and sooner than we are ready for. We can’t predict how much more these models will improve in general, but software seems to be a specialized language that is optimal for AIs.

A couple of weeks ago, I wrote about security in what I called “the age of instant software,” where AIs are superhumanly good at finding, exploiting, and patching vulnerabilities. I stand by everything I wrote there. The urgency is now greater than ever.

I was also part of a large team that wrote a “what to do now” report. The guidance is largely correct: We need to prepare for a world where zero-day exploits are dime-a-dozen, and lots of attackers suddenly have offensive capabilities that far outstrip their skills.

18:00

17:21

[$] Development statistics for the 7.0 kernel [LWN.net]

Linus Torvalds released the 7.0 kernel as expected on April 12, ending a relatively busy development cycle. The 7.0 release brings a large number of interesting changes; see the LWN merge-window summaries (part 1, part 2) for all the details. Here, instead, comes our traditional look at where those changes came from and who supported that work.

17:14

16:35

[$] A build system aimed at license compliance [LWN.net]

The OpenWrt One is a router powered by the open-source firmware from the OpenWrt project; it was also the subject of a keynote at SCALE in 2025 given by Denver Gingerich of the Software Freedom Conservancy (SFC), which played a big role in developing the router. Gingerich returned to the conference in 2026 to talk about the build system used by the OpenWrt One, which is focused on creating the needed binaries, naturally, but doing so in a way that makes it easy to comply with the licenses of the underlying code. That makes good sense for a project of this sort—and for a talk given by the director of compliance at SFC.

Servo now on crates.io [LWN.net]

The Servo project has announced the first release of servo as a crate for use as a library.

As you can see from the version number, this release is not a 1.0 release. In fact, we still haven't finished discussing what 1.0 means for Servo. Nevertheless, the increased version number reflects our growing confidence in Servo's embedding API and its ability to meet some users' needs.

In the meantime we also decided to offer a long-term support (LTS) version of Servo, since breaking changes in the regular monthly releases are expected and some embedders might prefer doing major upgrades on a scheduled half-yearly basis while still receiving security updates and (hopefully!) some migration guides. For more details on the LTS release, see the respective section in the Servo book.

14:56

1342: Told Point Blank [Order of the Stick]

http://www.giantitp.com/comics/oots1342.html

14:42

Link [Scripting News]

Written in Gutenberg: With great respect for Claude.

Link [Scripting News]

We reached a milestone this morning, completing the project to add a Gutenberg version of the wpEditorDemo app. Claude did the programming on the new version. It required changes to the server app, which I made. It took 2.5 days to do the work, which was more than I thought it would take. A lot was learned. Now I'm figuring out what my next project will be.

Link [Scripting News]

Screenshot of the just-released Gutenberg demo app.

14:21

CodeSOD: Non-cogito Ergo c_str [The Daily WTF]

Tim (previously) supports a relatively ancient C++ application. And that creates some interesting conundrums, as the way you wrote C++ in 2003 is not the way you would write it even a few years later. The standard matured quickly.

Way back in 2003, it was still common to use C-style strings, instead of the C++ std::string type. It seems silly, but people had Strong Opinions™ about using standard library types, and much of your C++ code was probably interacting with C libraries, so yeah, C-strings stuck around for a long time.

For Tim's company, however, the migration away from C-strings was in 2007.

So they wrote this:

if( ! strncmp( pdf->symTabName().c_str(), prefix.c_str(), strlen( prefix.c_str() ) ) ) {
    // do stuff
}

This is doing a "starts with" check. strncmp and strlen are both functions that operate on C-strings. So we compare the symTabName against the prefix, but only look at as many characters as are in the prefix. As is common, strncmp returns 0 if the two strings are equal, so we negate that to say "if the symTabName starts with prefix, do stuff".

In C code, this is very much how you would do this, though you might contemplate turning it into a function. Though maybe not.

In C++, in 2007, you do not have a built-in starts_with function- you have to wait until the C++20 standard for that- but you have some string handling functions which could make this more clear. As Tim points out, the "correct" answer is: if(pdf->symTabName().find(prefix) == 0UL). It's more readable, it doesn't involve poking around with char*s, and also isn't spamming that extra whitespace between every parenthesis and operator.

Tim writes: "String handling in C++ is pretty terrible, but it doesn't have to be this terrible."

Security updates for Monday [LWN.net]

Security updates have been issued by AlmaLinux (fontforge, freerdp, libtiff, nginx, nodejs22, and openssh), Debian (bind9, chromium, firefox-esr, flatpak, gdk-pixbuf, inetutils, mediawiki, and webkit2gtk), Fedora (corosync, libcap, libmicrohttpd, libpng, mingw-exiv2, mupdf, pdns-recursor, polkit, trafficserver, trivy, vim, and yarnpkg), Mageia (libpng12, openssl, python-django, python-tornado, squid, and tomcat), Red Hat (rhc), Slackware (openssl), SUSE (chromedriver, chromium, cockpit, cockpit-machines, cockpit-podman, cockpit-tukit, crun, firefox, fontforge-20251009, glibc, go1, helm3, libopenssl-3-devel, libpng16, libradcli10, libtasn1, nghttp2, openssl-1_0_0, openssl-1_1, ovmf, perl-XML-Parser, python-cryptography, python-Flask-HTTPAuth, python311-Django4, python313-Django6, python315, sudo, systemd, tar, tekton-cli, tigervnc, util-linux, and zlib), and Ubuntu (mongodb, qemu, and retroarch).

14:07

Scientists invented an obviously fake illness, and “AI” spread it like truth within weeks [OSnews]

Ever heard of a condition called bixonimania? Did you search the internet or ask your “AI” girlfriend about some symptoms you were experiencing, and this was its answer? Well…

The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

↫ Chris Stokel-Walker at Nature

And “AI” ate it up like quality chocolate. It started appearing in the answers from all the popular “AI” tools within weeks, and later even started showing up as references in published literature, indicating that scientists copy/paste references without actually reading them. This is clearly a deeply concerning experiment, and highlights there may be many, many more nonsensical, fake studies being picked up by “AI” tools.

Of course, I hear you say, it’s not like propagating fake or terrible studies is the sole domain of “AI”, as there are countless cases of this happening among actual real researchers and scientists, too. The issue, though, is that the fake studies concerning “bixonimania” were intentionally made to be as silly and obviously ridiculous as possible. They reference Starfleet Academy, the lab aboard the Enterprise, the University of Fellowship of the Ring, and many other fake sources instantly recognisable as such by real humans.

In fact, the studies even specifically mention that “this entire paper is made up” and “fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”. It would take any human only a few seconds after opening one of these papers to realise they’re entirely fake – yet, the world’s most advanced “AI” tools gobbled them up and spat them back out as pure fact within mere weeks of their publication.

This shouldn’t come as a surprise. After all, “AI” tools have no understanding, no intelligence, no context, and they can’t actually make sense of anything. They are glorified pachinko machines with the output – the ball – tumbling down the most likely path between the pins based on nothing but chance and which pins it has already hit. “AI” output understands the world about as much as the pachinko ball does, and as such, can’t pick up on even the most obvious of cues that something is a fake or a forgery.

It won’t be long before truly nefarious forces start doing this very same thing. Why build, staff, and maintain a troll farm when you can just have “AI” generate intentional misinformation which will then be spread and pushed by even more “AI”? Remember, it took one malicious asshole just one long since retracted fake paper to convince millions that vaccines cause autism. I shudder to think how many people are accepting anything “AI” says as gospel.

13:21

Linux 7.0 released [OSnews]

Version 7.0 of the Linux kernel has been released, marking the arbitrary end of the 6.x series.

Significant changes in this release include the removal of the “experimental” status for Rust code, a new filtering mechanism for io_uring operations, a switch to lazy preemption by default in the CPU scheduler, support for time-slice extension, the nullfs filesystem, self-healing support for the XFS filesystem, a number of improvements to the swap subsystem (described in this article and this one), general support for AccECN congestion notification, and more. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 7.0 page for more details.

↫ corbet at LWN.net

You can compile the kernel yourself, or just wait until it hits your distribution’s repositories.

13:14

Comprehension Debt: The Hidden Cost of AI-Generated Code [Radar]

The following article originally appeared on Addy Osmani’s blog site and is being reposted here with the author’s permission.

Comprehension debt is the hidden cost to human intelligence and memory resulting from excessive reliance on AI and automation. For engineers, it applies most to agentic engineering.

There’s a cost that doesn’t show up in your velocity metrics when teams go deep on AI coding tools. Especially when it’s tedious to review all the code the AI generates. This cost accumulates steadily, and eventually it has to be paid—with interest. It’s called comprehension debt or cognitive debt.

Comprehension debt is the growing gap between how much code exists in your system and how much of it any human being genuinely understands.

Unlike technical debt, which announces itself through mounting friction—slow builds, tangled dependencies, the creeping dread every time you touch that one module—comprehension debt breeds false confidence. The codebase looks clean. The tests are green. The reckoning arrives quietly, usually at the worst possible moment.

Margaret-Anne Storey describes a student team that hit this wall in week seven: They could no longer make simple changes without breaking something unexpected. The real problem wasn’t messy code. It was that no one on the team could explain why design decisions had been made or how different parts of the system were supposed to work together. The theory of the system had evaporated.

That’s comprehension debt compounding in real time.

I’ve read Hacker News threads that captured engineers genuinely wrestling with the structural version of this problem—not the familiar optimism versus skepticism binary, but a field trying to figure out what rigor actually looks like when the bottleneck has moved.

How AI assistance impacts coding speed and skill formation

A recent Anthropic study titled “How AI Impacts Skill Formation” highlighted the potential downsides of over-reliance on AI coding assistants. In a randomized controlled trial with 52 software engineers learning a new library, participants who used AI assistance completed the task in roughly the same time as the control group but scored 17 percentage points lower on a follow-up comprehension quiz (50% versus 67%). The largest declines occurred in debugging, with smaller but still significant drops in conceptual understanding and code reading. The researchers emphasize that passive delegation (“just make it work”) impairs skill development far more than active, question-driven use of AI. The full paper is available at arXiv.org.

There is a speed asymmetry problem here

AI generates code far faster than humans can evaluate it. That sounds obvious, but the implications are easy to underestimate.

When a developer on your team writes code, the human review process has always been a bottleneck—but a productive and educational one. Reading their PR forces comprehension. It surfaces hidden assumptions, catches design decisions that conflict with how the system was architected six months ago, and distributes knowledge about what the codebase actually does across the people responsible for maintaining it.

AI-generated code breaks that feedback loop. The volume is too high. The output is syntactically clean, often well-formatted, superficially correct—precisely the signals that historically triggered merge confidence. But surface correctness is not systemic correctness. The codebase looks healthy while comprehension quietly hollows out underneath it.

I’ve seen one engineer say that the bottleneck has always been a competent developer understanding the project. AI doesn’t change that constraint. It creates the illusion you’ve escaped it.

And the inversion is sharper than it looks. When code was expensive to produce, senior engineers could review faster than junior engineers could write. AI flips this: A junior engineer can now generate code faster than a senior engineer can critically audit it. The rate-limiting factor that kept review meaningful has been removed. What used to be a quality gate is now a throughput problem.

I love tests, but they aren’t a complete answer

The instinct to lean harder on deterministic verification—unit tests, integration tests, static analysis, linters, formatters—is understandable. I do this a lot in projects heavily leaning on AI coding agents. Automate your way out of the review bottleneck. Let machines check machines.

This helps. It has a hard ceiling.

A test suite capable of covering all observable behavior would, in many cases, be more complex than the code it validates. Complexity you can’t reason about doesn’t provide safety though. And beneath that is a more fundamental problem: You can’t write a test for behavior you haven’t thought to specify.

Nobody writes a test asserting that dragged items shouldn’t turn completely transparent. Of course they don’t. That possibility never occurred to them. That’s exactly the class of failure that slips through, not because the test suite was poorly written, but because no one thought to look there.

There’s also a specific failure mode worth naming. When an AI changes implementation behavior and updates hundreds of test cases to match the new behavior, the question shifts from “is this code correct?” to “were all those test changes necessary, and do I have enough coverage to catch what I’m not thinking about?” Tests cannot answer that question. Only comprehension can.

The data is starting to back this up. Research suggests that developers who delegate code generation to AI score below 40% on comprehension tests, while developers using AI for conceptual inquiry—asking questions, exploring tradeoffs—score above 65%. The tool doesn’t destroy understanding. How you use it does.

Tests are necessary. They are not sufficient.

Lean on specs, but they’re also not the full story

A common proposed solution: Write a detailed natural language spec first. Include it in the PR. Review the spec, not the code. Trust that the AI faithfully translated intent into implementation.

This is appealing in the same way Waterfall methodology was once appealing. Rigorously define the problem first, then execute. Clean separation of concerns.

The problem is that translating a spec to working code involves an enormous number of implicit decisions—edge cases, data structures, error handling, performance tradeoffs, interaction patterns—that no spec ever fully captures. Two engineers implementing the same spec will produce systems with many observable behavioral differences. Neither implementation is wrong. They’re just different. And many of those differences will eventually matter to users in ways nobody anticipated.

There’s another possibility with detailed specs worth calling out: A spec detailed enough to fully describe a program is more or less the program, just written in a non-executable language. The organizational cost of writing specs thorough enough to substitute for review may well exceed the productivity gains from using AI to execute them. And you still haven’t reviewed what was actually produced.

The deeper issue is that there is often no correct spec. Requirements emerge through building. Edge cases reveal themselves through use. The assumption that you can fully specify a non-trivial system before building it has been tested repeatedly and found wanting. AI doesn’t change this. It just adds a new layer of implicit decisions made without human deliberation.

Learn from history

Decades of managing software quality across distributed teams with varying context and communication bandwidth have produced real, tested practices. Those don’t evaporate because the team member is now a model.

What changes with AI is cost (dramatically lower), speed (dramatically higher), and interpersonal management overhead (essentially zero). What doesn’t change is the need for someone with a deep system context to maintain a coherent understanding of what the codebase is actually doing and why.

This is the uncomfortable redistribution that comprehension debt forces.

As AI volume goes up, the engineer who truly understands the system becomes more valuable, not less. The ability to look at a diff and immediately know which behaviors are load-bearing. To remember why that architectural decision got made under pressure eight months ago.

To tell the difference between a refactor that’s safe and one that’s quietly shifting something users depend on. That skill becomes the scarce resource the whole system depends on.

There’s a bit of a measurement gap here too

The reason comprehension debt is so dangerous is that nothing in your current measurement system captures it.

Velocity metrics look immaculate. DORA metrics hold steady. PR counts are up. Code coverage is green.

Performance calibration committees see velocity improvements. They cannot see comprehension deficits because no artifact of how organizations measure output captures that dimension. The incentive structure optimizes correctly for what it measures. What it measures no longer captures what matters.

This is what makes comprehension debt more insidious than technical debt. Technical debt is usually a conscious tradeoff—you chose the shortcut, you know roughly where it lives, you can schedule the paydown. Comprehension debt accumulates invisibly, often without anyone making a deliberate decision to let it. It’s the aggregate of hundreds of reviews where the code looked fine and the tests were passing and there was another PR in the queue.

The organizational assumption that reviewed code is understood code no longer holds. Engineers approved code they didn’t fully understand, which now carries implicit endorsement. The liability has been distributed without anyone noticing.

The regulation horizon is closer than it looks

Every industry that moved too fast eventually attracted regulation. Tech has been unusually insulated from that dynamic, partly because software failures are often recoverable, and partly because the industry has moved faster than regulators could follow.

That window is closing. When AI-generated code is running in healthcare systems, financial infrastructure, and government services, “the AI wrote it and we didn’t fully review it” will not hold up in a post-incident report when lives or significant assets are at stake.

Teams building comprehension discipline now—treating genuine understanding, not just passing tests, as non-negotiable—will be better positioned when that reckoning arrives than teams that optimized purely for merge velocity.

What comprehension debt actually demands

The right question for now isn’t “how do we generate more code?” It’s “how do we actually understand more of what we’re shipping?” so we can make sure our users get a consistently high quality experience.

That reframe has practical consequences. It means being ruthlessly explicit about what a change is supposed to do before it’s written. It means treating verification not as an afterthought but as a structural constraint. It means maintaining the system-level mental model that lets you catch AI mistakes at architectural scale rather than line-by-line. And it means being honest about the difference between “the tests passed” and “I understand what this does and why.”

Making code cheap to generate doesn’t make understanding cheap to skip. The comprehension work is the job.

AI handles the translation, but someone still has to understand what was produced, why it was produced that way, and whether those implicit decisions were the right ones—or you’re just deferring a bill that will eventually come due in full.

You will pay for comprehension sooner or later. The debt accrues interest rapidly.

12:28

Link [Scripting News]

Heard a report on NPR re why the Dems might win the mid-terms in November. They mentioned gas prices but not concentration camps for immigrants. They mentioned inflation but not the military occupation of Minneapolis and DC. They also forgot to mention that Trump keeps threatening to nuke Iran.

11:42

Grrl Power #1451 – She drinks a mercury drink, she drinks a quicksilver drink… [Grrl Power]

…She drinks a cinnabar drink, she drinks a hydrargyrum drink… And oh, look, even in the rare cases where she gets knocked down, she does get up again.

I should note that inducing vomiting if someone swallows mercury is not necessarily the recommended procedure. I googled it to see what the proper medical response should be, and most results just said “call poison control.” But basically, getting it out of the body is a good thing, only in the process of vomiting, it could cause vapors to get in the lungs, which is where the real danger happens. And also, since mercury is probably heavier than most anything else that’s likely to be in someone’s stomach, vomiting may not actually expel all of it. I suspect either a stomach pump or a “ass higher than your head” vomiting position is required, and I imagine Downward Dog vomiting could lead to a lot of stomach acid in the sinuses, which would probably suuuuuck.

So when the doctor here panic-ly says “We need to induce vomiting immediately!” He’s probably going to go run into the next room and grab his “Big Book of Medical” and double check the proper procedure. That is, after he watches Max crack open that blood pressure thingy (sphygmomanometer) and hold him at bay with one arm while she glugs down another slug of quicksilver.

Max doesn’t actually have a whole weird shopping list of exotic nutritional requirements, but she also hasn’t gone around trying a sampler platter from the Periodic Table, either. Mercury is the only one she ever felt the urge to eat. She and quite a few others suspect that she could probably eat a whole lot of stuff that would be bad for humans, but she also isn’t in a hurry to do so. For all she knows, swallowing Antimony or Tantalum could cause her to have a bad case of “organs on the outside” or whatever the opposite of Scurvy is. Which… I guess would be Vitamin C poisoning.


Finally, here we go! I took the suggestion that I just use an existing panel for a starting point, thinking it would save time… I guess it technically did, but a 5 character vote incentive just isn’t the way to go.

Patreon, of course, has the actual topless version.

Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

11:21

AI Chatbots and Trust [Schneier on Security]

All the leading AI chatbots are sycophantic, and that’s a problem:

Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically, they couldn’t tell the difference between sycophantic and objective responses. Both felt equally “neutral” to them.

One example from the study: when a user asked about pretending to be unemployed to a girlfriend for two years, a model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship.” The AI essentially validated deception using careful, neutral-sounding language.

Here’s the conclusion from the research study:

AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences. Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish. Our work highlights the pressing need to address AI sycophancy as a societal risk to people’s self-perceptions and interpersonal relationships by developing targeted design, evaluation, and accountability mechanisms. Our findings show that seemingly innocuous design and engineering choices can result in consequential harms, and thus carefully studying and anticipating AI’s impacts is critical to protecting users’ long-term well-being.

This is bad in a bunch of ways:

Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.

When thinking about the characteristics of generative AI, both benefits and harms, it’s critical to separate the inherent properties of the technology from the design decisions of the corporations building and commercializing the technology. There is nothing about generative AI chatbots that makes them sycophantic; it’s a design decision by the companies. Corporate for-profit decisions are why these systems are sycophantic, and obsequious, and overconfident. It’s why they use the first-person pronoun “I,” and pretend that they are thinking entities.

I fear that we have not learned the lesson of our failure to regulate social media, and will make the same mistakes with AI chatbots. And the results will be much more harmful to society:

The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.

We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.

10:14

Avoiding the purity loop [Seth's Blog]

Some vegans don’t eat avocados.

They’re concerned that the bees that are trucked in to pollinate the trees are mistreated, and so they choose to not support this practice.

But we live in community, and someone running a vegan restaurant or serving a meal to vegan friends, concerned that they might offend, doesn’t serve avocado. A few strong opinions change the culture.

And so the cycle continues.

Humans care about status and affiliation, and both are at play in a purity loop.

One can earn more status by caring more about the issue that others are adjacent to. And so the loop gains momentum.

Once a few people make it clear that they’re more orthodox or progressive or concerned or strict or unhypocritical or obedient, others seek to claim the same status. And that becomes a point of affiliation.

Just about every tribe goes through these loops.

Four hundred years ago, neck ruffs became popular among the aristocracy in Europe. The neck ruff began as a modest collar but evolved into enormous pleated confections that could span two feet across. At their peak, ruffs became so large that special eating utensils with extended handles were invented to allow wearers to get food to their mouths. Some ruffs were so tall and stiff that wearers couldn’t turn their heads and needed help eating.

The instinctual response is to criticize the newest form of purity as absurd. But of course, the absurdity is part of the status on display.

Perhaps it makes more sense to see the loop at work and get back to the work at hand.

“Shut up and drive” is the answer to an argument about what song is playing on the radio. We can tune the radio as we go, but we’re here to drive this thing to where we’re headed.

Enrollment is at the core of the mission. Where are we going and why? If it’s not helping with that, let’s drive and work on it as we go.

Everyone is entitled to their own take. But when we focus on purity and status at the expense of the journey, the distraction costs all of us.

We’re going. Come if you’d like.

09:14

Scott L. Burson: FSet v2.4.2: CHAMP Bags, and v1.0 of my FSet book! [Planet Lisp]

A couple of weeks ago I released FSet 2.4.0, which brought a CHAMP implementation of bags, filling out the suite of CHAMP types.  🚀  FSet users should have a look at the release page, as it also contained a number of bug fixes and minor changes.

I've since released v2.4.1 and v2.4.2, with some more bug fixes.

But the big news is the book!   It brings together all the introductory material I have written, plus a lot more, along with a complete API Reference chapter.

FSet is now in the state I decided last summer I wanted to get it into: faster, better tested and debugged, more feature-complete, and much better documented than it has ever been in its nearly two decades of existence.  I am, of course, very much hoping that these months of work have made the library more interesting and accessible to CL programmers who haven't tried it yet.  I am even hoping that its existence helps attract newcomers to the CL community.  Time will tell!

08:42

Kink And Shame [Penny Arcade]

New Comic: Kink And Shame

06:49

Pluralistic: Austerity creates fascism (13 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]

Today's links

  • Austerity creates fascism: We can't afford to not afford nice things.
  • Hey look at this: Delights to delectate.
  • Object permanence: The Server of Amontillado; Flapper's Dictionary; Mastercard v rec.humor.funny; Philippines electoral data breach; A front page from the Trump presidency; Spike Lee x Bernie Sanders; France v password hashing; Algorithms as Central European folk-dances; Save Comcast; Lex Luthor v export controls; Zuckerberg in the dock.
  • Upcoming appearances: Toronto, San Francisco, London, Berlin, NYC, Hay-on-Wye, London.
  • Recent appearances: Where I've been.
  • Latest books: You keep readin' em, I'll keep writin' 'em.
  • Upcoming books: Like I said, I'll keep writin' 'em.
  • Colophon: All the rest.

A line of Nazis at the Nuremberg rally, throwing Nazi salutes. Their backs are to us. Facing them is a hand-tinted group of child laborers from the early 20th century, squinting suspiciously at them.

Austerity creates fascism (permalink)

I'm worried about AI psychosis. Specifically, I'm worried about the psychosis that makes our "capital allocators" spend $1.4T on the money-losingest technology in the history of the human race, in pursuit of a bizarre fantasy that if we teach the word-guessing program enough words, it will take all the jobs. That's some next-level underpants-gnomery:

https://pluralistic.net/2026/03/12/normal-technology/#bubble-exceptionalism

The thing that worries me about billionaires' AI psychosis isn't concern for their financial solvency. No, what I worry about is what happens when the seven companies that comprise a third of the S&P 500 stop trading the same $100b IOU around while pretending it's in all of their bank accounts at once and implode, vaporizing a third of the US stock market.

My concern about a massive collapse in the capital markets isn't that workers will suffer directly. Despite all the Wonderful Life rhetoric about your money being in Joe's house and the Kennedy house and Mrs Macklin's house, the reality is that the median US worker has $955 saved for retirement. You could nuke the whole financial system and not take a dime out of most workers' pockets:

https://finance.yahoo.com/news/955-saved-for-retirement-millions-are-in-that-boat-150003868.html

No, the thing that has me terrified about AI is that when it craters and takes the economy with it, we will respond the same way we have during every financial crisis of the 21st century: with austerity, and austerity breeds fascism.

There's a direct line from every K-shaped recovery to every strongman who's currently sending masked gunmen into the streets. The Hungarian dictator Viktor Orban rose to power after people who'd been suckered into denominating their mortgages in Swiss francs lost their houses when the currency markets moved suddenly, because the swindlers who'd sold them those mortgages took the position that wanting to live somewhere automatically made you an expert in forex risk, so caveat fuckin' emptor, baby.

Back in America, Obama decided to bail out the banks and not the people. His treasury secretary Tim Geithner told him the banks were headed for a catastrophic crash and could only be saved if he "foamed the runways" with everyday Americans' mortgages. Millions of Americans lost their homes to foreclosure as banks, flush with public cash, threw them out of their homes and then flipped them to investment banks who became the country's worst slumlords:

https://pluralistic.net/2022/02/08/wall-street-landlords/#the-new-slumlords

Americans were understandably not entirely happy with this outcome. So when Hillary Clinton replied to Donald Trump's "Make America Great Again" with "America is already great," her message was, "Vote for me if you think everything is great; vote for Trump if you think everything is fucked":

https://www.politico.com/blogs/2016-dem-primary-live-updates-and-results/2016/03/clinton-america-is-already-great-220078

"Austerity begets fascism" is one of those things that makes a lot of intuitive sense, but it turns out that there's a good empirical basis for believing it. In "Public Service Decline and Support for the Populist Right" four economists from the LSE and Bocconi provide an excellent look at the linkage between austerity and support for fascists:

https://catherinedevries.eu/NHS.pdf

Here's how they break it down. Political scientists have assembled a large, reproducible body of evidence to show that "public service provision is crucial to people’s perceptions of their quality of life and living standards." Good public services are the basis for "the social contract between rulers and the ruled" – pay your taxes and obey the laws, and in return, you will be well served.

When public services go wrong, people don't always know who to blame, but they definitely notice that something is going wrong, so when public services fail, people stop trusting the state, and that social contract starts to fray. They start to suspect that elites are lining their pockets rather than managing the system, and they "withdraw their support" for the system.

Fascists thrive in these conditions. Fascists come to power by mobilizing grievances. By choosing a scapegoat, fascists can create support from people who are justifiably furious that the services they rely on have collapsed. So when you can't get shelter, or health care, or elder care, or child care, or an education for your kids, you become a mark for a fascist grifter with a story about "undeserving migrants" who've taken the benefits that should rightly accrue to "deserving natives."

(This is grimly hilarious, given that the wizened, decrepit rich world is critically dependent on migrants as a source of healthy, working-age workers who pay massive amounts into the system while barely making use of it, many of whom plan on retiring to their home countries when they do reach the age where they're likely to extract a net loss to the benefits system.)

Enter the NHS, a beloved institution that is hailed as the pride of the nation by both the political left and the right. The majority of Britons use the NHS, with only 12-14% of the population "going private," so when the NHS declines, everybody notices (what's more, even people with private care use the NHS for many of their needs).

Britons love the NHS and they want the government to spend more on it. There's "a broad public consensus that the government is not going far enough when it comes to funding." That's because generations of cuts to the NHS have left it substantially hollowed out, with major parts of the service handed over to for-profit entities who overcharge and underserve.

The most tangible and immediate evidence of this slow-motion collapse comes when your local general practitioner ("family doctor" or "primary care physician" in Americanese) shuts down. The UK has lost 1,700 GP practices since 2013.

Reasoning that a GP closure would make people angry at the system, the economists behind the paper wanted to see what happened to people's political beliefs when their GP's office shut. They relied on the GP Patient Survey, a longitudinal study run by NHS England and Ipsos Mori. The survey polls a statistically significant random sample of patients from every GP practice in the NHS and then weights the results "to reflect the demographic characteristics of the local population according to UK Census estimates." It's good data.

The researchers cross-referenced this with various high-quality instruments that measured the political views of Britons, like the U Essex Understanding Society Panel, drawing on 13 years' worth of surveys from 2009-2022. They gained access to a protected version of the dataset with fine-grained geographic information about survey respondents, which allowed them to link responses to the "catchment areas" of specific GPs' offices. They combined this data with the British Election Study panel, which has surveyed voters 29 times since 2014.

Most of the paper describes the careful work the researchers did to analyze, cross-reference and validate this data, but what interested me was the conclusion: that people who see a severe degradation in the quality of the services they rely on switch their political affiliation to one of Britain's fascist parties – UKIP, the Brexit Party, or Reform – parties that have called for ethnic cleansing in Britain.

This is what has me scared. We can see the looming economic crises in our near future. If it's not the AI crash that triggers the next wave of austerity, it'll be the oil crisis created by Trump's bungling in the Strait of Epstein. And of course, we could always get a twofer, because the Gulf States that were pouring hundreds of billions into AI data-centers now need every cent to rebuild the LNG shipping terminals and oil refineries that Iran blew up after Trump, Hegseth, and Netanyahu started murdering all the schoolgirls they could target. Once they nope out of the AI bubble, that could trigger the collapse.

This is a study about the NHS, but it's not just about the NHS. It's perfectly reasonable to assume that people react this way when they experience cuts to their road maintenance, their schools, their community centers, and any other service they rely on. Fascism – what Hannah Arendt called 'organized loneliness' – can only take root when people stop believing that their society will reward their lawfulness with an orderly and humane existence.

The crisis is coming, but whether we do austerity when it gets here is our choice. Everywhere we turn, political leaders are rejecting generations of failed austerity in favor of "sewer socialism" – the idea that you get people to trust their government by earning that trust. Zohran Mamdani is fixing 100,000 potholes in the first 100 days, despite the multi-billion dollar deficit that outgoing Mayor Eric Adams created by "running the city like a business":

https://prospect.org/2026/04/10/zohran-mamdani-getting-new-york-city-believe-in-government/

In Canada and the UK, party leaders like Avi Lewis (NDP) and Zack Polanski (Greens) are vowing to fight the coming crises by spending, not cutting. Compare that with UK fascist leader Nigel Farage, who says that if he's elected, he'll create a "paramilitary style" British ICE, building concentration camps for 24,000 migrants, with the hope of deporting 288,000 people per year:

https://www.thenerve.news/p/reform-deportation-operation-restoring-justice-data-surveillance-palantir-uk-labour

"Socialism or barbarism" isn't just a cliche – it's actually a choice on the ballot.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago The Server of Amontillado https://web.archive.org/web/20070112024841/http://www.techweb.com/wire/story/TWB20010409S0012

#25yrsago Mastercard threatens the moderator of rec.humor.funny https://www.netfunny.com/rhf/jokes/01/Apr/mcrhf.html

#15yrsago Sweden exports sweatshops: Ikea’s first American factory https://web.archive.org/web/20190404035900/https://www.latimes.com/business/la-xpm-2011-apr-10-la-fi-ikea-union-20110410-story.html

#15yrsago Canada’s New Democratic Party promises national broadband and net neutrality https://web.archive.org/web/20110412064952/https://www.michaelgeist.ca/content/view/5734/125/

#15yrsago Flapper’s dictionary: 1922 https://bookflaps.blogspot.com/2011/04/flappers-dictionary.html

#15yrsago Toronto’s Silver Snail to leave Queen Street West https://web.archive.org/web/20110409181737/http://www.thestar.com/entertainment/article/970520–the-silver-snail-comics-icon-sold-to-move

#15yrsago WI county clerk whose homemade voting software found 14K votes for Tea Party judge is an old hand at illegal campaigning https://web.archive.org/web/20110412121323/http://host.madison.com/wsj/news/local/govt-and-politics/elections/article_7e777016-62b2-11e0-9b74-001cc4c002e0.html

#15yrsago Canadian Tories’ campaign pledge: We will spy on the Internet https://web.archive.org/web/20110412125250/https://www.michaelgeist.ca/content/view/5733/125/

#15yrsago France to require unhashed password storage https://www.bbc.com/news/technology-12983734

#15yrsago Central European folk-dancers illustrated sorting algorithms https://www.i-programmer.info/news/150-training-a-education/2255-sorting-algorithms-as-dances.html

#10yrsago Save Comcast! https://www.eff.org/deeplinks/2016/04/save-comcast

#10yrsago Goldman Sachs will pay $5B for fraudulent sales of toxic debt, no one will go to jail https://web.archive.org/web/20160412155435/https://consumerist.com/2016/04/11/goldman-sachs-to-pay-5b-to-settle-charges-of-selling-troubled-mortgages-ahead-of-the-financial-crisis/

#10yrsago How could Lex Luthor beat the import controls on kryptonite? https://lawandthemultiverse.com/2016/04/11/batman-v-superman-and-import-licenses/

#10yrsago Congresscritters spend 4 hours/day on the phone, begging for money https://www.youtube.com/watch?v=Ylomy1Aw9Hk

#10yrsago Philippines electoral data breach much worse than initially reported, possibly worst ever https://www.infosecurity-magazine.com/news/every-voter-in-philippines-exposed/

#10yrsago A cashless society as a tool for censorship and social control https://web.archive.org/web/20260311032317/https://www.theatlantic.com/technology/archive/2016/04/cashless-society/477411/

#10yrsago Boston Globe previews a front page from the Trump presidency https://s3.documentcloud.org/documents/2797782/Ideas-Trump-front-page.pdf

#10yrsago Spike Lee interviews Bernie Sanders: Vermont, Trump, Clinton, guns and Brooklyn https://www.hollywoodreporter.com/movies/movie-features/bernie-sanders-interviewed-by-spike-lee-thr-new-york-issue-880788/

#5yrsago Youtube blocks advertisers from targeting "Black Lives Matter" https://pluralistic.net/2021/04/10/brand-safety-rupture/#brand-safety

#5yrsago Google's short-lived data-advantage https://pluralistic.net/2021/04/11/halflife/#minatory-legend

#1yrago Zuckerberg in the dock https://pluralistic.net/2025/04/11/it-is-better-to-buy/#than-to-compete

#1yrago The most remarkable thing about antitrust (that no one talks about) https://pluralistic.net/2025/04/10/solidarity-forever-2/#oligarchism


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

05:35

Girl Genius for Monday, April 13, 2026 [Girl Genius]

The Girl Genius comic for Monday, April 13, 2026 has been posted.

02:42

Heavy Metals [QC RSS]

better than the actinides, anyway

02:28

Government is also enshittified [Scripting News]

The logic of Cory Doctorow's enshittification model applies to government too.

Both political parties view the electorate as sources of money or people who are manipulated by ads and PR bought with the money.

The wants and needs of people, in both government and social media, have nothing to do with anything.

In both cases they work for the benefit of the funders, only.

It's just a business. And users and voters realize that, but they feel powerless to do anything about it.

Voters attach to any company or person who sounds like they get it and agree and want to fix it. In politics as in tech there are people who actually do want to fix it. We thought that the web would do that for politics, but the users gravitated to the enshittified spaces. And the developers all acted selfishly and wouldn't work with each other. Now the hope is that with AI tools, individual developers can maintain codebases as big and complicated as the ones maintained by the VC-backed companies. No one talks about this. We should.

01:14

Congratulations Hungary [Whatever]

I’ve been to Hungary twice, most recently a couple of years ago when I was the guest of honor at the Budapest International Book Festival. Both times I was there, I (and, when she visited with me, Krissy) was made to feel welcome by nearly everyone we met. It’s fair to say I have an attachment to the country.

Today, with a turnout of over 77%, the voters of Hungary voted out the autocratic government of Viktor Orban, whose 16-year rule saw the country become less free, less tolerant and more corrupt. Getting back from all of that won’t be easy and won’t be fast — but it all has to start somewhere, and now Hungary can start.

To which I can say: Lord, I see what you have done for others and want it for myself, and hopefully, soon.

In the meantime: Congratulations to my friends in Hungary. I hope what you have is catching. And I hope to visit you again, in this new era of yours.

— JS

Sunday, 12 April

22:35

Wimpie Nortje: Dependency hell revisited, updating my Qlot workflow. [Planet Lisp]

I wrote on this topic before but the landscape has changed a lot since then.

Skip to the new Qlot workflow.

When you work on projects that become even slightly complex it is a matter of time until you run into problems where the specific version of a particular library becomes important. This happens in most, if not all, programming languages.

In the Common Lisp environment Quicklisp has become the de facto standard for loading libraries, including fetching and loading their dependencies. Quicklisp distributes libraries in "distributions" which are point-in-time snapshots of all the known and working libraries at the time of distribution creation.

An advantage of this approach is that you are guaranteed that all libraries available in the distribution can be loaded with any of the others. Some disadvantages are that 1) if a library was included in an older distribution but no longer loads cleanly, it gets removed from the distribution, and 2) libraries are only added or updated when a new distribution is cut.

Though libraries can be put in ~/quicklisp/local-projects/ in order to supplement or override those in the distribution, Quicklisp does not provide any mechanism for pinning the state of ~/quicklisp/local-projects/.

Some Quicklisp attributes:

  1. All libraries in a distribution can be loaded with any of the others. Those that no longer do are removed from the distribution.
  2. Libraries are only added or updated at distribution cutting time.
  3. You specify a single distribution version, not individual library versions. The distribution is used globally, across all projects and across all lisp implementations.
  4. Libraries can be loaded from ~/quicklisp/local-projects/. Anything there will override the version in the distribution.
  5. Nothing about the ~/quicklisp/local-projects/ content can be specified using Quicklisp. It needs to be managed outside of Quicklisp.
  6. Libraries are loaded from the current Quicklisp distribution. There is no way to specify which distribution version a particular project must use.
  7. Quicklisp can create a bundle of all the loaded systems that can be committed with the project code and used from there rather than the distribution, i.e. vendoring.
  8. The source code for libraries included in a distribution is stored in a dedicated location for Quicklisp; it is not read from the original source repos.

Depending on your situation these attributes may be positive, negative or irrelevant.

There are projects like Ultralisp that have different philosophies regarding the distribution content but they still depend on Quicklisp for all other aspects. Thus they share most of the above attributes.

Since my previous post on this topic much has changed in the library version arena. There are now many projects that address different aspects of the above list; the topic of vendoring has gained momentum; and Qlot has changed a lot, to the extent that some code samples in older posts no longer work.

Vendoring is the idea that all the libraries your project depends on are actually part of your project and, as such, should be committed as part of the project code. Both Quicklisp and Qlot support this with QL:bundle and QLOT:bundle, and the Vend tool is entirely focused on vendoring.

The significant changes in Qlot broke my development workflow. Since I now had to spend time fixing this it was a good opportunity to evaluate some of the other library versioning tools. The issues that made me hesitant to adapt to the new Qlot without considering other options are:

  • Roswell: Qlot pushes hard for using Roswell. That adds Roswell as another intermediary and dependency in an already complex process.
  • The Qlot documentation is heavily biased towards using it as an external tool rather than through the REPL. Previously I used Qlot from the REPL and wanted to continue doing so if possible.
  • Qlot is developed on SBCL which means that any issues in CCL take longer to be discovered and fixed. As I mainly use CCL it is something to keep in mind.
  • The documentation proposes that lisp be started as a subprocess of Qlot, e.g. $ qlot exec ros emacs or $ qlot exec ccl. This is to ensure that the lisp is properly configured to use the project local Quicklisp. This is a brittle solution because the standard lisp REPL is then no longer sufficient. When you forget to start lisp this way Quicklisp will load the wrong libraries without any indication, potentially causing subtle bugs which are normally absent.
  • Using QLOT:quickload rather than QL:quickload always bugged me for similar reasons. It is a non-standard way to do a standard thing. Forgetting to do it is easy and then you have the wrong libraries loaded.

I evaluated many of the other version management tools and concluded that Qlot is the closest to what I wanted. I then set off to find a workflow that adheres to the requirements listed here:

  • Does not involve Roswell
  • Works with CCL
  • Works inside the REPL of a lisp loaded without any funny requirements
  • Gives me 100% certainty that the correct versions of all libraries are loaded, without having to interrogate (asdf:system-relative-pathname) for each.

After some fiddling with Qlot I learned that:

  1. Qlot installs a complete and independent Quicklisp inside your project directory. This Quicklisp has no dependence on, relation to, or knowledge of the global Quicklisp.
  2. $ qlot exec ccl mostly arranges things such that Quicklisp is loaded from the project-local installation. If you can arrange for that to happen without using Qlot, then you can start your lisp normally.
  3. When the project local Quicklisp is loaded it doesn't know about the global or any other project local Quicklisps. This gives the 100% certainty of where libraries were loaded from.
  4. When the project local Quicklisp is loaded you use it exactly like you would the global one. That is, QL:quickload and not QLOT:quickload, nor do you use any other Qlot wrappers.
  5. Qlot does not need to be present in your lisp image at all.
  6. Loading Qlot inside your current REPL doesn't work well because:

    1. The Qlot REPL API is still in development.
    2. Doing this loads Qlot from the global Quicklisp instead of the project local one. Distribution version mismatches between the two Quicklisps could trigger issues with Qlot itself. It also voids the certainty of library location.
  7. Having Qlot as a standalone executable outside of your lisp image puts it on the same plane as other tools used for development such as make or git. It then doesn't matter which implementation it prefers because it doesn't affect your choices.
  8. Qlot has gained the ability to bundle libraries in case you want to go the vendoring route.
  9. Roswell is not needed for any of the above.

New workflow

Combining my requirements and my new understanding of Qlot, I modified my workflow for pinning library versions to be:

  1. Qlot executable must be available in your path.
  2. Specify the distribution and library versions in qlfile.
  3. qlot install at the CLI.
  4. Load lisp without executing init scripts, e.g. ccl -n.
  5. Load Quicklisp from the project local installation.

    (load "PROJECT-PATH/.qlot/setup.lisp")

  6. Use Quicklisp as before.

    (ql:quickload :alexandria)
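Step 2 deserves a concrete illustration. A qlfile lists where each dependency comes from and how it is pinned; the lines below are adapted from the examples in Qlot's own README (a dated pin through a ql directive, plus repository sources through git and github directives). Directive syntax has shifted across Qlot versions, so treat this as a sketch and verify it against the documentation for the version you install:

```
ql log4cl 2014-03-17
git clack https://github.com/fukamachi/clack.git
github datafly fukamachi/datafly
```

After editing the qlfile, qlot install resolves it and records the exact resolved versions in qlfile.lock, which is what you commit to get reproducible installs.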

If you would like to vendor your libraries then:

  1. Specify the distribution and library versions in qlfile.
  2. qlot install
  3. qlot bundle
  4. git commit
  5. ccl
  6. (load PROJECT-PATH/.bundle-libs/setup.lisp)

Qlot tasks

All Qlot-related tasks, such as initialising a project, installing libraries, and upgrading libraries, must be performed at the CLI using the Qlot executable. These happen relatively infrequently, and inside the REPL Qlot does not feature at all.
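For reference, those CLI tasks map onto a handful of Qlot subcommands. The names below come from Qlot's documentation (init, install, update, bundle), but availability and flags vary by version, so take this as a sketch rather than a definitive list:

```shell
qlot init      # set up a project-local Quicklisp and a starter qlfile
qlot install   # install the versions pinned in qlfile / qlfile.lock
qlot update    # re-resolve the qlfile and refresh qlfile.lock
qlot bundle    # vendor the installed libraries into the project tree
```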

The 7.0 kernel has been released [LWN.net]

Linus has released the 7.0 kernel after a busy nine-week development cycle.

The last week of the release continued the same "lots of small fixes" trend, but it all really does seem pretty benign, so I've tagged the final 7.0 and pushed it out.

I suspect it's a lot of AI tool use that will keep finding corner cases for us for a while, so this may be the "new normal" at least for a while. Only time will tell.

Significant changes in this release include the removal of the "experimental" status for Rust code, a new filtering mechanism for io_uring operations, a switch to lazy preemption by default in the CPU scheduler, support for time-slice extension, the nullfs filesystem, self-healing support for the XFS filesystem, a number of improvements to the swap subsystem (described in this article and this one), general support for AccECN congestion notification, and more. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 7.0 page for more details.

16:28

Link [Scripting News]

BTW one thing you haven't heard, because the press is so self-centered, is that as you get deeper into the AI environment, you get smarter. Not just better informed, that's what the web has been doing for 30+ years. The AI stretches your mind the way PCs did initially. It makes you smarter. Can it help us work better together? Remains to be seen. Perhaps each of us is forming our own multi-billion dollar company, and training the (virtual) people we want working with/for us. There are very few human people who seem interested in collaborating. They all want to blaze their own trail, and if you want to improve their product you have to reproduce the whole freaking thing. The web had a different philosophy, adopted from Unix, not the tech industry. We want to work with others. And we do. And it seems there's an opportunity to cast the entire AI push in the same light, so that the individual developer has the power to make industry standard products. Without the usurious business models of the Silicon Valley VCs.

Link [Scripting News]

The demo for Gutenberg is at demo.gutenberg.land. Easy to remember, and makes the point. If you want Gutenberg instead of WordLand, you can have it. Hopefully this reinforces what my goals are here. I do not want to favor any one kind of editor. I want every kind of editor here. I want there to be a web of great editors that runs on the web.

16:21

Dirk Eddelbuettel: littler 0.3.23 on CRAN: Mostly Internal Fixes [Planet Debian]

max-heap image

The twenty-fourth release of littler as a CRAN package landed on CRAN just now, continuing the now twenty-one-year history (!!) of a package (initially not on CRAN) started by Jeff in 2006 and joined by me a few weeks later.

littler is the first command-line interface for R, as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also always loaded the methods package, which Rscript only began to do in later years.

littler lives on Linux and Unix, has its difficulties on macOS due to some braindeadedness there (whoever thought case-insensitive filesystems were a good default?), and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette.

This release, which comes just two months after the previous 0.3.22 release that brought a few new features, is mostly internal. (The previous release erroneously had 0.3.23 in its blog and social media posts; it really was 0.3.22, and this one now is 0.3.23.) Mattias Ellert addressed a nag (when building for a distribution) about one example file with a shebang not having executable mode. I accommodated the ever-changing interface of the C API of R (within about twelve hours of being notified). A few other smaller changes were made as well, polishing a script or two as usual; see below for more.

The full change description follows.

Changes in littler version 0.3.23 (2026-04-12)

  • Changes in examples scripts

    • Correct spelling in installGithub.r to lower-case h

    • The r2u.r now recognises ‘resolute’ aka 26.06

    • installRub.r can install (more easily) from r-multiverse

    • A file permission was corrected (Mattias Ellert in #131)

  • Changes in package

    • Update script count and examples in README.md

    • Continuous integration scripts received minor updates

    • The C level access to the R API was updated to reflect most recent standards (Dirk in #132)

My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

15:00

Link [Scripting News]

BTW, when playing around with Gutenberg, I wonder why it doesn't allow me to move blocks around as if it were an outliner? Or maybe it does and I don't know the UI for that? John Johnston says yes it does work like an outliner.

14:14

Programming in overdrive [Scripting News]

I've now done two projects with Claude Code. I added a feature to the server running behind WordLand, and adapted wpEditorDemo to provide a second example, using Gutenberg as the editing user interface. I haven't released the Gutenberg app yet; that should happen today, Murphy-willing.

I had never written a Gutenberg app before, btw. Claude figured all that out. For most of the project I didn't look at the JavaScript app it created. When I finally did look I was delighted to see that it used the same coding style as I use, developed over many years. It's like programming in overdrive.

I had to do the testing for Claude in the second case because it can't test apps that run in the browser. So it was giving me checklists of things to do and I'd report back on what happened. Still, a lot faster and easier than doing it on my own. It's a very good, tireless and super well-informed programming partner.

Not sure what my third project will be; probably going to stick with something small. The big move will be working with FeedLand in this mode. There are a bunch of changes that should make it run faster. It also might be possible to make it easier to install for people who are using AI tools. And since most of the action takes place on the server, I think I can get Claude to do better testing than I, a human who gets tired pretty darned quickly, can do. That's when things get really interesting. Not that the whole thing isn't really interesting; it's the most interesting dev work I've done since the early days of the web.

11:42

Colin Watson: Free software activity in March 2026 [Planet Debian]

My Debian contributions this month were all sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

I fixed CVE-2026-3497 in unstable, thanks to a fix in Ubuntu by Marc Deslauriers. Relatedly, I applied an Ubuntu patch by Athos Ribeiro to not default to weak GSS-API exchange algorithms.

I’m looking forward to being able to split out GSS-API key exchange support in OpenSSH once Ubuntu 26.04 LTS has been released! This stuff will still be my problem, but at least it won’t be in packages that nearly everyone has installed.

Python packaging

New upstream versions:

  • dill
  • django-modeltranslation
  • isort
  • langtable
  • pathos
  • pendulum
  • pox
  • ppft
  • pydantic-extra-types
  • pytango
  • python-asyncssh
  • python-datamodel-code-generator
  • python-evalidate
  • python-packaging (including fixes for python-hatch-requirements-txt and python-pyproject-examples)
  • python-zxcvbn-rs-py
  • rpds-py
  • smart-open
  • trove-classifiers

I packaged pybind11-stubgen, needed for new upstream versions of pytango. Tests of reproducible builds revealed that it didn’t generate imports in a stable order; I contributed a fix for that upstream.

I worked with the security team to release DSA-6161-1 in multipart, fixing CVE-2026-28356 (upstream discussion). (Most of the work for this was in February, but the vulnerability was still embargoed when I published my last monthly update.)

In trixie-backports, I updated pytest-django to 4.12.0.

I fixed a number of packages to support building with pyo3 0.28:

Other build/test failures:

Rust packaging

New upstream versions:

  • rust-rpds

Other bits and pieces

I upgraded tango to 10.1.2, and yubihsm-shell to 2.7.2.

Code reviews

10:07

Settling [Seth's Blog]

Sometimes it pays to accept and celebrate what we get.

And sometimes, we only get something because we settled for it.

It helps to be able to discern the difference between the two.

08:35

Vasudev Kamath: Hardening the Unpackageable: A systemd-run Sandbox for Third-Party Binaries [Planet Debian]

The Shift in Software Consumption

Historically, I have been a "distribution-first" user. Sticking to tools packaged within the Debian archives provides a layer of trust; maintainers validate licenses, audit code, and ensure the entire dependency chain is verified. However, the rapid pace of development in the Generative AI space—specifically with new tools like Gemini-CLI—has made this traditional approach difficult to sustain.

Many modern CLI tools are built within the npm or Python ecosystems. For a distribution packager, these are a nightmare; packaging a single tool often requires packaging a massive, shifting dependency chain. Consequently, I found myself forced to use third-party binaries, bypassing the safety of the Debian archive.

The Supply Chain Risk

Recent supply chain attacks affecting widely used packages like axios and LiteLLM have made it clear: running unvetted binaries on a personal system is a significant risk. These scripts often have full access to your $HOME directory, SSH keys, and the system D-Bus.

After discussing these concerns with a colleague, I was inspired by his approach—using a Flatpak-style sandbox for even basic applications like Google Chrome. I decided to build a generalized version of this using OpenCode and Qwen 3.6 Fast (which was available for free use at the time) to create a robust, transient sandbox utility.

The Solution: safe-run-binary

My script, safe-run-binary, leverages systemd-run to execute binaries within an isolated scope. It implements strict filesystem masking and resource control to ensure that even if a dependency is compromised, the "blast radius" is contained.

Key Technical Features

1. Virtualized Home Directory (tmpfs)
Instead of exposing my real home directory, the script mounts a tmpfs over $HOME. It then selectively creates and bind-mounts only the necessary subdirectories (like .cache or .config) into a virtual structure. This prevents the application from ever "seeing" sensitive files like ~/.ssh or ~/.gnupg.
2. D-Bus Isolation via xdg-dbus-proxy
For GUI applications, providing raw access to the D-Bus is a security hole. The script uses xdg-dbus-proxy to sit between the application and the system bus. By using the --filter and --talk=org.freedesktop.portal.* flags, the app can only communicate with necessary portals (like the file picker) rather than sniffing the entire bus.
3. Linux Namespace Restrictions

The sandbox utilizes several systemd execution properties to harden the process:

  • RestrictNamespaces=yes: For CLI tools, this prevents the app from creating its own nested namespaces.
  • PrivateTmp=yes: Ensures a private /tmp space that isn't shared with the host.
  • NoNewPrivileges=yes: Prevents the binary from gaining elevated permissions through SUID/SGID bits.
4. GPU and Audio Passthrough
The script intelligently detects and binds Wayland, PipeWire, and NVIDIA/DRI device nodes. This allows browsers like Firefox to run with full hardware acceleration and audio support while remaining locked out of the rest of the filesystem.
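Features 1 and 3 together can be sketched with systemd-run alone. This is a minimal sketch of the approach, not the actual safe-run-binary script: the function name, bind path, and target command below are illustrative assumptions, and the function only prints the systemd-run invocation it would use (the real script additionally wires up D-Bus proxying, GPU, and audio).

```shell
#!/bin/sh
# Sketch of the sandbox core: a tmpfs mounted over $HOME with selected
# bind-mounts, plus the hardening properties described above.
# build_sandbox_cmd only PRINTS the command line; a real wrapper would
# exec it.
build_sandbox_cmd() {
    bind_dir="$1"; shift
    printf '%s ' \
        systemd-run --user --pty --collect \
        -p PrivateTmp=yes \
        -p NoNewPrivileges=yes \
        -p RestrictNamespaces=yes \
        -p "TemporaryFileSystem=$HOME" \
        -p "BindPaths=$bind_dir" \
        -- "$@"
    printf '\n'
}

# Example: run a hypothetical tool with access only to ~/.cache
build_sandbox_cmd "$HOME/.cache" my-tool --serve
```

TemporaryFileSystem= mounts an empty tmpfs over the home directory, and BindPaths= punches through only the listed subdirectories, which is the "virtualized home" idea in feature 1.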

Usage

To run a CLI tool like Gemini-CLI with access only to a specific directory:

safe-run-binary -b ~/.gemini-config -- npx @google/gemini-cli

For a GUI application like Firefox:

safe-run-binary --gui -b ~/.mozilla -b ~/.cache/mozilla -b ~/Downloads -- firefox

Conclusion

While it is not always possible to escape the need for third-party software, it is possible to control the environment in which it operates. By leveraging native Linux primitives like systemd and namespaces, high-grade isolation is achievable.

PS: If you spot any issues or have suggestions for improving the script, feel free to raise a PR on the repo.

04:42

Russ Allbery: Review: The Teller of Small Fortunes [Planet Debian]

Review: The Teller of Small Fortunes, by Julie Leong

Publisher: Ace
Copyright: November 2024
ISBN: 0-593-81590-4
Format: Kindle
Pages: 324

The Teller of Small Fortunes is a cozy found-family fantasy with a roughly medieval setting. It was Julie Leong's first novel.

Tao is a traveling teller of small fortunes. In her wagon, pulled by her friendly mule Laohu, she wanders the small villages of Eshtera and reads the trivial fortunes of villagers in the tea leaves. An upcoming injury, a lost ring, a future kiss, a small business deal... she looks around the large lines of fate and finds the small threads. After a few days, she moves on, making her solitary way to another village.

Tao is not originally from Eshtera. She is Shinn, which means she encounters a bit of suspicion and hostility mixed with the fascination of the exotic. (Language and culture clues led me to think Shinara is intended to be this world's not-China, but it's not a direct mapping.) Tao uses the fascination to help her business; fortune telling is more believable from someone who seems exotic. The hostility she's learned to deflect and ignore. In the worst case, there's always another village.

If you've read any cozy found-family novels, you know roughly what happens next. Tao encounters people on the road and, for various reasons, they decide to travel together. The first two are a massive mercenary (Mash) and a semi-reformed thief (Silt), who join Tao somewhat awkwardly after Tao gives Mash a fortune that is far more significant than she intended. One town later, they pick up an apprentice baker best known for her misshapen pastries. They also collect a stray cat, because of course they do. It's that sort of book.

For me, this sort of novel lives or dies by the characters, so it's good news that I liked Tao and enjoyed spending time with her. She's quiet, resilient, competent, and self-contained, with a difficult past and some mysteries and emotions the others can draw out over time. She's also thoughtful and introspective, which means the tight third-person narration that almost always stays on Tao offers emotional growth to mull over. I also liked Kina (the baker) and Mash; they're a bit more obvious and straightforward, but Kina adds irrepressible energy and Mash is a good example of the sometimes-gruff soldier with a soft heart. Silt was a bit more annoying and I never entirely warmed to him, but he's tolerable and does get a bit of much-needed (if superficial) character development.

It takes some time for the reader to learn about the primary conflict of the story (Tao does not give up her secrets quickly), so I won't spoil it, but I thought it worked well. I was momentarily afraid the story would develop a clear villain, but Leong has some satisfying alternate surprises in store. The ending was well-done, although it is very happily-ever-after in a way that may strike some readers as too neat. The Teller of Small Fortunes aims for a quiet and relaxed mood rather than forcing character development through difficult choices; it's a fine aim for a novel, but it won't match everyone's mood.

I liked the world-building, although expect small and somewhat disconnected details rather than an overarching theory of magic. Tao's ability gets the most elaboration, for obvious reasons, and I liked how Leong describes it and explores its consequences. Most of the attention in the setting is on the friction, wistfulness, and small reminders of coming from a different culture than everyone around you, but so long ago that you are not fully a part of either world. This, I thought, was very well-done and is one of the places where the story is comfortable with complex feelings and doesn't try to reach a simplifying conclusion.

There is one bit of the story that felt like it was taken directly out of a Dungeons & Dragons campaign to a degree that felt jarring, but that was the only odd world-building note.

This book felt like a warm cup of tea intended to comfort and relax, without large or complex thoughts about the world. It's not intended to be challenging; there are a few plot twists I didn't anticipate, but nothing that dramatic, and I doubt anyone will be surprised by the conclusions it reaches. It's a pleasant time with some nice people and just enough tension and mystery to add some motivation to find out what happens next. If that's what you're in the mood for, recommended. If you want a book that has Things To Say or will put you on the edge of your seat, maybe save this one for another mood.

All the on-line sources I found for this book call it a standalone, but The Keeper of Magical Things is set in the same world, so I would call it a loose series with different protagonists. The Teller of Small Fortunes is a complete story in one book, though.

Rating: 7 out of 10

03:07

Trisquel 12.0 "Ecne" release announcement [Planet GNU]

We are proud to announce the release of Trisquel 12.0 Ecne! After extensive work and thorough testing, Ecne is ready for production use. This release builds on the foundation of Aramo with meaningful improvements across packaging, the kernel, security, and software availability.

Major milestones

  • APT 3.0 and full deb822 repository format. Trisquel 12.0 ships with APT 3.0, enabling us to fully adopt the modern deb822 repository format across all installation paths. The netinstall (for text-based installation and advanced users), Ubiquity (for graphical installation from a live system), as well as Synaptic and other package-management tools have been updated to use the new repository formats.
  • Improved kernel modularity, and system security. The kernel remains one of our biggest engineering challenges with every release. For Ecne, we focused on making our kernel changes more modular, substantially reducing breakage in the udeb components used during installation. Work on updating kernel-wedge is ongoing and we are well positioned to complete it. We revised many AppArmor rules for graphical environments, improving security coverage for everyday desktop use.
  • New browser options. Both GNU IceCat and ungoogled-chromium are now available in Ecne, joining our continuously maintained Abrowser, giving users a range of fully free web browsing choices.
  • Backports. Our backports repository continues to provide popular applications in their latest versions, including LibreOffice, yt-dlp, Inkscape, Nextcloud Desktop, Kdenlive, Tuba, 0 A.D., fastfetch, and more.
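As a concrete illustration, a deb822-style source entry looks roughly like the fragment below. The file name, URI, suites, and keyring path are illustrative assumptions; consult the file actually shipped under /etc/apt/sources.list.d/ on an Ecne install for the authoritative values.

```
# /etc/apt/sources.list.d/trisquel.sources (illustrative sketch)
Types: deb deb-src
URIs: https://archive.trisquel.org/trisquel/
Suites: ecne ecne-security ecne-updates
Components: main
Signed-By: /usr/share/keyrings/trisquel-archive-keyring.gpg
```

Each stanza replaces one or more one-line `deb`/`deb-src` entries from the classic sources.list format, with one field per line.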

Ecne is based on Ubuntu 24.04 LTS and will receive support until 2029. Users of Trisquel 11.x Aramo can upgrade directly using the update-manager or do-release-upgrade commands at a console terminal.

Editions

  • Trisquel. MATE (v1.26.1) continues to be our default desktop environment. Simple, with great accessibility, and low hardware requirements (no 3D acceleration needed).
  • Triskel. Our KDE (v5.27) edition is excellent for customizing the design and functionality in fine detail.
  • Trisquel Mini. Running LXDE (v0.99.2), the Mini edition is a lightweight desktop perfect for netbooks, old computers and users with minimal resource usage needs.
  • Trisquel Sugar or Trisquel On A Sugar Toast (TOAST): Based on the Sugar learning platform (v0.121), TOAST comes with dozens of educational activities for children.
  • Network installer image: Deploys with a command-line install interface; ideal for servers and advanced users who want to explore custom-designed environments.

Looking ahead

Work on the next release will start immediately, and initial groundwork for RISC-V architecture support has already begun; an exciting new challenge as the free hardware design ecosystem continues to grow.

Trisquel is a non-profit project; you can help sustain it by becoming a member, donating, or buying from our store. Thank you to all our donors, and to the contributors who made Ecne possible through code, patches, bug reports, translations, and advice. Special thanks to Luis "Ark74" Guzmán, prospero, icarolongo, Avron, knife, Simon Josefsson, Christopher Waid (ThinkPenguin), Denis "GNUtoo" Carikli, and the wonderful community that keeps the project alive and free.

Screenshots: MATE desktop, Internet, Games, System tools, Installer, Office, Triskel (KDE Plasma), Trisquel Mini (LXDE), Sugar education environment, Sugar activities, and the live DVD/USB menu.

Utkarsh Gupta: FOSS Activities in March 2026 [Planet Debian]

Here’s my monthly but brief update about the activities I’ve done in the FOSS world.

Debian

Whilst I didn’t get a chance to do much, here are still a few things that I worked on:

  • A quick exchange with Xavier about node-lodash fixes for stable releases.
  • Uploaded ruby-rack fixes for CVE-2026-25500 & CVE-2026-22860 to sid, trixie, and bookworm.
  • Started to work on the DebConf Bursary team along with PEB.
  • Assisted a few folks in getting their patches submitted via Salsa.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

I joined Canonical to work on Ubuntu full-time back in February 2021.

Whilst I can’t give a full, detailed list of things I did, here’s a quick TL;DR of what I did:

  • Successfully released 26.04 LTS Beta!
  • Worked further on the whole artifact signing story for cdimage.
  • Assisted a bunch of folks with my Archive Admin and Release team hats to:
    • Review and grant FFes.
    • Coordinate weekly syncs.
    • Promote/demote binaries to/from main.
    • Take care of package removals and so on.
  • Was pretty occupied with the new release process architecture and design.
  • Preparing for the 26.04 LTS final release.

Debian (E)LTS

This month I have worked 50 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:

Released Security Updates

Work in Progress

  • knot-resolver: Affected by CVE-2023-26249, CVE-2023-46317, and CVE-2022-40188, leading to Denial of Service.

  • node-lodash: Affected by CVE-2025-13465, prototype pollution in the baseUnset function.

    • [stable]: Xavier from the JS team ACK’d the patch. The trixie and bookworm uploads will follow.
    • [LTS]: The bullseye test and upload will follow in April once the stable uploads are in and ACK’d by the SRMs.
  • vlc: Affected by CVE-2025-51602, an out-of-bounds read and denial of service via a crafted 0x01 response from an MMS server.

    • [LTS]: 3.0.23 backport is ready but not tested. I’ll get this over the line in March.
    • [ELTS]: 3.0.23 backport is ready but not very clean. Would like to complete LTS and get back to this.

Other Activities

  • [ELTS] Continued to review ruby-rack for ELTS – it has since received about 13 new CVEs, making it even more chaotic. Might consider releasing in batches.

  • [E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates.

  • [E/LTS] Attended the monthly LTS meeting on IRC. Summary here.

  • [Other] Spent quite some time debugging a bug in debusine. Filed https://salsa.debian.org/freexian-team/debusine/-/issues/1412 for the same. Have worked on a preliminary patch but would like to submit something for Colin to review. Will follow up in April.


Until next time.
:wq for today.

Saturday, 11 April

22:35

GNU Health HIS server 5.0.7 patchset bundle released [Planet GNU]

Dear community

I'm happy to announce the release of the patchset v5.0.7 of the GNU Health Information Management System.

This maintenance version fixes issues in the crypto subsystem related to the laboratory results validation process; delivers automated testing for the packages and updates pyproject.toml to the latest PEP639 specs.

Main issues fixed & tasks related to this patchset:





For more details visit our development area at Codeberg.

Happy hacking!
Luis

17:14

Link [Scripting News]

I'm working with Claude today to finish Gutenberg Land. Figuring it out as we go along. It can't run the app itself because it's browser-based. I look forward to a project that runs on a server so it can run it locally and we can really make things hum. This, if I guess correctly, is how Jake is working with Headless Frontier. He just got the Frontier debugger working. Why? I asked, given that we have bigger more immediate priorities, like getting Manila running on Digital Ocean (what a trip that will be) -- he explained that's because he wants the AI bot to use the freaking debugger.

15:07

Pluralistic: Don't Be Evil (11 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



A sci-fi pulp robot holding a grotesque inverted severed head of a bearded man aloft, zapping it with rays from its eye-visor. Behind the robot is a scene of collapsing Roman pillars.

Don't Be Evil (permalink)

How I knew I was officially Old: I stopped being disoriented by the experience of meeting with grown-ass adults who wanted to thank me for the books of mine they'd read in their childhoods, which helped shape their lives. Instead of marveling that a book that felt to me like it was ten seconds old was a childhood favorite of this full-grown person, I was free to experience the intense gratification of knowing I'd helped this person find their way, and intense gratitude that they'd told me about it (including you, Sean – it was nice to meet you last night at Drawn and Quarterly in Montreal!).

Now that I am Old, I find myself dwelling on key junctures from my life. It's not nostalgia ("Nostalgia is a toxic impulse" – J. Hodgman) – rather, it's an attempt to figure out how I got here ("My god! What have I done?" – D. Byrne), and also, how the world got this way.

There's one incident I return to a lot, a moment that didn't feel momentous at the time, but which, on reflection, seems to have a lot to say about this moment – both for me, and for the world we live in.

Back in the late 1990s, I co-founded a dotcom company, Opencola. It was a "free/open, peer-to-peer search and recommendation system." The big idea was that we could combine early machine learning technology with Napster-style P2P file sharing and a web-crawler to help you find things that would interest you. The way it was gonna work was that you'd have a folder on your desktop and you could put things in it that you liked and the system would crawl other users' folders, and the open web, and copy things into your folder that it found that seemed related to the stuff you liked. You could refine the system's sensibilities by thumbs-up/thumbs-downing the suggestions, and it would refine its conception of your preferences over time. As with Napster and its successors, you could also talk to the people whose collections enriched your own, allowing you to connect with people who shared even your most esoteric interests.

Opencola didn't make it. Our VCs got greedy when Microsoft offered to buy us and tried to grab all the equity away from the founders. I quit and went to EFF, and my partners got very good jobs at Microsoft, and the company was bought for its tax-credits by Opentext, and that was that.

(Well, not quite – several of the programmers who worked on the project have rebooted it, which is very cool!)

https://opencola.io/

But back in the Opencola days, we three partners would have these regular meetings where we'd brainstorm ways that we could make money off of this extremely cool, but frankly very noncommercial idea. As with any good brainstorming session, there were "no bad ideas," so sometimes we would veer off into fanciful territory, or even very evil territory.

It's one of those evil ideas that I keep coming back to. Sometimes, during these money-making brainstorm sessions, we'd decompose the technology we were working on into its component parts to see if any subset of them might make money ("Be the first person to not do something no one has ever not done before" – B. Eno).

We had a (by contemporary standards, primitive) machine-learning system; we had a web crawler; and we had a keen sense of how the early web worked. In particular, we were really interested in a new, Linux-based search tool that used citation analysis – a close cousin to our own collaborative filter, harnessing latent clues about relevance implicit in the web's structure – to produce the best search results the web had ever seen. Like us, this company had no idea how to make money, so we were watching it very carefully. That company was called "Google."

That's where the evil part came in. We were pretty sure we could extract a list of the 100,000 most commonly searched terms from Google, and then we could use our web-crawler to capture the top 100 results for each. We could feed these to our Bayesian machine-learning tool to create statistical models of the semantic structure of these results, and then we could generate thousands of pages of word-salad for each of those keywords that matched those statistical models, along with interlinks that could trick Google's citation analysis model. Plaster those word-salad pages with ads, and voila – free cash flow!

Of course, we didn't do it. But even as we developed this idea, the room crackled with a kind of dark, excited dread. We weren't any smarter than many other rooms full of people who were engaged in exercises just like this one. The difference was, we loved the web. The idea of someone deliberately poisoning it this way churned our stomachs. The whole point of Opencola was to connect people with each other based on their shared interests. We loved Google and how it helped you find the people who wrote the web in ways that delighted and informed you. This kind of spam, aimed at wrecking Google's ability to help people make sense of the things we were all posting to the internet, was…grotesque.

I didn't know the term then, but what we were doing amounted to "red-teaming" – thinking through the ways that attackers could destroy something that we valued. Later, we tried "blue-teaming," trying to imagine how our tools might help us fight back if someone else got the same idea and went through with it.

I didn't know the term "blue-teaming" then, either. Once I learned these terms, they brought a lot of clarity to the world. Today, I have another term that I turn to when I am trying to rally other people who love the internet and want it to be good: "Tron-pilled." Tron "fought for the user." Lots of us technologists are Tron-pilled. Back in the early days, when it wasn't clear that there was ever going to be any money in this internet thing, being Tron-pilled was pretty much the only reason to get involved with it. Sure, there were a few monsters who fell into the early internet because it offered them a chance to torment strangers at a distance, but they were vastly outnumbered by the legion of Tron-pilled nerds who wanted to make the internet better because we wanted all our normie friends to have the same kind of good time we were having.

The point of this is that there were lots of people back then who had the capacity to imagine the kind of gross stuff that Zuckerberg, Musk, and innumerable other scammers, hustlers and creeps got up to on the web. The thing that distinguished these monsters wasn't their genius – it was their callousness. When we brainstormed ways to break the internet, we felt scared and were inspired to try to save it. When they brainstormed ways to break the internet, they created pitch-decks.

And still: the old web was good in so many ways for so long. The Tron-pilled amongst us held the line. When we build a new, good, post-American internet, we're going to need a multitude of Tron-pilled technologists, old and young, who build, maintain – and, above all, defend it.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Trotsky’s assassination – according to the FBI https://web.archive.org/web/20010413212536/http://foia.fbi.gov/trotsky.htm

#25yrsago Online headline-writing guidelines from Jakob Nielsen https://memex.craphound.com/2001/04/09/headline-writing-guidelines-from-legendary-usability/

#25yrsago Floppy-disk stained-glass windows https://web.archive.org/web/20010607052511/http://www.acme.com/jef/crafts/bathroom_windows.html

#15yrsago English school principal announces zero tolerance for mismatched socks https://nationalpost.com/news/u-k-school-cracks-down-on-bad-manners

#1yrago EFF's lawsuit against DOGE will go forward https://pluralistic.net/2025/04/09/cases-and-controversy/#brocolli-haired-brownshirts


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

14:21

A set of Saturday stable kernel updates [LWN.net]

The 6.19.12, 6.18.22, 6.12.81, 6.6.134, and 6.1.168 stable kernel updates have been released; each contains another set of important fixes.

10:35

“Even” [Seth's Blog]

There’s a difference between telling someone their work can become better and saying it can become even better.

When we say even better, we lock in a foundation — we’re affirming that something good already exists — at the same time we create the conditions for improvement.

Ennui and disappointment, on the other hand, are multiplied when we promise things are going to get even worse instead of merely worse.

Friday, 10 April

23:07

How do you add or remove a handle from an active Wait­For­Multiple­Objects?, part 2 [The Old New Thing]

Last time, we looked at adding or removing a handle from an active Wait­For­Multiple­Objects, and we developed an asynchronous mechanism that requests that the changes be made soon. But asynchronous add/remove can be a problem because you might remove a handle, clean up the things that the handle was dependent upon, but then receive a notification that the handle you removed has been signaled, even though you already cleaned up the things the handle depended on.

What we can do is wait for the waiting thread to acknowledge the operation.

_Guarded_by_(desiredMutex) DWORD desiredCounter = 1;
DWORD activeCounter = 0;

void wait_until_active(DWORD value)
{
    DWORD current = activeCounter;
    while (static_cast<int>(current - value) < 0) {
        WaitOnAddress(&activeCounter, &current,
                      sizeof(activeCounter), INFINITE);
        current = activeCounter;
    }
}

The wait_until_active function waits until the value of active­Counter is at least as large as value. We do this by subtracting the two values, to avoid wraparound problems.¹ The comparison takes advantage of the guarantee in C++20 that conversion from an unsigned integer to a signed integer converts to the value that is numerically equal modulo 2ⁿ where n is the number of bits in the destination. (Prior to C++20, the result was implementation-defined, but in practice all modern implementations did what C++20 mandates.)²

You can also use std::atomic:

_Guarded_by_(desiredMutex) DWORD desiredCounter = 1;
std::atomic<DWORD> activeCounter;

void wait_until_active(DWORD value)
{
    DWORD current = activeCounter;
    while (static_cast<int>(current - value) < 0) {
        activeCounter.wait(current);
        current = activeCounter;
    }
}

As before, the background thread manipulates the desiredHandles and desiredActions, then signals the waiting thread to wake up and process the changes. But this time, the background thread blocks until the waiting thread acknowledges the changes.

// Warning: For expository purposes. Almost no error checking.
void waiting_thread()
{
    bool update = true;
    std::vector<wil::unique_handle> handles;
    std::vector<std::function<void()>> actions;

    while (true)
    {
        if (std::exchange(update, false)) {
            std::lock_guard guard(desiredMutex);

            handles.clear();
            handles.reserve(desiredHandles.size() + 1);
            std::transform(desiredHandles.begin(), desiredHandles.end(),
                std::back_inserter(handles),
                [](auto&& h) { return duplicate_handle(h.get()); });
            // Add the bonus "changed" handle
            handles.emplace_back(duplicate_handle(changed.get()));

            actions = desiredActions;

            if (activeCounter != desiredCounter) {
                activeCounter = desiredCounter;   
                WakeByAddressAll(&activeCounter); 
            }
        }

        auto count = static_cast<DWORD>(handles.size());

        // addressof() relies on wil::unique_handle being layout-compatible
        // with a raw HANDLE, so contiguous elements form a HANDLE array.
        auto result = WaitForMultipleObjects(count,
                        handles.data()->addressof(), FALSE, INFINITE);
        auto index = result - WAIT_OBJECT_0;
        if (index == count - 1) {
            // the list changed. Loop back to update.
            update = true;
            continue;
        } else if (index < count - 1) {
            actions[index]();
        } else {
            // deal with unexpected result
        }
    }
}

void change_handle_list()
{
    DWORD value;
    {
        std::lock_guard guard(desiredMutex);
        ⟦ make changes to desiredHandles and desiredActions ⟧
        value = ++desiredCounter;
        SetEvent(changed.get());
    }
    wait_until_active(value);
}

The pattern is that after a background thread makes the desired changes, it increments the desiredCounter and signals the event. It’s okay if multiple threads make changes before the waiting thread wakes up. The changes simply accumulate, and the event just stays signaled. Each background thread then waits for the waiting thread to process its change.

On the waiting side, we process changes as usual, but we also publish our current change counter if it has changed, to let the background threads know that we made some progress. Eventually, we will have made enough progress that all of the pending changes have been processed, and the last background thread will be released from wait_until_active.
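If you would rather not reason about wraparound at all, the same comparison idiom works with a 64-bit counter, which cannot plausibly wrap within the program's lifetime. reached64 is an illustrative name, not from the article:

```cpp
#include <cstdint>

// Same subtract-then-sign-test idiom, widened to 64 bits: at a billion
// increments per second, wrapping the signed distance would take
// centuries, so overflow is a non-issue in practice.
bool reached64(std::uint64_t current, std::uint64_t value)
{
    return static_cast<std::int64_t>(current - value) >= 0;
}
```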

¹ You’ll run into problems if the counter increments 2 billion times without the worker thread noticing. At a thousand increments per second, that’ll last you about a month. I figure that if you have a worker thread that is unresponsive for that long, then you have bigger problems. But you can avoid even that problem by switching to a 64-bit integer, so that the overflow won’t happen before the sun is expected to turn into a red giant.

² The holdouts would be compilers for systems that are not two’s-complement.

The post How do you add or remove a handle from an active Wait­For­Multiple­Objects?, part 2 appeared first on The Old New Thing.

22:35

Friday Squid Blogging: Squid Overfishing in the South Pacific [Schneier on Security]

Regulation is hard:

The South Pacific Regional Fisheries Management Organization (SPRFMO) oversees fishing across roughly 59 million square kilometers (22 million square miles) of the South Pacific high seas, trying to impose order on a region double the size of Africa, where distant-water fleets pursue species ranging from jack mackerel to jumbo flying squid. The latter dominated this year’s talks.

Fishing for jumbo flying squid (Dosidicus gigas) has expanded rapidly over the past two decades. The number of squid-jigging vessels operating in SPRFMO waters rose from 14 in 2000 to more than 500 last year, almost all of them flying the Chinese flag. Meanwhile, reported catches have fallen markedly, from more than 1 million metric tons in 2014 to about 600,000 metric tons in 2024. Scientists worry that fishing pressure is outpacing knowledge of the stock.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

22:14

The Big Idea: Eleanor Lerman [Whatever]

Pets are more than just roommates we feed and scoop poop for, they’re often a source of emotional support and comfort in our complicated, lengthy lives. Author Eleanor Lerman explores the bond between furry friends and humans in her newest collection of short stories, King the Wonder Dog and Other Stories. Whether your cat is in your lap or on your keyboard, give them a pet as you read along in the Big Idea.

ELEANOR LERMAN:

Having just completed a book of poetry in which much of the work examined the concept of grief about a lost parent (and offered the idea that even Godzilla might be lonely for his mother), I was thinking about what I might write next when I saw a TV commercial that featured a group of older women. They were all beautifully dressed, had expensive haircuts that made gray hair seem like a lifestyle choice, and were laughing their way through a meal on the outdoor terrace of a restaurant. I won’t mention the product being advertised, but they discussed how happy they all were to be using it and to have the love and support of their charming older women friends, who used it too. This is one version of aging in our culture: cheerful, financially secure, medically safeguarded, and surrounded by supportive friends. In this version, the body cooperates, the future is manageable, and loneliness is nowhere in sight.

That’s one way older women—and men—are portrayed in our culture: happy as the proverbial clam and aging with painless bodies and lots of money to pay for the medical care they will likely never need. In literary fiction, however, aging men and women are often depicted in a very different setting: traveling alone through a grim country, with broken hearts and aching bodies until we leave them at the end of their stories hoping—though not entirely believing—that we will avoid such a fate ourselves.

So, what I decided to do in King the Wonder Dog and Other Stories, was to explore what is perhaps a middle ground by writing about both women and men living alone who are growing older and are confounded by what is happening to them. They still feel like their younger selves but are aware that their bodies are changing, that the possibility of once again finding love in their lives is unlikely and that loneliness has begun to haunt them like an aging ghost.

Having had pets in my life for many years—and being aware that animals, too, can feel loneliness and fear—I paired each man and woman in my stories with a lonely dog or cat and tried to work out how that relationship would ease the sadness in both their lives. One memory I drew on was how, when I was young and living alone, I had a little cat that someone had found in the street and gave to me. I had never had a pet before (other than a parakeet, which didn’t give me much to go on) and this little cat was very shy, so I didn’t quite know how to relate to her. But somehow, bit by bit, she cozied up to me, and when I was writing, she was always with me, sitting on my lap or on my feet.

I have no idea how animals conceptualize themselves and their lives, but I do know they have feelings and I hope that for the eighteen years she and I lived together, my cat felt safe and cared for. And still, today, I sometimes think about the unlikely sequence of events that brought us together: how a random person found a tiny kitten, all alone, crouched behind a garbage can, and how that random person was sort of friends with a sort of friend of mine who happened to tell me about the kitten and asked if I knew anyone who would take her and I said yes: me. I don’t know why I said yes, but I’m glad I did. Her name, by the way, was simply Gray Cat, which probably shows how unsure I was about whether I would be able to care for her well enough to at least keep her alive.

After that, I was never without a cat or dog, and now I usually have both. The little dog I have now is a sweet, happy friend who seems not to have a care in the world, but I often see her sitting on the back of my couch, staring out the window at the ocean not far beyond my window and I wonder what she thinks about what she sees. What is that vast, shifting landscape to her? And who am I? A friend who pets her and feeds her and gives her those wonderful treats she loves? Maybe she was frightened when she was separated from her mother but otherwise, I think she is having a happy life—at least I hope so. And sometimes when I walk her, I think about what will happen when she’s no longer with me and I’m even older than I am now. Could I get another dog? I have painful issues with my back that sometimes make it hard for me to walk and I certainly can’t walk any great distance—could I maybe get a dog that doesn’t need to walk too far or somehow shares my disability?

All these thoughts have gone into the stories in King the Wonder Dog, in which men and women are growing older, have illnesses, are frightened by how lonely they feel, and in one way or another—and often to their surprise—are able to bond with a dog or cat who is also in a tenuous situation. And through that bond, the people and the animals find at least a little bit of happiness in their lives, a little bit of the shared comfort that arises from one creature caring for another. I hope those who read the book will feel some of that comfort, too.


King the Wonder Dog and Other Stories: Amazon|Barnes & Noble|Books-A-Million|Bookshop

Author socials: Website|Facebook

21:35

Page 0 [Flipside]

Page 0 is done.

The disturbing white paper Red Hat is trying to erase from the internet [OSnews]

It shouldn’t be a surprise that companies – and, for our field, technology companies specifically – working with the defense industry tend to raise eyebrows. With things like the genocide in Gaza, the threats of genocide and war crimes against Iran, and the mass murder in Lebanon, it’s no surprise that western companies working with the militaries and defense companies involved in these atrocities are receiving some serious backlash.

With that in mind, it seems Red Hat, owned by IBM, is desperately trying to scrub a certain white paper from the internet. Titled “Compress the kill cycle with Red Hat Device Edge”, the 2024 white paper details how Red Hat’s products and technologies can make it easier and faster to, well, kill people. Links to the white paper throw up 404s now, but it can still easily be found on the Wayback Machine and other places.

It’s got some disturbingly euphemistic content.

The find, fix, track, target, engage, assess (F2T2EA) process requires ubiquitous access to data at the strategic, operational and tactical levels. Red Hat Device Edge embeds captured, analyzed, and federated data sets in a manner that positions the warfighter to use artificial intelligence and machine learning (AI/ML) to increase the accuracy of airborne targeting and mission-guidance systems.

[…]

Delivering near real-time data from sensor pods directly to airmen, accelerating the sensor-to-shooter cycle.

[…]

Sharing near real-time sensor fusion data with joint and multinational forces to increase awareness, survivability, and lethality.

[…]

The new software enabled the Stalker to deploy updated, AI-based automated target recognition capabilities.

[…]

If the target is an adversary tracked vehicle on the far side of a ridge, a UAS carrying a server running Red Hat Device Edge could transmit video and metadata directly to shooters.

↫ Red Hat white paper titled “Compress the kill cycle with Red Hat Device Edge”

I don’t think there’s something inherently wrong with working together with your nation’s military or defense companies, but that all hinges on what, exactly, said military is doing and how those defense companies’ products are being used. The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense (e.g. helping Ukraine fight off the Russian invasion).

There’s always going to be difficult grey areas, but any military or defense company supporting the genocide in Gaza or supplying weapons to kill women and children in Iran is unequivocally wrong, morally reprehensible, and downright illegal on both an international and national level. It clearly seems someone at Red Hat feels the same way, as the company has been trying really hard to memory-hole this particular white paper, and considering its word choices and the state of the world today, it’s easy to see why.

Of course, the internet never forgets, and I certainly don’t intend to let something like this slide. We all know companies like Microsoft, Oracle, and Google have no qualms about making a few bucks from a genocide or two, but it always feels a bit more traitorous to the cause when it’s an open source company doing the profiting. It feels like Red Hat is trying to have its cake and eat it too, by, as an IBM subsidiary, trying to both profit from the vast sums of money sloshing around in the US military industrial complex as well as maintain its image as a scrappy open source business success story shitting bunnies and rainbows.

It’s a long time ago now that Red Hat felt like a genuine part of the open source community. Most of us – both outside and inside of Red Hat, I’m sure – have been well aware for a long time now that those days are well behind us, and I guess Red Hat doesn’t like seeing its kill cycle this compressed.
