Wednesday, 29 April

11:21

Claude Mythos Has Found 271 Zero-Days in Firefox [Schneier on Security]

That’s a lot. No, it’s an extraordinary number:

Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser. We wrote previously about our collaboration with Anthropic to scan Firefox with Opus 4.6, which led to fixes for 22 security-sensitive bugs in Firefox 148.

As part of our continued collaboration with Anthropic, we had the opportunity to apply an early version of Claude Mythos Preview to Firefox. This week’s release of Firefox 150 includes fixes for 271 vulnerabilities identified during this initial evaluation.

As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus. For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it’s even possible to keep up.

Our experience is a hopeful one for teams who shake off the vertigo and get to work. You may need to reprioritize everything else to bring relentless and single-minded focus to the task, but there is light at the end of the tunnel. We are extremely proud of how our team rose to meet this challenge, and others will too. Our work isn’t finished, but we’ve turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively.

They’re right. Assuming the defenders can patch, and push those patches out to users quickly, this technology favors the defenders.

News article.

10:14

Photoshopping the package [Seth's Blog]

I bought a snack food the other day, and was disappointed to discover that the thing inside the container had little in common with the picture on the front. It was pallid, lifeless and drab.

The marketer who decided to improve the picture was making a choice, one with consequences. When you choose to disappoint a customer later so you can make a sale right now, you’ve also chosen to create disappointment for a living.

If you’re not proud of it, don’t serve it. Improving the image on the package shouldn’t be a substitute for making something people want to buy.

09:00

The Rich Roe of Wisdom [Penny Arcade]

New Comic: The Rich Roe of Wisdom

06:28

Girl Genius for Wednesday, April 29, 2026 [Girl Genius]

The Girl Genius comic for Wednesday, April 29, 2026 has been posted.

Tuesday, 28 April

23:42

Urgent: Big media consolidation [Richard Stallman's Political Notes]

US citizens: call on Congress to block Paramount from consolidating the main US news media.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Cannabis law [Richard Stallman's Political Notes]

The bully is eager to reclassify marijuana. The change in regulations he wants would make it easier to do research on medical uses, but would not relieve the threats and restrictions on people who actually use marijuana or its derivatives.

Argentina's President ordered government to invest in cryptocurrency [Richard Stallman's Political Notes]

Argentina's President Milei imitated another right-wing extremist president by accepting a personal payoff for ordering the country's government to invest in cryptocurrency.

Some schools want to remove personal computers from classrooms [Richard Stallman's Political Notes]

Some schools, and some US states, want to get rid of personal computers for students in the classroom, for educational reasons.

It is too bad they don't appreciate the injustice of the software in those computers, because that is a separate reason for doing the same thing. The two reasons are not entirely independent — the fact that the software is nonfree is part of the explanation for why it does bad things. But they are independent enough that they can broaden the base of the argument.

We need to bring these two converging movements together.

Columbia's Center on Global Energy Policy [Richard Stallman's Political Notes]

Columbia's Center on Global Energy Policy (CGEP) describes itself as an independent organization producing research on energy policy. In fact, it gets millions of dollars from oil companies, and what it "produces" is obtained wholesale from them.

Smaller fraction of people post on "social media" in UK [Richard Stallman's Political Notes]

In the UK, a social change is occurring: a smaller fraction of people post regularly on "social media".

People who formerly loved Twitter and the idea of "social media" say that there is no such thing any more, and they miss it.

I don't personally know what it is they miss. I never used Twitter because that required running nonfree JavaScript code, and I refuse on principle to do that. I could not even see individual postings there, until Nitter provided a way to do that without submitting to nonfree software. But ex-Twitter killed off the API which made that possible.

22:56

Everyone’s an Engineer Now [Radar]

Cat Wu leads product for Claude Code and Cowork at Anthropic, so she’s well-versed in building reliable, interpretable, and steerable AI systems. And since, famously, 90% of Anthropic’s code is now written by Claude Code, she’s also deeply familiar with fitting these tools into routine day-to-day work. Last month, Cat joined Addy Osmani at AI Codecon for a fireside chat on the future of agentic coding and, equally important, agentic code review; how Anthropic actually uses the tools it’s building; and what skills matter now. A lot of what she described is worth sitting with for a while.

The feedback loop is itself a product

Claude Code’s origin story may surprise you. Boris Cherny initially built it as a side project to test Anthropic’s APIs. Then he shared the tool in a notebook, and within two months the entire company was using it. That organic growth, Cat said, was part of what convinced the team it was worth releasing externally.

But what really made that internal adoption legible was the response on Anthropic’s internal “dog-fooding” Slack channel. The Claude Code channel gets a new message every 5 to 10 minutes around the clock, and this feedback directly and immediately informs the product experience. Cat described it this way:

We hire for people who love polishing the user experience. And so a lot of our engineers actually live in this channel and find when there’s issues with new features that they’ve worked on and they proactively lay out the fixes.

The team ships new versions of Claude Code to internal users many times a day. The feedback loop is tight enough that it functions as a continuous integration system for product quality, not just code quality.

The best illustration of how far this goes: Cat accidentally introduced a small interaction bug between prompts and auto-suggestions. But by the time she started working on a fix, she found another team member had already beaten her to it. It turns out, he had set up a scheduled task in Claude Code to scan the feedback channel for anything that hadn’t been responded to in 24 hours and open a PR for it. When Cat hadn’t yet gotten to a fix (whoops!), her teammate’s Claude saw the unaddressed issue and fixed it for her. And Cat only found out when “[her own] Claude noticed that his Claude had already landed a change.”   

The infrastructure for rapid improvement, in other words, is now partly automated. The agents are writing the code, then monitoring the feedback and closing the loop.

The bottleneck has shifted to review

There’s no question that AI-assisted coding has created a boom in output: Anthropic engineers are producing roughly 200% more than they were a year ago, Cat noted. Today the main constraint is reviewing all that code to ensure it’s production-ready.

Cat’s team made a deliberate architectural choice about how to handle this. Their conclusion: You can buy a lot of additional robustness for not that much extra cost.

We opted for the heaviest, most robust version [of code review]. We actually plot how many agents and how comprehensive of a review Claude does and then how many bugs does it recall. And we picked a number of very high recall and decided we should ship this, because if you really want AI code review to be a load-bearing part of your process, you actually probably just want the most comprehensive possible review.

The review agent doesn’t just look at the diff. It traces code across multiple files and catches bugs in adjacent code that has nothing to do with the change in question. Cat gave two examples. One was a ZFS encryption refactor where the agent found a key cache invalidation bug that wasn’t related to the author’s change at all but would have invalidated it. The other was a routine auth update that turned out to have a bad side effect, caught premerge. In both cases, engineers manually reviewing the code likely would have missed the bugs.

The human review that remains is deliberately small in scope. For most PRs, the human reviewer skims for design principle violations and obvious problems and assumes functional correctness has been handled. Five to ten review agents run in parallel, each given a slightly different task; they return their findings independently, which are then deduplicated.

The cultural shift that made this work, though, was ownership. The team moved to a model where the engineer who authors a PR owns it end to end, including postdeploy bugs, and doesn’t lean on peer reviewers to catch mistakes. “Otherwise,” as Cat pointed out, “you have situations where junior engineers put out a bunch of PRs and then your senior engineers are like drowning in AI-generated stuff where they’re not sure how thoroughly it’s been tested.”

Full ownership meant the AI review had to actually be trustworthy, which drove the decision to go for high recall rather than a lighter touch. That said, engineers are still expected to understand every line of code an agent creates… for now. As Cat explained, it’s the only way to truly prevent “unknown security vulnerabilities and to be able to quickly respond to incidents if they are to happen.”

Everyone’s kind of an engineer now

Cowork, Anthropic’s agent tool for nontechnical users, is the company’s attempt to take what Claude Code does for engineers and bring it to knowledge work more broadly. The picture Cat sketched is of someone looking at five or six agent tasks running simultaneously in a side panel, managing a fleet of agents the way a senior engineer manages a PR queue.

In the nearer term, she’s keeping tabs on the shift toward people using Claude Code to build things for themselves, their teams, or their families that wouldn’t have justified professional development effort or “otherwise been possible.” The prototype is the garage project, the family expense tracker, the tool that a small team actually needs but that no SaaS product quite addresses. Cat’s goal and hope is that Claude Code helps people “solve their own problems for themselves” and “stewards a new future of personal software.”

Product taste as the new technical skill

More people building more software is unambiguously good. Boris Cherny has even floated the idea that coding as we know it is “solved.” But what does that mean for the craft of software engineering? Cat’s read of the current moment is more nuanced, and more useful:

I think pre-AI, the skills that were very important were being able to take a spec and implement it well. And I think now the really important skill is product taste. Even for engineers. Can you use code to ingest a massive amount of user feedback? Do you have good intuition about which feature to build to address those needs, because it’s often different than exactly what users are asking you for? And then, when Claude builds it, are you setting up the right bar so that what you ship people actually love?

Cat’s not alone in highlighting the importance of taste in a world where code is a commodity. Steve Yegge, Wes McKinney, and many others, myself included, see taste and judgment as a uniquely human value. This has practical implications for how engineers should spend their time now, and for what the next generation needs to learn.   

For junior engineers specifically, Cat described a progression: Start by using Claude Code to understand the codebase (ask all the “dumb questions” without embarrassment), take those answers to a senior engineer for calibration, and then close the loop by updating the CLAUDE.md with whatever was missing. The last step is the nonobvious one.

Think of Claude Code as your intern that you’re trying to level up. Like, teach it back to Claude. Add a /verify slash command. Put it in the CLAUDE.md or the agent README. Approach this as senior engineers helping you level up, and then you helping Claude and other agents level up.

The improvement process, in other words, should be bidirectional. Engineers get better at using the tools; the tools get better through the engineers’ accumulated knowledge. And significantly, this process keeps humans firmly in the loop, playing a role that’s “active, continuous, and skilled.”

You can watch Cat and Addy’s full chat, plus everything else from AI Codecon on the O’Reilly learning platform. Not a member? Sign up for a free 10-day trial, no strings attached.

22:14

Link [Scripting News]

You kind of get a sense when the platform vendor is going to compete with you instead of work with you. That's how big companies usually work with independent devs. There is a bigger picture, developers who might build on WordPress as opposed to inside. Now probably no one is going to try, maybe not even me. Or maybe we will. :-)

22:00

Apple wants to kill your Time Capsule, but they run NetBSD so they can’t [OSnews]

It seems like Apple is finally going to remove support for AFP from macOS, twelve years after first moving from AFP to SMB for its default network file-sharing technology. This change shouldn’t impact most people, as it’s highly unlikely you’re using AFP for anything in 2026. Still, there is one small group of people to whom this change has an actual impact: owners of Apple’s Time Capsule devices. Time Capsules only support AFP and SMB1, and with SMB1 being removed from macOS ages ago, and now AFP being on the chopping block as well, macOS 27 would render your Time Capsule more or less unusable.

It’s important to note that the last Time Capsule sold by Apple, the fifth generation, was released in 2013, and the product line as a whole was discontinued in 2018. If you bought a Time Capsule in the twilight years of the line’s availability, I think you have a genuine reason to be perturbed by Apple cutting you off from your product if you upgrade to macOS 27, but at least you have the option of keeping an older version of macOS around so you can keep interacting with your Time Capsule. It still feels like a bit of a shitty move though, as those fifth generation models came with up to 3TB of storage, which can still serve as a solid NAS solution.

Thank your lucky stars, then, that open source can, as usual, come to the rescue when proprietary software vendors do what they always do and screw over their customers. Did you know every generation of Time Capsule actually runs NetBSD, and that it’s trivially easy to add support for Samba 4 and SMB3 authentication to your Time Capsule, thereby extending its life expectancy considerably? TimeCapsuleSMB does exactly that.

If the setup completes successfully, your Time Capsule will run its own Samba 4 server, advertise itself over Bonjour (show up automatically in the “Network” folder on macOS), and accept authenticated SMB3 connections from macOS. You should then be able to open Finder, choose Connect to Server, and use a normal SMB URL instead of relying on Apple’s legacy stack. You should also be able to use the disk for Time Machine backups.

↫ TimeCapsuleSMB

It’s compatible with both NetBSD 4 and NetBSD 6-based Time Capsules, although you’ll need to run a single SMB activation command every time a NetBSD 4-based Time Capsule reboots. This will also disable any AFP and SMB1 support, but that is kind of moot since those are exactly the technologies that don’t and won’t work anymore once macOS 27 is released. The installation is also entirely reversible if, for whatever reason, you want to undo the addition of Samba 4.

This whole saga is such an excellent example of why open source software protects users’ rights, by design.

20:21

Remembering Seth Nickell [LWN.net]

LWN has received the sad news that Seth Nickell passed away, on April 16, from his father, Eric Nickell:

Many of you knew Seth from his work in the GNOME Usability Project, but his roots in that community trace back to his high school years. As a father of a high school junior, I remember being terrified when he flashed the hard drive of a computer he purchased for himself with this weird "Linux" thing. And I was a bit awed by the college application essay he wrote about open source and Linus Torvalds.

It was his interest in packet radio that drew him into working with the Linux AX.25 HOWTO as a high schooler, and from there to his focus on making the Linux desktop work for everyone.

The family plans to share news of a memorial at a later time. He will be deeply missed.

19:21

The Big Idea: Marie Vibbert [Whatever]

Though humans have a strong desire to be individuals, slightly stronger is our innate need not to be alone. Humans are not solitary creatures, so why do we try so hard to act like we are all just individuals with no ties or connections to those around us? Author Marie Vibbert wonders if we wouldn’t all be better off as a hive mind in the Big Idea for her newest novel, Multitude.

MARIE VIBBERT:

Over 11,000 tons of discarded clothing lie in the Chilean desert. These are garments that never sold, from low-end and high-end brands alike, and almost entirely made of petroleum-based fabrics: rayon, polyester, acrylic. It’s a major environmental problem. The clothes catch fire, leak chemicals and microplastics, and just… keep coming.

Meanwhile, in Scotland, they are looking for new, industrial applications for wool because this renewable clothing resource that doesn’t spontaneously combust sits rotting in warehouses, unable to compete with the subsidized price of polyester.

Humanity has a problem. A communication problem that creates wasted effort and wasted resources. Food being thrown out while people starve. Diseases like cholera running rampant when their cures exist. I could go on and on with examples. Why can’t we put our efforts where they are needed? Why do our systems dictate so much cruel irony?

When you look at humanity as a whole, we are tearing ourselves apart, starving ourselves, killing ourselves. We don’t seem to understand that we are us? 

These were my thoughts going into a project whose first note was: The Borg, but friendly?

I thought it would be a short story. Something quick. Get in and get out. A hive mind comes to Earth, tries to communicate with humans as a hive, fails, and sees what a mess we are. Nudge the reader toward empathy, toward seeing problems between “us” and “them” as an insufficient definition of “us.” I figured it’d hit about 2,000 words long. But the more I thought about it, the bigger the problem became. How to show the perspective? How to encompass humanity and then move the camera back to show us in perspective?

How do we look, to a hive mind? What would they expect?

Humans are, in many ways, a collective creature. A single human can no more build a skyscraper than a single ant can build a mound. Even writing a novel is a collective act, when you consider that this language that I am using is a vast collection of consensuses on symbols, meaning, and parsing. English, on a certain level, is a stack of inside jokes passed down and expanded every generation.

Beyond that, every work of fiction builds on and reacts to those that came before. I am writing in a genre, science fiction, defined by all the works labelled as such, and in turn defined by the pressures and uncertainties of our society that caused the first authors to write things not of this world, the first readers to like that and want to emulate it, and on, and on. 

I was on a panel at WorldCon on Hive Minds in Science Fiction when it occurred to me that an assumption I hadn’t seen tackled yet was that collectivism automatically meant a repression of individuality. It seems an easy conclusion? If my family votes democratically on dinner, my individual desire to eat nothing but spaghetti every night is subordinated. Yet, the four of us are still individuals as we enjoy my spouse and child’s preferred chicken and rice.

Why wouldn’t a hive mind contain room for the individual? Does a Borg stop loving spaghetti once it absorbs the thoughts of thousands of chicken fans? Wouldn’t it be more of a conversation than a dictatorship? If it’s truly collective, why would there be dictators? And, come to think of it, don’t we, as large groups, change our opinions over time? Americans once ate more chipped beef on toast than chicken fingers. We thought the Edwardian S-bend corset and the mullet were great ideas. We went from loving elephant leg jeans to skinny jeans. Collectively. Like an individual goes through phases of loving fly fishing or obsession with one particular series of books, societies go through a group fondness for orange or dark wood paneling.

At the risk of making this blog post nothing but rhetorical questions, why do we assume innovation is a characteristic of the individual? Why do we assign conformity to the collective alone?

I tried to imagine myself a hive-member. Many advantages came immediately to mind. I wouldn’t have had to gamble on picking a college major; I’d have access to the needs of the society around me to help find work that was needed. I wouldn’t be competing for the access to share my stories, I’d just tell them, and my hive would hear them and like them or not.

Competition is not just the “healthy” activity of small businesses or inventors, of students seeking academic awards. It’s also war. All around the world, humans are killing humans so that they can avoid sharing resources. Humans are defining others, drawing lines around some of their siblings and excluding others, to limit access to resources. Yet to a non-human observer, we are one species, one sprawling community, alike in our needs and wants and behaviors.

And humans can be so kind, too.

In 2023, I had to travel to New York City to get a visa to attend my first Hugo awards as a nominee, and as I sat in Central Park waiting for my appointment, admiring the unnatural warmth of the post-climate-change day, I saw a middle-aged man patiently leading a group of elderly people. He looked so happy. I dashed off four pages in my journal about him, imagining his life taking care of elders. I wondered why my science fiction stories weren’t as easy or as fun as simple character portraits. I enjoyed the flashes of lives I’d seen in short stories by Mary Grimm or Maureen McHugh, or the prose poems of Mary Biddinger.

I used to love to climb into a character’s head and walk around, show her worries and fears and daily chores, and then I’d show my work to science fiction writers and be told I had no plot, or perhaps I was “just” a poet. Because of this critique, I chose to wall off the desire to write the way that came most naturally, eschewing character-study and stream-of-consciousness in favor of sentences that “did something.” (My own term.) I began to focus on ideas, on technology, on concrete consequences and violent action.

Eventually, I got pretty good at it, good enough to feel its limitations.  I opened up my old “plotless” stories and found them not so plotless, after all. Rather, they reflected my own sense of helplessness as a teen and early-twenties writer, and that point of view was uninteresting to the science fiction editor of the 90s and 2000s, who focused on competent characters moving the plot by choice.

At the young age of 47, I revised one of those 20-year-old “plotless” stories and sold it to a market paying the Science Fiction and Fantasy Writers Association’s professional rate of eight cents a word. Not to brag. (Yes, to brag). In some ways, the genre itself has moved on from rigorously espousing action and certainty from its heroes, but also, I had learned how to structure a story through the mechanics of action, and this helped me see the similar structuring of non-action-based stories.

Part of the literary legacy my writing depends on is science fiction’s desire for logical, action-driven plots, but the origins of this project are the literary flash fiction piece, rooted in character and moment, and my desire to return to it, now that I have proven myself in the plot mines. 

Which brings us back to the beginning: How better to show the individual in the collective of humanity than through a series of very short point of view pieces? The result is an introspective novella I wrote in thousand-word chunks around other projects. More than any other book I’ve written, I feel naked in its pages, exposing my deepest, most personal self. I felt free to do this because it was something I thought would never sell: too literary, too experimental.

Well, I sent it to Apex Books and they disagreed. I hope you enjoy, and be kind to my Space Cephalopods. 

—-

Multitude: Amazon|Barnes & Noble|Bookshop

Author socials: Website|Bluesky|Instagram

18:56

Developing a cross-process reader/writer lock with limited readers, part 1: A semaphore [The Old New Thing]

Say you want to have the functionality of a reader/writer lock, but have it work cross-process. The built-in SRWLOCK works only within a single process. Can we build a reader/writer lock that works across processes?

For convenience, let’s say that you want to support a maximum of N simultaneous readers, for some fixed value N. We can do this:

  • Create a semaphore with a token count of N. Share this semaphore with all of the processes, either by giving it a name or by duplicating the handle into each of the processes.
  • To take a read lock, claim one token from the semaphore. To release the lock, release the token.
  • To take a write lock, claim N tokens from the semaphore. To release the lock, release N tokens.

The idea for the write lock is that it’s accomplished by claiming all the read locks, thereby ensuring that nobody else can get a read lock.

#define MAX_SHARED 100
HANDLE sharedSemaphore;

void AcquireShared()
{
    WaitForSingleObject(sharedSemaphore, INFINITE);
}

void ReleaseShared()
{
    ReleaseSemaphore(sharedSemaphore, 1, nullptr);
}

void AcquireExclusive()
{
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        WaitForSingleObject(sharedSemaphore, INFINITE);
    }
}

void ReleaseExclusive()
{
    ReleaseSemaphore(sharedSemaphore, MAX_SHARED, nullptr);
}
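
The same token-counting scheme can be sketched in-process with an ordinary counting semaphore, outside the Windows API. The Python class below is only an illustration of the idea; the names (TokenRWLock, acquire_shared, and so on) are mine, not from the article, and unlike the article's version it is not cross-process, since Windows semaphores can be named and shared between processes while threading.Semaphore cannot.

```python
import threading

class TokenRWLock:
    """Reader/writer lock built from one counting semaphore holding N tokens.

    Readers claim one token each; a writer claims all N, which both waits
    for current readers to finish and blocks new readers from entering.
    """

    def __init__(self, max_shared):
        self.max_shared = max_shared           # N: maximum simultaneous readers
        self.sem = threading.Semaphore(max_shared)

    def acquire_shared(self):
        self.sem.acquire()                     # claim one token

    def release_shared(self):
        self.sem.release()                     # give the token back

    def acquire_exclusive(self):
        for _ in range(self.max_shared):       # claim every token, one at a time
            self.sem.acquire()

    def release_exclusive(self):
        for _ in range(self.max_shared):       # return all N tokens
            self.sem.release()
```

Note that a writer claiming tokens one at a time can interleave with other claimants; the post is teasing that there is still a problem here for next time.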

Since we are using Wait­For­Single­Object, we can also add a timeout, so that the caller can decide to abandon the operation if they can’t claim the lock.

bool AcquireSharedWithTimeout(DWORD timeout)
{
    return WaitForSingleObject(sharedSemaphore, timeout) == WAIT_OBJECT_0;
}

bool AcquireExclusiveWithTimeout(DWORD timeout)
{
    DWORD start = GetTickCount();
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        DWORD elapsed = GetTickCount() - start;
        if (elapsed > timeout ||
            WaitForSingleObject(sharedSemaphore, timeout - elapsed) == WAIT_TIMEOUT) {
            // Restore the tokens we already claimed.
            if (i > 0) {
                ReleaseSemaphore(sharedSemaphore, i, nullptr);
            }
            return false;
        }
    }
    return true;
}

Exclusive acquisition is tricky because we have to call Wait­For­Single­Object multiple times, with decreasing timeouts as time passes. If we run out of time, then we need to give back the tokens we had prematurely claimed.
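
The roll-back bookkeeping is easy to get subtly wrong, so here is the same decreasing-timeout loop restated as a small Python function, assuming an in-process threading.Semaphore and a monotonic clock in place of GetTickCount (the function and parameter names are mine, for illustration):

```python
import threading
import time

def acquire_exclusive_with_timeout(sem, n, timeout):
    """Try to claim all n tokens within `timeout` seconds.

    Each wait gets only the time left on the overall budget. On timeout,
    release any tokens already claimed so readers are not permanently
    locked out, and report failure.
    """
    deadline = time.monotonic() + timeout
    for i in range(n):
        remaining = deadline - time.monotonic()
        if remaining <= 0 or not sem.acquire(timeout=remaining):
            for _ in range(i):        # restore the tokens we already claimed
                sem.release()
            return False
    return True
```

As in the C version, the key invariant is that a failed exclusive acquisition leaves the semaphore's token count exactly as it found it.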

There’s still a problem here. We’ll look at it next time.

The post Developing a cross-process reader/writer lock with limited readers, part 1: A semaphore appeared first on The Old New Thing.

17:42

Link [Scripting News]

Claude unlearns things that we had settled a long time ago. It fumbles around with a process, making it worse with every iteration, the same fumbling it did five days ago when it initially learned how to do what it can't do now. Usually when I regress in software, I am responsible for it; I did something to break it. But here's a tool that's capable of derailing us with me doing nothing new. In that way it behaves more like an imperfect human than a GIGO machine.

16:56

Link [Scripting News]

New version of XML-RPC package for JavaScript. It now handles POST messages that don't have a body.

15:49

Fedora Linux 44 has been released [LWN.net]

The Fedora Project has announced the release of Fedora Linux 44. There are "what's new" articles for Fedora Workstation, Fedora KDE Plasma Desktop, and Fedora Atomic Desktops. The Fedora Asahi Remix for Apple Silicon Macs, based on Fedora 44, is also available. See the Fedora Spins page for a full list of alternative desktop options.

Fedora Linux 44 Workstation ships with the latest GNOME release, GNOME 50. This comes with a long list of refinements to your desktop, including everything from accessibility to color management and remote desktop. Many of the applications that are installed by default on Fedora Workstation have also seen improvements, from Document Viewer to File Manager and Calendar. To learn more about these and other changes, you can read the GNOME 50 release notes.

KDE Plasma Desktop: If you are a KDE user, you should also notice a couple of very obvious changes. Fedora KDE Plasma Desktop 44 is based on the latest Plasma 6.6, which includes the new Plasma Login Manager and Plasma Setup to provide a more cohesive and integrated experience from the moment the computer is powered on for the first time. The installation process has been simplified, enabling you to easily set up Fedora KDE Plasma Desktop on a computer for a friend or a loved one.

The release notes include important changes between Fedora 43 and Fedora 44 for desktop users, developers, and system administrators.

[$] Strawberry is ripe for managing music collections [LWN.net]

There are dozens of music-player applications for Linux; the options range from bare-bones programs that only play local files to full-blown music-management projects with a full suite of tools for managing (and playing) a music collection. Strawberry is in the latter category; it has a bumper crop of features, including smart playlists, support for editing music metadata tags, the ability to organize music files, and more.

Moseying Around Cincinnati’s Asian Food Festival [Whatever]

I still have more posts to do about my trip to Colorado (I cannot seem to get through that dang trip!), but I wanted to post about my experience at Cincinnati’s Asian Food Festival because it just happened this past weekend and I thought some fresh content was a good way to get me into a writing mood.

I was so excited for this festival. I had it on my calendar for two whole months prior because I couldn’t wait for it. I told multiple friends about it out of excitement. I ended up going with Kayla, Brad, and Bryant, and we went on Saturday, since it’s only a two-day festival and Saturday just worked better for everyone instead of Sunday.

The Cincinnati Asian Food Festival has been going on for fifteen years, with this past year surpassing 125,000 attendees, and it features over 60 different vendors. Most of these are food and drink vendors, but there are also some other goods for sale and even a ZYN station set up, just in case you really needed your nicotine fix.

I am sad to say I didn’t have a super positive experience at the festival, despite my initial excitement for it. As you can imagine from hearing the words “125,000 attendees,” it was very crowded. On one hand, I’m happy that something like an Asian Food Festival would be a popular event and that all these businesses are getting a ton of traffic, but on the other hand, when you cram that many people into a three-block radius, it gets very difficult to walk around.

Long lines impede the flow of foot traffic (what little flow there is) because they jut right out into the street everyone is trying to walk down. Every line to order is at least twenty minutes long, and then you have to wait to actually receive your food. If you’re with your friends, you will absolutely lose them in the crowd unless you’re literally holding hands. You will get shoulder checked by multiple people and almost kick a pug you didn’t see. There is absolutely nowhere to sit and eat, or even stand and eat. There’s also almost no shade.

For what it’s worth, these issues are not limited to just the Asian Food Festival. This is pretty much all food festivals ever. And I go to a fair amount of them. I’m honestly very tired of these issues, and I feel like the Asian Food Festival just so happened to be the straw that broke the camel’s back. You can’t have a literal food festival and then have nowhere for people to eat. You need to figure out better line control so people can actually tell the line apart from the sea of people, and find where the end of the line is.

At one point, I ordered something and then tried to move to the “pick up” area to wait for my food, but it was so intensely packed that I couldn’t move from the ordering spot. I tried to step to the side in the other direction but was met with another wall of people. The cashier ended up telling me to move, and I got frustrated because I was actively trying to, but there was nowhere to move to! Like, yes I am well aware of the line behind me, I promise I’m not just standing at the register for fun.

I mean look at this!

A large sea of people in the middle of the street. A huge, daunting crowd that seems insurmountable to get through.

Imagine trying to get through this with a stroller, or in a wheelchair. You’re gonna have to run someone over if you want through. There were so many points where literally just nobody was moving. Like a traffic jam, but just people standing completely still and there’s no way around anyone. So you just stand there and wait a few minutes until you can continue taking tiny half-shuffle steps and try not to step on the back of the shoes of the person in front of you.

Also, I know you’re probably thinking that I just happened to go during the busiest time. Well, it was open from 11am to 10pm on Saturday, and I got there at 11:45am and left at 7pm. So I was there for a hot minute. I’m sure 9pm might’ve been less crowded, but I’m also sure a lot of places would be sold out or closing down for the night by then to prep for Sunday.

Okay, so now that I’ve gotten my population qualms and lack of seating issues out of the way, let’s talk about the actual food and drinks I got.

Oh, I almost forgot, parking in a public lot nearby was $30. So, that fucking sucked. And, yes, there are more financially savvy options, like taking the bus or walking, but I live two full hours away from the Court Street Plaza where it was held, so yeah, I need somewhere to park my dang car.

It always takes me a couple passes of everything to figure out what I want to try first. I knew I wanted to start off with a coffee, and Lotus Street Foods had a Thai Iced Coffee for six dollars:

Bryant's hand holding out my Thai Iced Coffee.

Bryant so kindly modeled my beverage for me because I was holding the actual food item I got from Lotus. Here’s their Asian fried jerky for nine dollars:

A small container holding a few pieces of Asian jerky and a small mound of white rice.

I actually really liked the flavor of the jerky. It had a sticky, sort of sweet glaze, but it was definitely hard to bite through and chew. Wasn’t quite the same texture as jerky but wasn’t the same texture as regular meat. The rice was unfortunately cold and extremely bland. Great flavor on the meat though!

For the coffee, I would’ve liked a little more condensed milk in it. It wasn’t quite creamy enough for my taste and was just a little too plain coffee-y flavored. I like a sweeter, creamier coffee though, so I know I’m not the best judge of coffee when it actually tastes like coffee. I just think the balance was a little off. And for what it’s worth this wasn’t my first time trying this drink, so I have some sweeter ones I’ve had in the past to compare it to.

Kayla really wanted to try the elote from LALO Chino Latino, especially since it wasn’t listed on their online menu that it was going to be offered:

A cob of corn covered in a light orange sauce and some cilantro.

She said it was totes delish last year, but sadly this elote missed the mark this time around so bad that she barely ate half. She let me try a bite and yeah, it was rough. The corn itself was cold and had no flavor, and was tough and almost rubbery in texture. It felt like something you shouldn’t actually be chewing on. The sauce was lackluster, and honestly if the corn itself isn’t good then the dish isn’t going to be good no matter what you put on top. So that was unfortunate.

However, I did get the Vietnamese Birria Beef Taco from them for six dollars, and their horchata coffee, also for six dollars:

A small birria taco and a side of dipping sauce being held by Bryant. He is also holding the coffee in the other hand.

I didn’t finish the Thai coffee, so I was hoping this horchata coffee was going to be the redeeming caffeine fix of the day. While I did like the horchata coffee better than my first coffee, I can confidently say it was totally lacking in horchata flavor. There were some notes of cinnamon in there, but I would not go so far as to label this as “horchata” coffee. Kayla got one too and agreed that it’s more like if you added a little bit of cinnamon to a regular latte. So that was a little disappointing.

As for the birria taco, it was so good! I know you can’t see the inside, but there was plenty of tender birria, and the cilantro and onion on top were nice and fresh. The consommé had a lot of good flavor, the outside was golden brown, and I was wishing I had gotten a second one.

The next place I stopped was Evolve Bake+Shop. Though it was only about 1:30, this stand was almost completely sold out of baked goods. By the time I did another pass through the street, they were sold out entirely and had gone back home to bake more goodies for Sunday. The owner was so sweet and apologetic, but honestly I’m thrilled for them that they sold out so quickly. I managed to get my hands on two of their few remaining cookies: their gluten-free ube crinkle cookie, and their strawberry matcha oatmilk cookie for four dollars each:

Two cookies, each one being held in one of Kayla's hands. They both are in plastic packaging. The ube crinkle one is purple with a white crinkle top, and the other one is green with a white drizzle and some pink chunks visible.

I actually didn’t know until I looked them up on Instagram for this post, but all their baked goods are 100% vegan/plant-based! It’s nice to know there are some vegan options at the festival.

I shared the ube cookie with everyone, and the consensus was that it was pretty good, but the gluten-free aspect of it made the mouthfeel just a little bit odd. Gluten-free stuff tends to have that sort of sandy texture sometimes. But it was dense and had good flavor.

As for the strawberry matcha cookie, I had that all to myself (as I am writing this post) and it was the bomb dot com! It’s super moist and soft, and has a great balance of sweetness and earthy matcha flavor. I think these cookies were well worth the four dollars. Evolve also won Best Desserts for the third year! I’m glad for them.

For years, it has been a dream of mine to try Tang Hu Lu. If you don’t recognize the name, I’m willing to bet you’ll recognize it when you see it. It’s hard to mistake the glassy, shiny, iconic strawberries on a stick. I got this Tang Hu Lu from Tenji Sushi for ten dollars:

A big kebab stick with four sugar covered strawberries on it and one green grape at the end.

I was a tiny bit disappointed by the presentation of this, because the pictures they had of it showed it having mandarin orange slices and more grapes, so only getting one grape and no orange slices was a bit of a letdown. But honestly, I can’t be too mad, because these strawberries were so good. They were juicy and sweet and perfectly firm without being that hard unripe texture. If you’ve ever had an urge to eat glass shards and not get hurt, this is the perfect food for you. The glassy sugar coating shatters apart and crunches so damn good, sort of like rock candy. I do think ten dollars was a lot for four strawberries and one grape, but at least I finally got to try the street food I’ve always wanted to.

There was no shortage of different Asian cuisines that were represented at this festival, including Indian dishes. Kayla ended up getting these chicken lollipops and cheesy naan bites from Khaao Macha, who were the Best of Yums winner last year:

Two flaming hot red colored chicken lollipops and one basket of cheesy naan.

I didn’t try the chicken, but Kayla said it was good (I did sniff it and it smelled like Taco Bell’s mild sauce packets). I did try some of the naan and it was definitely yummy. I mean, you really can’t go wrong with cheesy naan. The chicken was ten dollars and you got two of them, and the naan was seven dollars. I would say the naan was sizeable for the price, and good for sharing.

At this point, we took a little break on food and watched some of the free entertainment on the main stage:

A taiko drum performance, each of the performers wearing a matching red uniform.

I think taiko drums are cool so this was really awesome to see, and then there was a Nepali dance performance right after this. It was very neat to see different cultures’ traditions and performances. I like that the entertainment is free and they have such a variety of performances.

Back to snacking, I finally got to try my most anticipated item from the online vendor menu, Chhnagnh’s Pot Ang (roasted corn with sweet coconut sauce). I also tried their lemongrass beef skewer, and Kayla got their chicken skewer. The skewers were six dollars each and the corn was seven.

Two meat skewers and one corn on the cob, roasted and covered in creamy white sauce with green onions on top.

I can honestly say I’ve never had Cambodian food before, but this looked very promising. I absolutely loved the corn, it was roasted so perfectly and had great flavor. The coconut sauce wasn’t really giving coconut, but it was sweet and creamy so at least it added some texture and flavor, and weirdly enough the green onion went really well with it all. It just added a nice fresh component without overpowering anything flavor-wise.

Kayla let me try her chicken skewer and it was pretty good but the chicken was just a little dry. The beef was so delish though. It had just the right amount of lemongrass flavor in it without being overwhelming and was very tender and warm. This was my favorite savory food I tried all day.

The last thing I ate was from Fusako, and I hate to totally bash a place, but y’all. What I was presented with was egregious.

Here’s the menu on their truck:

A menu for Fusako, detailing three items: street corn gyoza, Japanese curry Coney, and a hash brown sushi fusion sort of dish. Everything looks totes delish and decked out.

This looked so good and impressive. Everything looked filling and decked out in garnishes and sauces and I had high hopes. I got the Mexican street corn gyoza, which was supposed to be crispy fried dumplings stuffed with sweet corn, with cotija cheese, a chili-lime aioli, lime zest, and green onion. Sounded amazing. Here’s what I got for eight dollars:

Two tiny dumplings covered in sauce and corn.

Two tiny gyoza, covered in a mess of sauce and corn, with no lime zest or green onions in sight. It looked so haphazardly thrown together. It was totally cold and the gyoza were tough instead of crispy. The entire thing lacked flavor, and the wait was so long. I was really disappointed.

I hated to leave on an L, but it was getting late.

Oh, and earlier in the day I had a really terrible yuzu mule for ten dollars.

In total, I spent $88 before tip (I bought Kayla’s chicken skewer and a Thai coffee for Bryant), and usually I just choose the 15% tip option but I’m not gonna do all that math. We’ll just say around a hundred bucks.

Overall, I just wasn’t really impressed with the food or drinks I had gotten throughout the day. There were some good things but my experience overall with how crowded it was and the prices and lack of seating just kind of made for a less than ideal experience. They clearly need to open up more blocks for the festival to spread out.

I always get so excited for food truck festivals, and I keep being let down by them. Is it me? Am I the problem? Am I just not cut out for the food truck lifestyle? I hate waiting in lines and I hate standing to eat. I don’t prefer fast, casual service, and I usually like my food to come on real dishes. Oh no. Maybe it is me.

Huge shout out to the Library Square public library for keeping me from having to use a Porta-Potty. Very happy to use actual toilets and wash my fucking hands. And get some AC for a second.

I am glad I got to experience something new and hang out with my friends, but I think I won’t return next year unless they implement some kind of crowd management or cap tickets.

What sounded the best to you? Have you been to any of the previous years of the festival? Let me know in the comments, and have a great day!

-AMS

15:28

Link [Scripting News]

A question I'd like to put out there. Maybe AI needs the massive data centers now, but they could definitely get more efficient over time. There might be another Moore's Law in there. And the work is going very fast, and maybe they're leaving other optimizations for later. Take a look at how computers themselves have gotten more efficient since when I started in the 1970s. It was a miracle that I could buy a computer to put in my living room in 1979. A couple of years before that I had a 100 pound terminal that I could lug cross-town to show my grandfather. We may end up with a lot of unused data centers and energy generation capacity. But that's how great evolutionary steps work. You go where you're called to go. We are a big Ouija board. This stuff is really important; we're going to remove layers and layers of tech, get to the answer sooner and more easily, and empower people with much less tech education than we have to do the good parts of what we do, the fun stuff. There's art in the lower-level stuff too, but in tech we like to bury that stuff and forget it's even there. That's how we get to build more complex machines that do more. By pushing the repetitive complex stuff into the pipes. If this were parallel to the development that led to smart phones, we're at the point where we have the glass palaces with huge cooling systems, and maybe Fortran has been invented, but it might still be machine code.

Link [Scripting News]

This week is being spent, among other things, teaching Claude how to write code that fits in with my library of apps. I like this. It's like a painter telling an assistant the rules for adding to the sculpture. Art has been practiced like that for a long time. Anyway, here's an example of my side of a workflow where we're getting its dialog management code, which works fine, to fit in with the other code. "these all look good, and the last one is most important, we don't need a blob of html to be there before you run, you create the dom structures you need. this may seem inefficient, but it makes it much easier to add a new element, or even more complex changes. that won't matter much to you, but when a human is editing it matters a lot. simplicity makes work flow better and reduces chance of being detoured by a bug that has to be found and fixed." I didn't edit that at all. I am also teaching it why things work the way they do because of differences between machines and humans. I'm learning a lot about our strengths and weaknesses from seeing how it would work, left to its own needs (i.e. no human-edited code base, just AI-edited).

15:07

In Memoriam: Tomáš Kalibera [LWN.net]

We have received the sad news that Tomáš Kalibera, a member of the R Project core team, has passed away after a short illness.

A friend who knew him well wrote to me: he was very happy, and his work fulfilled him. That is, perhaps, the best thing one can say about a life in open source — that the work mattered, that it reached millions, and that the person who did it found meaning in it.

Kalibera was mentioned in this 2019 article about C programs passing strings to Fortran subroutines. He will be greatly missed.

14:21

All FOSDEM 2026 videos are online [LWN.net]

FOSDEM's organizers have announced that all of the video recordings "worth publishing" from FOSDEM 2026 are now available.

Videos are linked from the individual schedule pages for the talks and the full schedule page. They are also available, organised by room, at video.fosdem.org/2026.

LWN's coverage of talks from FOSDEM 2026 can be found on our conference index.

Security updates for Tuesday [LWN.net]

Security updates have been issued by Debian (openjdk-21 and webkit2gtk), Fedora (botan3, chromium, cockpit, firefox, flatpak, gum, libarchive, libcoap, mingw-python3, ngtcp2, nss, openssh, openssl, openvpn, PackageKit, python3-docs, python3.11, python3.12, python3.13, python3.14, vim, and xrdp), Oracle (firefox, gdk-pixbuf2, java-1.8.0-openjdk, java-21-openjdk, python3.12, python3.9, sudo, and tigervnc), Red Hat (tigervnc and xorg-x11-server-Xwayland), Slackware (mpg123 and proftpd), SUSE (emacs, firefox, fontforge, freeciv, freerdp, libngtcp2-16, libsystemd0, and strongswan), and Ubuntu (authd, clamav, glance, haproxy, jq, lcms2, nginx, nltk, ntfs-3g, packagekit, pillow, strongswan, and vim).

13:56

When Correct Systems Produce the Wrong Outcomes [Radar]

We tend to assume that if every part of a system behaves correctly, the system itself will behave correctly. That assumption is deeply embedded in how we design, test, and operate software. If a service returns valid responses, if dependencies are reachable, and if constraints are satisfied, then the system is considered healthy. Even in distributed systems, where failure modes are more complex, correctness is still tied to the behavior of individual components. In modern AI systems, particularly those combining retrieval, reasoning, and tool invocation, this assumption is increasingly stressed under continuous operation.

This model works because most systems are built around discrete operations. A request arrives, the system processes it, and a result is returned. Each interaction is bounded, and correctness can be evaluated locally. But that assumption begins to break down in systems that operate continuously. In these systems, behavior is not the result of a single request. It emerges from a sequence of decisions that unfold over time. Each decision may be reasonable in isolation. The system may satisfy every local condition we know how to measure. And yet, when viewed as a whole, the outcome can be wrong.

One way to think about this is as a form of behavioral drift: systems that remain operational but gradually diverge from their intended trajectory. Nothing crashes. No alerts fire. The system continues to function. And still, something has gone off course.

The composability problem

The root of the issue is not that components are failing. It is that correctness no longer composes cleanly. In traditional systems, we rely on a simple intuition: If each part is correct, then the system composed of those parts will also be correct. This intuition holds when interactions are limited and well-defined.

In autonomous systems, that intuition becomes unreliable. Consider a system that retrieves information, reasons over it, and takes action. Each step in that process can be implemented correctly. Retrieval returns relevant data. The reasoning step produces plausible conclusions. The action is executed successfully. But correctness at each step does not guarantee correctness of the sequence.

The system might retrieve information that is contextually valid but incomplete or misaligned with the current task. The reasoning step might interpret it in a way that is locally consistent but globally misleading. The action might reinforce that interpretation by feeding it back into the system’s context. Each step is valid. The trajectory is not. This is what behavioral drift looks like in practice: locally correct decisions producing globally misaligned outcomes.

In these systems, correctness is no longer a property of individual steps. It is a property of how those steps interact over time. This breakdown is subtle but fundamental. It means that testing individual components, even exhaustively, does not guarantee that the system will behave correctly when those components are composed into a continuously operating whole.

Behavior emerges over time

To understand why this happens, it helps to look at where behavior actually comes from. In many modern AI systems, behavior is not encoded directly in a single component. It emerges from interaction:

  • Models generate outputs based on context
  • Retrieval systems shape that context
  • Planners sequence actions based on those outputs
  • Execution layers apply those actions to external systems
  • Feedback loops update the system’s state

Each of these elements operates with partial information. Each contributes to the next state of the system. The system evolves as these interactions accumulate. This pattern is especially visible in LLM-based and agentic AI systems, where context assembly, reasoning, and action selection are dynamically coupled. Under these conditions, behavior is dynamic and path dependent. Small differences early in a sequence can lead to large differences later on. A slightly suboptimal decision, repeated or combined with others, can push the system further away from its intended trajectory.

This is why behavior cannot be fully specified ahead of time. It is not simply implemented; it is produced. And because it is produced over time, it can also drift over time.
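
Path dependence of this kind is easy to see in a toy simulation. The sketch below (all numbers are illustrative, not drawn from any real system) runs two otherwise identical decision loops, one with a tiny per-step bias; every individual step stays within a tight local tolerance, yet the trajectories end up far apart:

```python
import random

def step(state, bias=0.0):
    # One decision: a small, individually reasonable adjustment.
    # Every step stays inside a tight local tolerance (< 0.06).
    return state + random.uniform(-0.05, 0.05) + bias

def run(bias, steps=200, seed=0):
    random.seed(seed)  # identical noise in both runs
    state, trajectory = 0.0, [0.0]
    for _ in range(steps):
        state = step(state, bias)
        trajectory.append(state)
    return trajectory

aligned = run(bias=0.0)
drifting = run(bias=0.01)  # differs only by a tiny per-step bias

# Each individual transition looks fine in both runs...
print(max(abs(b - a) for a, b in zip(drifting, drifting[1:])))  # < 0.06
# ...but after 200 steps the endpoints differ by steps * bias = 2.0.
print(abs(drifting[-1] - aligned[-1]))
```

No step-level inspection of the drifting run would distinguish it from the aligned one; only the accumulated trajectory reveals the difference.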

Observability without alignment

Modern observability systems are very good at telling us what a system is doing. We can measure latency, throughput, and resource utilization. We can trace requests across services. We can inspect logs, metrics, and traces in near real time. In many cases, we can reconstruct exactly how a particular outcome was produced. These signals are essential. They allow us to detect failures that disrupt execution. But they are tied to a particular model of correctness. They assume that if execution proceeds without errors and if performance remains within acceptable bounds, then the system is behaving as expected.

In systems exhibiting behavioral drift, that assumption no longer holds. A system can process requests efficiently while producing outputs that are progressively less aligned with its intended purpose. It can meet all its service-level objectives while still moving in the wrong direction. Observability captures activity. It does not capture alignment.

This distinction becomes more important as systems become more autonomous. In AI-driven systems, particularly those operating as long-lived agents, this gap between activity and alignment becomes operationally significant. The question is no longer just whether the system is working. It is whether it is still doing the right thing. This gap between activity and alignment is where many modern systems begin to fail without appearing to fail.

The limits of step-level validation

A natural response to this problem is to add more validation. We can introduce checks at each stage:

  • Validate retrieved data.
  • Apply policy checks to model outputs.
  • Enforce constraints before executing actions.

These mechanisms improve local correctness. They reduce the likelihood of obviously incorrect decisions. But they operate at the level of individual steps.

They answer questions like:

  • Is this output acceptable?
  • Is this action allowed?
  • Does this input meet requirements?

They do not answer:

  • Does this sequence of decisions still make sense as a whole?

A system can pass every validation check and still drift. Behavioral drift is not caused by invalid steps. It is caused by valid steps interacting in ways we did not anticipate. Increasing validation does not eliminate this problem. It only shifts where the problem appears, often pushing it further downstream, where it becomes harder to detect and correct.
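
A minimal illustration of this gap, with made-up tolerances: a step-level validator that accepts every transition, alongside a trajectory-level check that the same sequence fails:

```python
def validate_step(prev, new, tol=0.1):
    # Step-level check: each individual change must be small.
    return abs(new - prev) <= tol

def validate_trajectory(states, budget=1.0):
    # Trajectory-level check: total displacement from the
    # intended starting point must stay within a budget.
    return abs(states[-1] - states[0]) <= budget

# Thirty tiny, always-permissible adjustments in the same direction.
states = [0.0]
for _ in range(30):
    states.append(states[-1] + 0.08)  # each step passes the step check

every_step_valid = all(validate_step(a, b) for a, b in zip(states, states[1:]))
print(every_step_valid)             # True: no step is ever rejected
print(validate_trajectory(states))  # False: the sequence drifted 2.4
```

The step validator has nothing to object to; the failure only exists at the level of the sequence, which is exactly the level most validation pipelines do not inspect.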

Coordination becomes the system

If correctness does not compose automatically, then what determines system behavior? Increasingly, the answer is coordination. In traditional distributed systems, coordination refers to managing shared state, ensuring consistency, ordering operations, and handling concurrency. In autonomous systems, coordination extends to decisions.

The system must coordinate:

  • Which information is used
  • How that information is interpreted
  • What actions are taken
  • How those actions influence future decisions

This coordination is not centralized. It is distributed across models, planners, tools, and feedback loops. In agentic AI architectures, this coordination spans model inference, retrieval pipelines, and external system interactions. The system’s behavior is not defined by any single component. It emerges from the interaction between them.

In this sense, the system is no longer just the sum of its parts. The system is the coordination itself. Failures arise not from broken components, but from the dynamics of interaction: timing, sequencing, feedback, and context. This also explains why small inconsistencies can propagate and amplify. A slight mismatch in one part of the system can cascade through subsequent decisions, shaping the trajectory in ways that are difficult to anticipate or reverse.

Control planes introduce structure, not assurance

One response to this complexity is to introduce more structure. Control planes, policy engines, and governance layers provide mechanisms to enforce constraints at key decision points. They can validate inputs, restrict actions, and ensure that certain conditions are met before execution proceeds. This is an important step. Without some form of structure, it becomes difficult to reason about system behavior at all. But structure alone is not sufficient.

Most control mechanisms operate at entry points. They evaluate decisions at the moment they are made. They determine whether a particular action should be allowed, whether a policy is satisfied, and whether a request can proceed. The problem is that many of the failures in autonomous systems do not originate at these entry points. They emerge during execution, as sequences of individually valid decisions interact in unexpected ways. A control plane can ensure that each step is permissible. It cannot guarantee that the sequence of steps will produce the intended outcome. This distinction is subtle but important: control provides structure, but not assurance.

From events to trajectories

Traditional monitoring focuses on events. A request is processed. A response is returned. An error occurs. Each event is evaluated independently. In systems exhibiting behavioral drift, behavior is better understood as a trajectory. A trajectory is a sequence of states connected by decisions. It captures how the system evolves over time. Two trajectories can consist of individually valid steps and still produce very different outcomes. One remains aligned. The other drifts. This represents a shift from failure as an event to failure as a trajectory, a distinction that traditional system models are not designed to capture.

Correctness is no longer about individual events. It is about the shape of the trajectory. This shift has implications not just for how we monitor systems, but for how we design them in the first place.

Detecting drift and responding in motion

If failure manifests as drift, then detecting it requires a different set of signals. Instead of looking for errors, we need to look for patterns:

  • Changes in how similar situations are handled
  • Increasing variability in decision sequences
  • Divergence between expected and observed outcomes
  • Instability in response patterns

These signals are not binary. They do not indicate that something is broken. They indicate that something is changing. The challenge is that change is not always failure. Systems are expected to adapt. Models evolve. Data shifts. The question is not whether the system is changing. It is whether the change remains aligned with intent. This requires a different kind of visibility, one that focuses on behavior over time rather than isolated events.

Once drift is identified, the system needs a way to respond. Traditional responses (restart, rollback, stop) assume failure is discrete and localized. Behavioral drift is neither.
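
One way to sketch such a signal is a rolling comparison of expected versus observed outcomes, flagging the point where the average deviation crosses a threshold. The window size and threshold below are placeholders to be tuned per system, not recommendations:

```python
from collections import deque

def drift_monitor(expected, observed, window=20, threshold=0.5):
    # Return the first index where the rolling mean deviation between
    # expected and observed outcomes exceeds the threshold, else None.
    recent = deque(maxlen=window)
    for i, (e, o) in enumerate(zip(expected, observed)):
        recent.append(abs(o - e))
        if len(recent) == window and sum(recent) / window > threshold:
            return i
    return None

# No single comparison is an error; the deviation just keeps growing.
expected = [1.0] * 100
observed = [1.0 + 0.02 * i for i in range(100)]
print(drift_monitor(expected, observed))  # → 35, well before step 100
```

Note that the monitor never asks whether any individual outcome is valid, only whether the pattern of deviations is trending away from intent.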

What is needed is the ability to influence behavior while the system continues to operate. This might involve constraining action space, adjusting decision selection, introducing targeted validation, or steering the system toward more stable trajectories. These are not binary interventions. They are continuous adjustments.

Control as a continuous process

This perspective aligns with how control is handled in other domains. In control systems engineering, behavior is managed through feedback loops. The system is continuously monitored, and adjustments are made to keep it within desired bounds. Control is no longer just a gate. It becomes a continuous process that shapes behavior over time.

This leads to a different definition of reliability. A system can be available, responsive, and internally consistent—and still fail if its behavior drifts away from its intended purpose. Reliability becomes a question of alignment over time: whether the system remains within acceptable bounds and continues to behave in ways consistent with its goals.
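
In that control-systems spirit, behavior can be kept within bounds by continuously nudging the state back toward its intended setpoint rather than gating individual steps. A toy sketch, with an illustrative per-step drift bias and an arbitrary proportional gain:

```python
def controlled_run(steps=200, bias=0.01, gain=0.2):
    # Each iteration the state drifts by `bias`; a proportional
    # correction continuously nudges it back toward the setpoint 0.0.
    state = 0.0
    for _ in range(steps):
        state += bias - gain * state
    return state

# Uncorrected, the bias would compound to steps * bias = 2.0.
# With continuous correction, drift settles near bias / gain = 0.05.
print(round(controlled_run(), 3))  # → 0.05
```

The point of the sketch is the shape of the intervention: a continuous, proportional adjustment bounds the drift without ever rejecting a single step.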

What this means for system design

If behavior is trajectory-based, then system design must reflect that. We need to monitor patterns, understand interactions, treat behavior as dynamic, and provide mechanisms to influence trajectories. We are very good at detecting failure as breakage. We are much less equipped to detect failure as drift. Behavioral drift accumulates gradually, often becoming visible only after significant misalignment has already occurred.

As systems become more autonomous, this gap will become more visible. The hardest problems will not be systems that fail loudly, but systems that continue working while gradually moving in the wrong direction. The question is no longer just how to build systems that work. It is how to build systems that continue to work for the reasons we intended.

13:28

Freexian Collaborators: Monthly report about Debian Long Term Support, March 2026 (by Santiago Ruano Rincón) [Planet Debian]

The Debian LTS Team, funded by [Freexian’s Debian LTS offering](https://www.freexian.com/lts/debian/), is pleased to report its activities for March.

Activity summary

During the month of March, 20 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).

The team released 24 DLAs fixing 250 CVEs.

We also welcomed two new members to the team, Lukas Märdian and Emmanuel Arias, who actually started contributing to the LTS project several months ago.

The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 (“bullseye”), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 (“bookworm”) and Debian 13 (“trixie”)), including Debian unstable. We highlight several notable security updates below.

  • ansible (DLA 4502-1), prepared by Lee Garret in collaboration with Jochen, fixing a vulnerability that allows attackers to bypass unsafe content protections.
  • asterisk (DLA 4515-1), prepared by Lukas Märdian, fixing four CVEs that include possible privilege escalations.
  • gimp (DLA 4500-1), prepared by Thorsten, fixing four CVEs related to denial of service or execution of arbitrary code.
  • gst-plugins-base1.0 and gst-plugins-ugly1.0 (DLA-4514-1 and DLA-4516-1, respectively), both prepared by Utkarsh, addressing vulnerabilities that may lead to arbitrary code execution.
  • imagemagick (DLA 4497-1), prepared by Bastien Roucariès, fixing multiple vulnerabilities that could lead to information leaks, bypass of security policies, denial of service or arbitrary code execution.
  • libpng1.6 (DLA 4521-1), prepared by Tobias Frost, fixing an arbitrary code execution vulnerability.
  • linux: Ben Hutchings released DLA 4498-1 and DLA 4499-1 for linux 5.10 and linux 6.1, respectively. Those updates notably address the “CrackArmor” flaw.
  • ruby-rack (DLA 4505-1), prepared by Utkarsh Gupta, addressing two vulnerabilities.
  • strongswan (DLA 4512-1), prepared by Thorsten Alteholz, fixing a denial of service vulnerability.
  • roundcube (DLA 4517-1), prepared by Guilhem Moulin, who discovered that one of the fixes provided by upstream was incomplete.

Contributions from outside the LTS Team:

As usual, the thunderbird update, released as DLA 4511-1, was prepared by its maintainer Christoph Goehre. Many thanks for his continued contributions.

The LTS Team has also contributed with updates to the latest Debian releases:

  • Andreas Henriksson completed the uploads of glib2.0 for both trixie and bookworm
  • Arnaud Rebillout: python-cryptography for trixie
  • Arnaud and Bastien worked together to prepare a ca-certificates-java release for unstable
  • Bastien completed the upload of gpsd for trixie that was proposed in January.
  • Bastien uploaded a regression update of apache2 for trixie
  • Bastien prepared a zabbix point update for trixie
  • Bastien, in collaboration with Markus, released netty updates for trixie and bookworm (DSA 6160-1)
  • Daniel Leidert proposed python-tornado releases for both trixie and bookworm.
  • Daniel also prepared a python-authlib update for trixie
  • Guilhem prepared a mapserver update for bookworm.
  • Lucas Kanashiro proposed merge requests to fix three CVEs in erlang for both trixie and bookworm
  • Sylvain Beucler continued the work to replace p7zip with 7zip in the different supported releases, and proposed a point update for bookworm
  • Tobias prepared trixie and bookworm security updates, released as DSA-6189-1
  • Utkarsh prepared trixie and bookworm security update for ruby-rack, released as DSA-6180-1

Individual Debian LTS contributor reports

Thanks to our sponsors

Sponsors that joined recently are in bold.

13:07

CodeSOD: Lint Brush Off [The Daily WTF]

A few years back, C# added the concept of "primary constructors". Instead of declaring the storage for class members and then initializing them in the constructor, you can declare the constructor parameters directly on the class declaration, and C# automatically generates a constructor for you. It's all very TypeScript and very Microsoft, and it certainly cuts down on some boilerplate.

Esben B's team isn't really using them in many places, but they are using a linter which is opinionated about them. So this in-line constructor causes the linter to complain:

    public DocumentNetworkController(ILookupClient service)

The linter wants you to switch this to a primary constructor. Esben didn't want to do that, and didn't want to change the global linter configuration, and so added a pragma to disable that particular warning:

#pragma warning disable IDE0290 // Use primary constructor
    public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290

The linter didn't like this. It threw a new warning: that this suppression wasn't needed. Which was news to Esben, as clearly the suppression was needed if you wanted to make the warnings go away. The obvious solution was to disable the warning that you didn't need to disable the warning:

#pragma warning disable IDE0079, IDE0290 // Use primary constructor
    public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290, IDE0079

Except this doesn't work. These pragmas take effect on the next line, which means you can't disable IDE0079 on the same line as IDE0290 and expect it to work. Which means the final version of the code looked like this:

#pragma warning disable IDE0079 // Disable warning about unneeded suppression
#pragma warning disable IDE0290 // Use primary constructor
    public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290, IDE0079

Esben writes:

So the nice recommendation to use a primary ctor ended up with 3 lines of annoying boilerplate code. Good times \o/

While yes, this is frustrating, I will say there's an element of "when the table saw keeps taking fingers off, that may be more of a you problem." I don't know the details, so I can't say, "just change the linter config or adopt its recommendation" and claim that the problem goes away, but when the tool hurts you, it's a definite sign of one of two things: it's either the wrong tool, or you're using it wrong.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

12:14

Spectre [RevK®'s ramblings]

A "Spectre" is a new shape.

Yes, I did say new shape! It is pretty incredible that there can be any such thing as a new shape. I mean, how do we not know all the shapes already? Also, to be fair, it is a reasonable bet that the ancient Greeks knew of this and forgot to write anything down. But certainly in my lifetime this is indeed a new shape. Discovered in 2023!

So what makes it special? There is, of course, a whole Wikipedia article on what is called the Einstein Problem, but I'll try to explain it simply.

This shape tessellates. Basically that means you can tile your bathroom wall with it - the shape fits together with itself to cover a surface with no gaps. Lots of shapes do this: squares, triangles, hexagons, and so on. You can rotate the tiles if needed (for triangles you have to). There are many shapes that do not, such as regular pentagons and circles. You cannot tile a wall with circles without leaving gaps.

So what? Well, most tessellating shapes create a repeating pattern. Hexagons make a familiar honeycomb pattern for example. But with the Spectre you can make a pattern that does not repeat. In fact, you cannot make a repeating pattern with it at all, no matter how hard you try. Yes, some groups of tiles may appear the same in other places but even then these do not form a regular pattern, at any level.

There is some debate over the rules - this was, it seems, a competition. The rules allowed you to turn a tile over. The researchers created a shape called the "Hat" which worked, but some of the tiles had to be turned over. People, quite reasonably, said "If I want to tile my bathroom wall I have to buy two sets of tiles". So the researchers came out with the "Spectre" a year later, and that works without turning over tiles. In fact, if you can turn tiles over, you can make a repeating pattern with it.

But basically, until this was discovered, nobody knew whether a forced aperiodic tessellating monotile shape even existed. That is what makes it a new shape.

You can now tile your bathroom wall with one type of tile and it is a non repeating pattern.

But how?

Well, you could just try placing randomly where they fit, but you quickly end up with gaps that are not Spectre shaped, and have to back track and try again.

However, there is an algorithm, published by Simon Tatham, here. I'd like to thank him for his work, though I have a word of caution if you want to use his paper. I also appreciate, as a coder, the counting from 0 all the way through.

It just so happens I had an idea how to use this shape, for reasons which will become apparent later this year I hope. So I wanted to code generating a surface covered with these tiles. Long story short - here it is, open source on Codeberg.

But this took me a couple of days, which is a long time for me, so let me explain the issues.

The principle is simple: a recursive set of meta tiles, which are groups of tiles in a pattern (represented as hexagons in the paper).

You can start at the top, pick a meta tile from a set of 9 different types, and that tells you how to place 7 or 8 sub-tiles in a honeycomb pattern, along with their types (from the set of 9) and orientations. You repeat as far as you want, and at the last level you actually have Spectre tiles, not hexagons.

You can also start at the bottom with a Spectre, and decide which of the meta tile types it shall be at random. You can then find a meta tile which includes that type, and it tells you the neighbouring Spectre tiles to place. This is then a meta tile which you can again decide is part of a higher level meta tile randomly, and that tells you what meta tiles to place for its neighbours and work down to Spectre tiles below. So you have one upward process in a loop, and at each point have a recursive downward process placing 6 or 7 neighbours at each level down. This is the approach I took.

If I started at the top I would pick one of 9 meta tiles, and maybe one of 12 orientations, and that would be it, the Spectres under that are determined by the algorithm and not any more random. By starting at the bottom, I pick one of 12 orientations and place the first tile, and pick one of 9 meta tiles, but at each level as I go up I get to pick which higher meta tile it is in, and in some cases which of two sub tiles it is. This is random at each level and makes for a much more randomised final output.

So what was confusing me?

The distraction that took most of my time trying to get this working is the rather excellent graphic representations in Simon's paper. They show a hexagon meta tile substituted with 7 or 8 joined hexagon meta tiles, and a hexagon meta tile substituted with a Spectre tile. These diagrams have specific orientation and rotation, so one is lulled into a sense of simplicity: that you are literally replacing one hexagon with a set of them, each with a specific orientation.

Looks pretty, and simple, but this is NOT the case!

The diagrams are actually simply a mapping, a look-up table for what gets joined to what and on what side. The hexagon has 6 sides; basically, at the lowest level, each Spectre is joined to exactly 6 other Spectre tiles (there is a special case for the G meta tile where it is two Spectre tiles; the others are all one, just to add to the fun). So you have each Spectre tile as having 6 connection sets of edges - but these are not simple, as each of the 9 types of meta tile is a Spectre with a specific set of edges for each of the 6 sides.

The numbering is the key - on the yellow tile there is a side 0, which is actually the three edges 8, 7, and 6 (marked 0.0, 1.0, and 2.0). On the purple, there is a side 4, which is edges 13, 12, and 11 (marked 0.4, 1.4, and 2.4). But side 4 on the yellow tile is only edges 12 and 11 (marked 0.4 and 1.4). But you can see yellow side 0 and purple side 4 would fit together. Some of these 6 sides are one edge (see purple side 3), but can be as many as 6 edges in some cases. Each of the 9 meta tiles has a specific set of edges making up the 6 joining points to other tiles. Each similarly has a set of edges on the hexagon pattern, which is different for each type.

So in practice you connect the defined edges, and they end up nothing like hexagonal tiles. In fact they twist and distort all over the place. The graphical representations are really not helpful in my view, sorry. Also, I would have numbered side.subedge so 1.0, 1.1, 1.2, not 0.1, 1.1, 2.1, personally.

Once I grasped that logic, the code became simple. As I say, you start with one Spectre, and connect neighbours. You only need to know the specific 6 sets of edges for that tile. Then when you use the meta tile rules you know which set of edges that connects to on the neighbouring Spectre. It is pretty simple to then align the new Spectre connected on that edge. Having placed the 7 or 8 Spectres to make a meta tile, you then just need to know the 6 joining points on that meta tile, which are themselves 6 sets of specific Spectre tile edges within the meta tile.

One issue is that these connecting sides span several edges, so I actually picked one end: e.g. for yellow it would be 8, 5, 2, 0, 13, 12, 10, and for purple it would be 8, 5, 3, 0, 13, 10 as the 6 outgoing edges. These are the first edge of each side (numbered 0.x). When placing a Spectre next to one of these, you pick the other ends, so 6, 3, 1, 13, 11, 9 for yellow and 6, 4, 1, 0, 11, 9 for purple, the last edge for each side.

So connecting yellow side 0 to purple side 4, you connect yellow edge 8 to purple edge 11. The 11 is the incoming edge. This means you don't have to think of the sets of edges, just one edge on one Spectre tile for each of the 6 outgoing sides of your meta tile, at any level. This is quite a small amount of data to hold in a simple recursive algorithm.

Another thing I got wrong is that I stored a list of tiles, and referenced them as the 6 sides, each tile with a starting point and rotation so I could plot it and align new attached tiles. But this really is not necessary, and ends up using memory. I can plot the tiles (output a path to SVG) as I go, and I just need the 6 sides of a meta tile to be 6 sets of position, rotation, and outgoing edge number. The only memory usage is a small set of data for each level of recursion. You quickly cover a very large number of tiles in each level (multiply by 8 or 9 each time), so you need very few levels of recursion.

Co-ordinates

One issue is coordinates. Ultimately the output uses pixels or millimetres to several decimal places, and indeed I allow a final output rotation. But internally all lines on the Spectre tiles are at multiples of 30 degrees. Even so, you do not want to use floating point - rounding errors will accumulate as you recurse and lead to tiles not quite aligning, and it also becomes impossible to test whether two points are the same (why you need this is explained below). So the solution is to use coordinates that are integers! How do you do that with 30 degree angles? Simple - each distance is an integer multiple of sin60 plus an integer multiple of cos60 - at the final stage you multiply these out and add them. You can also make a simple table of one-unit-distance integers for each 30 degree angle, and a table of the relative integer offsets for each point on a Spectre at each angle. This means no floating point maths, nor sin/cos, until you actually output to SVG.
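
A minimal sketch of this exact-arithmetic idea (not the actual Codeberg code): store each scalar as an integer pair (a, b) meaning (a + b·√3)/2. Since sin and cos of every multiple of 30 degrees have that form, the representation is closed under unit steps at any allowed angle, and point equality is plain integer comparison.

```python
import math

# cos/sin of k*30 degrees as integer pairs (a, b), value = (a + b*sqrt(3))/2.
# E.g. cos 0 = 1 -> (2, 0); cos 30 = sqrt(3)/2 -> (0, 1); cos 60 = 1/2 -> (1, 0).
COS = [(2,0),(0,1),(1,0),(0,0),(-1,0),(0,-1),(-2,0),(0,-1),(-1,0),(0,0),(1,0),(0,1)]
SIN = [(0,0),(1,0),(0,1),(2,0),(0,1),(1,0),(0,0),(-1,0),(0,-1),(-2,0),(0,-1),(-1,0)]

def step(pos, angle_index):
    """Advance one unit in direction angle_index * 30 degrees, exactly."""
    (xa, xb), (ya, yb) = pos
    ca, cb = COS[angle_index % 12]
    sa, sb = SIN[angle_index % 12]
    return ((xa + ca, xb + cb), (ya + sa, yb + sb))

def to_float(pos):
    """Convert to ordinary floats - only at SVG output time."""
    (xa, xb), (ya, yb) = pos
    r3 = math.sqrt(3)
    return ((xa + xb * r3) / 2, (ya + yb * r3) / 2)

# Walk a full hexagon: six unit edges, turning 60 degrees each time. We must
# land exactly on the start, and the integer representation lets us test that
# with ==, not an epsilon.
origin = ((0, 0), (0, 0))
pos = origin
for k in range(6):
    pos = step(pos, 2 * k)  # headings 0, 60, 120, 180, 240, 300 degrees
print(pos == origin)  # prints True: exact closure, no rounding drift
```

The same trick extends to whole tile outlines: keep a table of these integer offsets per vertex per rotation, and only `to_float` when writing the SVG path.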

Finishing

One problem which I don't think Simon's paper covers, and which was unexpected, is knowing when to stop! I am trying to cover a rectangular area. How do I know I have got there?! I could just set a maximum level, but with random choices of meta tile at each level, the whole thing quickly gets way bigger than the area while ending up one-sided, leaving gaps in your rectangle. If I had gone top down, or always picked the current tile to be in the middle of the meta tile, I could maybe work out a maximum level, but that is not what I am doing.

After a bit of head scratching I finally worked out a way. I wanted to make a grout line on top of the final tile output so I decided to keep track of all the edges I placed. A simple start/end for each unit length edge in a list. This can be made as I go along, and the integers mean I can always match to an existing edge to plot the grout efficiently as a series of lines.

This also meant I could actually keep two lists - one a list of first use of an edge, and then moving over to another list of second use of an edge, when a tile is attached the other side.

I could also check each edge I added to a list to see if it falls (even one end) within my target rectangle, and so only keep edges I need.

But this has the side effect that as soon as my list of single-use edges within the rectangle is empty, I must have 100% covered the rectangle, as no edges with a tile on only one side remain in the target area. I can then immediately abort the whole placement process at every level just by checking that the list of single-use edges is empty.
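
A toy sketch of that bookkeeping (unit squares stand in for Spectres here; the real code tracks unit-length Spectre edges, but the open/closed logic is the same idea). Because coordinates are exact, an edge can be a set key directly:

```python
def edge_key(p, q):
    # normalise endpoint order so the edge hashes the same from either side
    return (p, q) if p <= q else (q, p)

def in_rect(p, rect):
    x, y = p
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

class Coverage:
    def __init__(self, rect):
        self.rect = rect
        self.open_edges = set()  # edges with a tile on one side only

    def place_tile(self, vertices):
        """Register a tile given as a list of integer vertex coordinates."""
        n = len(vertices)
        for i in range(n):
            e = edge_key(vertices[i], vertices[(i + 1) % n])
            if e in self.open_edges:
                self.open_edges.discard(e)  # second use: edge now interior
            elif in_rect(e[0], self.rect) or in_rect(e[1], self.rect):
                self.open_edges.add(e)      # first use, and we care about it

    def done(self):
        # no open edges left in the target area => the area is fully covered
        return not self.open_edges

def square(x, y):
    return [(x, y), (x + 1, y), (x + 1, y + 1), (x, y + 1)]

# Four unit squares around the target point (1, 1): every tracked edge ends
# up with a tile on both sides, so the placement loop can stop.
cov = Coverage((1, 1, 1, 1))
for sx, sy in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    cov.place_tile(square(sx, sy))
print(cov.done())  # prints True
```

The grout lines fall out for free: the first-use list is exactly the set of edges still exposed, and the edges moved to "second use" are the shared boundaries to draw grout along.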

Cropping

The final challenge was the edge of rectangle. Firstly the SVG has a viewBox, and so I can simply plot tiles that fall even slightly within the rectangle, and the same for grout lines. These go off the edge, but you cannot see when looking at the final SVG.

This has a problem, if you want to use the SVG in another design, as I did, the embedding does not inherently crop the edges. But SVG has an answer for this, clipPath. It allows me to clip an object to a path, a rectangle in this case. Perfect.

The snag is that support for clipPath is not that good. I don't know why, but lots of things mess up, ignoring it or barfing in some other way. One was my resin printer, which simply ignored the whole block of tiles if it had a clipPath.

So I ended up making a whole path generation set of functions which understood cropping the path to the edge of the rectangle. I could not sleep, and ended up coding this at 2am.

The final result is I can now make an SVG of a randomised set of tessellating aperiodic Spectre monotiles, with loads of options. I even added a sort of bevel edge to the tiles with a lighting based shade.

12:07

What Anthropic’s Mythos Means for the Future of Cybersecurity [Schneier on Security]

Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a limited number of companies.

The news rocked the internet security community. There were few details in Anthropic’s announcement, angering many observers. Some speculate that Anthropic doesn’t have the GPUs to run the thing, and that cybersecurity was the excuse to limit its release. Others argue Anthropic is holding to its AI safety mission. There’s hype and counterhype, reality and marketing. It’s a lot to sort out, even if you’re an expert.

We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.

How AI Is Changing Cybersecurity

We’ve written about shifting baseline syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.

The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a while this kind of capability was coming soon. The question is how we adapt to it.

We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more nuanced than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.

Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.

So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also provides us guidance for how to protect such systems in an era of powerful AI vulnerability-finding tools.

Unpatchable or hard to verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly updated firewall, not freely talking to the internet.

Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.

Rethinking Software Security Practices

This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive AI agents to test exploits against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed. This kind of VulnOps is likely to become a standard part of the development process.

Documentation becomes more valuable, as it can guide an AI agent on a bug-finding mission just as it does developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral instant software—code that can be generated and deployed on demand.

Will this favor offense or defense? The defense eventually, probably, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. Legacy banking and airline systems are networked.

Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal: where verification is paramount and software is patched continuously.

This essay was written with Barath Raghavan, and originally appeared in IEEE Spectrum.

10:35

Puddles [Seth's Blog]

When there is motion, it creates an impact on the environment.

First, the path is barely noticeable. But then, others see the hint of a path and walk on it, making it more clear. Finally, the path becomes the route.

Sometimes there’s a small rut. But a rut shifts gravity and wheels or feet land in the rut, making it deeper. This is how moguls appear on ski hills as well.

When it rains, the paths and ruts fill with water, and we call them puddles.

Of course, puddles are a metaphor.

Puddles only exist when there’s been some sort of motion that caused a depression that could collect the water. If you want to see how the audience is responding, how the culture is shifting, how your customers are acting–look for the puddles.

Fill in the rut and a new one will appear somewhere else. There are almost always puddles.

10:21

Abhijith PA: Patience could've saved me time. [Planet Debian]

If I had been patient, it would have saved me time. One such instance follows.

From my early blogs, you might know I use mutt to do email. Soon after I got along with mutt, I started using notmuch, because limit search in mutt is always a pain when you have multiple folders. And what better tool out there than notmuch-mutt to bind these two together.

notmuch-mutt provides three macros by default.

macro index <F8> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<shell-escape>notmuch-mutt -r --prompt search<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: search mail"
macro index <F9> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt -r thread<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: reconstruct thread"
macro index <F6> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt tag -- -inbox<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: remove message from inbox"

One for search, one for reconstructing threads and one for manipulating tags, which I missed.

Now for my impatient part. I had already mapped f6 for my folder movements, and in my initial days with notmuch I only used search, so I never cared about the f6 macro provided by notmuch-mutt. As time went by I got very comfortable with notmuch. I was stretching my notmuch legs, and started to live more on notmuch search results (date:today tag:unread) than on the mutt index. Now to the problem: since notmuch-mutt dumps all results to a temporary maildir location, flag changes can't be performed back on the original maildir. That was annoying, because you need to distinguish which mail you have read and which you haven't when you are subscribed to most of the Debian mailing lists.

I was under the impression that notmuch-mutt was not capable of doing so, and I just went on like that without checking the docs. I started doing all kinds of crazy hacks to sync these maildirs.

I even started reading notmuch-mutt codebase.

Later, I settled on notmuch-vim, because it can sync flag changes back from notmuch to the maildir.

And while searching for something, I accidentally revisited the notmuch-mutt macro page and saw the tag manipulation. I was like :( .

If I had patiently read about the third macro when I added it to my config, I could have saved time by not doing ugly hacks around it.

I think I learned my lesson.

09:35

Mustang VixSkin® Review by Jey Pawlik [Oh Joy Sex Toy]

Mustang VixSkin® Review by Jey Pawlik

Save a horse, ride a Mustang VixSkin® dildo from Vixen Creations! Join me on this review of the Mustang, my valiant steed for so many years. I was actually surprised that there hadn’t been a review on OJST about this already, so I dove in and gave this dildo the cowboy review it needed! Actually, I’ve […]

07:35

Pluralistic: Vicky Osterweil's "The Extended Universe" (28 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



The Haymarket Books cover for Vicky Osterweil's 'The Extended Universe.'

Vicky Osterweil's "The Extended Universe" (permalink)

Vicky Osterweil's The Extended Universe: How Disney Killed the Movies and Took Over the World makes the kind of long, polemical, startling and illuminating argument that defines great cultural criticism; it's the sort of book that encapsulates the reasons I read criticism in the first place:

https://www.haymarketbooks.org/books/2525-the-extended-universe

My first brush with this kind of criticism came more than two decades ago, when I read John Kessel's now-classic "Creating the Innocent Killer," a critique of Orson Scott Card's Ender's Game, a book I had read and enjoyed enough to re-read several times:

https://johnjosephkessel.wixsite.com/kessel-website/creating-the-innocent-killer

Kessel's argument is that Card used Ender's Game to smuggle in some very ugly ideas, wrapped in a story that was compelling, even exhilarating. In Ender's Game, we meet Andrew "Ender" Wiggin, a small, physically weak boy possessed of a prodigious intellect and a great deal of sensitivity and empathy. Ender is tormented by an escalating series of aggressors, whom he retaliates against with overwhelming force, first to the point of lethality and then all the way to literal genocide. And here's where Card makes his move: Ender's sensitivity and empathy and intellect tell him that he must respond this way, because he can tell that his aggressors will not back off from their intention to harm him; and because Ender is so small and weak, he has to use whatever tactic his brilliant mind can devise, and if that tactic results in the death penalty for mere bullying, well, that's the bully's fault, not Ender's. Indeed, in dying at Ender's hands, these bullies re-victimize Ender, because Ender is a gentle, smart, wise, weak person, and these inescapable murders that he is goaded into committing are a stain on his soul that he can never wash away.

Before reading "Creating the Innocent Killer," I confess I didn't really understand what criticism was for. Like many people, I conflated "criticism" with "reviews," thinking of critical works as a species of inconveniently difficult-to-digest essays that might help me figure out which books to read and which movies to see.

Kessel's magnificent essay changed all that, and not in spite of the fact that Kessel had pointed out some very important problems with a book that I loved, but because of that fact. In helping me understand the ugliness hidden within something whose beauty and virtues I saw very clearly, Kessel taught me more about myself – about where my aesthetics and my values overlapped, and where they diverged. It was literally life-changing.

Like Kessel's essay, Osterweil's 'Extended Universe' deals with media that I have a great deal of affection for – the products of the Walt Disney Company. Though I'm primarily interested in theme parks – I love a big, ambitious built environment of any description and Disney pursues these with a seriousness that few others can touch – the Disney films (and the films of the studios Disney purchased, like Marvel and Lucasfilm) are obviously intimately bound up in those theme park designs.

Osterweil has her own ambivalent affection for these movies. Like so many of us, she's been raised on them, and they've shaped how she sees the world and its stories. But – like me – Osterweil is deeply suspicious of capitalism, American imperialism, and the notion of "intellectual property," and she uses reviews of a dozen Disney films to make the case that Walt Disney and the studio he founded with his brother are standards-bearers for these odious forces, and not just in the overt ways that might immediately spring to mind, but also in subtle ways that can be teased out of a close reading of the films.

In so doing, Osterweil also makes a sharp and well-argued case that intellectual property, colonialism and racial oppression are all facets of the same drive, the drive of people who fancy themselves born to rule to dominate others, which requires that those others also be dehumanized and their work denigrated. When Walt Disney insisted that his be the only name associated with "his" movies, he was playing out the same logic that underpinned his virulent opposition to labor unions and his participation in American imperialism in Latin America.

As with Kessel, Osterweil's argument is full of surprises and illuminations that are especially vivid for those of us who have great affection for these works. As her chapter on Black Panther shows, this contradiction need not go unresolved. There is plenty of scope for fans to seize the reins of the narrative (and as her chapter on the reactionary backlash to the later Star Wars movies shows, it's not just the forces of progress and anti-racism who can pull off this move).

Like the very best criticism, Osterweil's book is more than a way to deepen your understanding of the material she dissects – it's a way to deepen your understanding of the world that produced it, and to deepen your understanding of yourself.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Frank Zappa’s anti-censorship letter https://www.flickr.com/photos/mudshark/117551768/in/set-72057594090059726/

#15yrsago Chemistry kit with no chemicals https://web.archive.org/web/20110427212354/http://blog.makezine.com/archive/2011/04/chemistry-set-boasts-no-chemicals.html

#15yrsago Russian corruption: crooked officials steal multi-billion-dollar company, $230M tax refund, then murder campaigning lawyer https://web.archive.org/web/20110426045152/http://www.foreignpolicy.com/articles/2011/04/20/russia_s_crime_of_the_century?

#15yrsago Golden-age short-change cons https://web.archive.org/web/20110429014539/https://blog.modernmechanix.com/2011/04/26/tricks-of-short-change-artists/

#10yrsago Campaigners search Londoners’ phones to help them understand the Snoopers Charter https://www.youtube.com/watch?v=szN7DlmMLYg

#10yrsago Mitsubishi’s dieselgate: cheating since 1991 https://web.archive.org/web/20160427145038/https://www.cnet.com/roadshow/news/mitsubishi-cheated-fuel-economy-tests-since-1991/#ftag=CAD590a51e

#10yrsago Bellwether: Connie Willis’s classic, hilarious novel about the science of trendiness https://memex.craphound.com/2016/04/26/bellwether-connie-williss-classic-hilarious-novel-about-the-science-of-trendiness/

#5yrsago The Big U https://pluralistic.net/2021/04/26/moolah-boolah/#poison-ivies


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

04:07

Ravi Dwivedi: A day in Vienna [Planet Debian]

On the 7th of September 2025, my friend Dione and I had a day trip to Vienna—the capital of Austria. We were attending a conference in Budapest, Hungary, which is 250 km from Vienna. So, it was a good opportunity to visit Vienna.

We took a morning train from Budapest to Vienna and got back to Budapest by night. However, booking these tickets turned out to be a bit complicated. There were many websites to book the train ticket—Hungarian Railways, Austrian Railways, and third-party sites such as Omio. All these websites had different prices for the same ticket.

I booked the tickets from the Hungarian Railways website as it was the cheapest. The train from Budapest to Vienna was €13, operated by Eurocity. Also, I had to pay €2 for the seat reservation on top. The train from Vienna to Budapest—operated by Railjet—was €21, along with €2 extra for reservation again—making it €23. The tickets for the two-way journey added up to €38.

The prices of these tickets were dynamic—the earlier you book, the cheaper they are. I booked these tickets more than 15 days in advance. I paid €38 for the tickets, whereas Dione paid around €100 for the tickets, as she booked at the last moment—a day before the journey.

As for seat reservations, long-distance trains in Europe usually charge extra for them. A reservation guarantees your preferred seat, such as a window or an aisle. You will get a seat either way, though, because these trains do not sell more tickets than there are seats. We reserved ours so that we could sit together, which paid off most on the return leg—Vienna to Budapest—which was more crowded than the morning train from Budapest to Vienna.

On another note, reservation is mandatory on some trains in Europe, but ours wasn’t one of them. In addition, people also use rail passes, so an extra charge is required on top for reserving the seats for pass holders. On the other hand, local trains do not require seat reservations in general.

Our train’s scheduled departure was at 08:55 from the Budapest Kelenfold station. We reached the train station 40 minutes before the train’s scheduled departure. The Kelenfold station had free Wi-Fi, which was handy because I didn’t have a local SIM.

A departures board at Budapest Kelenfold station.

A departures board at Budapest Kelenfold station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A platform on Budapest Kelenfold station.

This is platform number 15 of Budapest Kelenfold station where we boarded our train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Our train arrived on time. I looked for our coach number but couldn't see it written anywhere on the side of the coaches. Luckily, a fellow passenger helped us, directing me to look at the doors, where the numbers were clearly marked!

Then we got into our compartment and took our respective seats. Our tickets were checked twice—once while the train was in Hungary and again in Austria. Showing the PDF of the ticket on our phones to the inspector was good enough. Austria and Hungary are both part of the Schengen area, with no internal border controls, so this was the extent of the checks we had to go through.

Interior of the train.

Interior of our Budapest to Vienna train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

The train also had free Wi-Fi, albeit with a spotty connection at times. There were no food options on board.

We deboarded at the Wien Hauptbahnhof station in Vienna. The journey was 250 km and took 2.5 hours, reaching Vienna at 11:25, which was the scheduled time.

A blue and white colored train on a railway platform

This blue colored train was the one we took for our Budapest to Vienna journey. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A red colored train standing at the Vienna station

An ÖBB train standing at a platform of Vienna train station. ÖBB is the national carrier of Austria. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Wien Hauptbahnhof train station

Wien Hauptbahnhof train station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

At the station, we bought a 24-hour public transport pass from a ticket machine for €8. The pass gives unlimited access to all public transport in Vienna for 24 hours; mine was valid from the 7th of September at 11:34 to the 8th of September at 11:33. A single ticket costs €2.40 and can be used once on any public transport in Vienna—trams, metros, and buses.

Therefore, the pass is a good deal if you are going to take at least four public transport trips in a day. Unlike the public transport pass I got in Budapest, the pass in Vienna was anonymous and not tied to the rider’s name.

Public transport pass for Vienna.

My public transport pass in Vienna.

After getting our passes, we took the subway and went to the Schönbrunn Palace. We hopped on to the subway at the Wien Hauptbahnhof station and deboarded at the Schönbrunn subway station—the closest one to the palace. The ride was smooth; the train was pretty silent.

By the way, like Budapest, Vienna has no AFC (automated fare collection) gates for boarding the subway. The stations have ticket validators instead, where you are supposed to validate your ticket before entering.

Vienna subway

Instead of AFC gates, Vienna has ticket validators as in the picture. You need to tap your ticket in the validator before boarding the subway. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

These validators are in place to ensure that you use your ticket only once. Unlike AFC gates, which are present in metros of most of the countries I have been to, the ticket validators don’t act as a physical barrier to enter the boarding area.

If you board the metro without validating your ticket, you face a hefty fine if caught—I have heard it is around €100. On the other hand, if you have a public transport pass like we did, you don't need to validate it before boarding.

In addition, there were no annoying security checks either, unlike in Indian cities. In the Delhi metro, for example, you would need to scan your bags and pass through a security check before getting to the AFC gates.

Vienna subway

Vienna subway. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Now back to the story, after alighting at the Schönbrunn subway station, we walked to the Schönbrunn Palace. One can roam around outside the palace and click pictures for free. To go inside, however, requires buying tickets. The tickets for the palace can be booked in advance on the internet. We didn’t take the tickets in advance, as we decided to visit the palace at the last moment.

So we went to the ticket counter and found out that we needed to wait for 1 hour 40 minutes before going inside if we took the tickets at that moment. In addition, one ticket costs €44 (around 4000 Indian rupees). Since we had to return to Budapest in the evening and only had a few hours in the city, we decided not to go inside the palace. Instead, we clicked a few pictures outside the palace.

Photo of Schönbrunn Palace.

Schönbrunn Palace. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

The Schönbrunn Palace is a UNESCO World Heritage Site and a historically significant place: it served as one of the residences of the powerful Habsburg dynasty. The palace looked so good that my friend Dione said, "It seemed like the palace was built yesterday". The remark applied to other parts of Vienna we went to as well. For example, the subway stations also seemed like they were built yesterday.

A street near Schönbrunn Palace.

A street near Schönbrunn Palace. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Now, we wanted to go someplace to grab a bite. I asked my friend Urbec for suggestions on where to go. They suggested we visit the steps named Strudlhofstiege, which had the added benefit of being in a neighborhood with good bakeries and buildings.

So, we took the subway, deboarded at the Roßauer Lände station, and walked about a kilometer to reach the stairs.

A subway station in Vienna.

Roßauer Lände subway station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Platform of the Roßauer Lände subway station.

Platform of the Roßauer Lände subway station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

stairs with road in the front and trees in the background. Blue sky can also be seen in the background.

The Strudlhofstiege steps. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

On the way, we were also looking for a place to eat. Unfortunately, it was Sunday, and Vienna closes on Sunday. That means most of the shops—including bakeries and cafés—are closed. Only places like railway stations have shops open on Sundays.

By the way, walking around in the streets of Vienna was a treat. The streets were not crowded (as it was not exactly a touristy neighborhood) and had good pedestrian infrastructure, with clean streets and separate cycling tracks. The buildings were also beautiful.

Buildings and streets in Vienna.

A random street in Vienna.

Buildings and streets in Vienna.

Another street in Vienna.

After some walking, we found a restaurant open. I grabbed the menu to check the prices. A lady at the shop asked me what I was doing, and I told her that I was browsing the menu. She said that the menu was in German. I don’t know how she knew that we didn’t know German, but it seemed like a racist thing to be told.

We roamed around further and found a café by the name of Blue Orange, where we ordered coffee and croissants. When we got our order, the waiter told us that they were having some issues, so they wouldn’t charge us for the croissant if it wasn’t good.

Picture of a café.

A picture of Blue Orange café. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

My friend and I took a bite, and neither of us liked the croissant. After some time, the waiter came over and asked whether the croissant was okay, to which we said no. So they didn't charge us for it. This was the first time something like this had happened to me; it felt like I was in a different world. I left a small tip at the end for this gesture, which went into a jar at the counter.

The cappuccino I ordered was €4.50, while the espresso that Dione ordered was €3.60. The croissant would have been €3.60. I remember Paris having cheaper croissants!

When the waiter brought our drinks out, they assumed the espresso was mine and the cappuccino was Dione's. Dione found this funny, because there is a stereotype in her country (Australia) that men drink strong black coffee and women drink milky drinks like cappuccinos. She found it interesting that the same stereotype seems to exist in Austrian culture too.

We hopped on a tram to reach the nearest subway station and went to the Wien Hauptbahnhof station to have something before we caught our return train to Budapest.

Trams with buildings and the blue sky in the background

Trams in Vienna. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

At the station, I had Esterhazyschnitten and Punschkrapfen (thanks, Urbec, for the suggestion). The lady at the shop warned me that punschkrapfen had alcohol in it, to which I said okay.

Esterhazyschnitten was a cake made of almonds, while punschkrapfen was a jam-filled sponge cake, soaked in rum. Esterhazyschnitten was my favorite out of the two. The punschkrapfen was too sweet for my taste.

Punschkrapfen

Punschkrapfen. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Esterhazyschnitten

Esterhazyschnitten. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

While the station was well-built, there were a couple of things about Wien Hauptbahnhof that we didn't like. There were no seats inside the station, so we had to eat outside the building. The toilets also cost money—50 cents per use.

The Vienna train station had departure boards all over the place. So, we went to the platform our train was to arrive on.

A departure board in Vienna displaying information about the trains

Departure boards in Vienna displaying information about the trains. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Platform and tracks at Wien Hauptbahnhof station.

Platform and tracks at Wien Hauptbahnhof station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

When our train arrived, we had some difficulty locating our compartment. This train was a Railjet, unlike the EuroCity we had taken from Budapest in the morning. Reading the coach numbers was easy enough: each coach had a digital board next to it on the platform displaying its number. The problem was that, even after reading the numbers, our coach didn't appear where we expected it in the sequence.

When we couldn't find our coach for a while, we asked a ticket inspector standing on the platform. He directed us towards the front of the train, so we started running, as we didn't know how long the train would stop.

As we ran toward our coach, we discovered that the locomotive of a second train was coupled to the last coach of the one in front: our train was actually two trains joined together. At a later station, the rear half split off and headed for Vienna Airport.

Inside our train.

Interior of the train we took from Vienna to Budapest. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A red colored train standing on the platform of Budapest Kelenfold station.

This is the train we took for our return journey from Vienna to Budapest. It is standing on a platform in Budapest Kelenfold station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

We had a smooth journey and reached Budapest a couple of hours later.

Vienna is a beautiful city; we enjoyed being there, and we would like to visit the city again!

That’s it for now. Signing off. See you in the next one!

Credits: Thanks to Dione and Badri for proofreading.

02:21

What makes the web? [Scripting News]

I’ve been trying to come up with a simple test that lets you know whether some software is on the web, or whether it can merely be made to appear in a web browser. So here we go.

If you can hook up a piece of one app to a piece of another app, then it’s on the web.

This comes from the basic feature of linking, which is the unique feature of the web.

Every other feature that makes the web the web in my experience allows two things to be part of each other.

Comment here.

01:49

Music For Your Monday: Tame Impala’s “Dracula” [Whatever]

I heard an absolute banger of an earworm this past week, and have been listening to it nonstop ever since. I want to bestow upon y’all Tame Impala’s new song, “Dracula.”

If you had asked me a week ago if I liked Tame Impala, I would’ve said I was completely indifferent about him and couldn’t even name a song from him. That is still true except for “Dracula.” This song is an absolute home-run of a bop, and there’s even a remix version with JENNIE which is also very good. Here’s both versions for your listening pleasure!

And the JENNIE version:

I have been debating which version I like better, and honestly it’s so hard to decide. I listen to both an equal amount, and both are great. Can’t go wrong with the original, but I love JENNIE’s ethereal voice and the harmonizing with Tame Impala.

My favorite part of the song is how they make “Dracula” rhyme with “spectacular.” Stellar stuff, really.

I hope you enjoy this bop, and that it helps you get movin’ and groovin’ through your next week!

-AMS

00:56

Tell Congress: Oppose the GUARD Act [EFF Action Center]

The GUARD Act may look like a child-safety bill, but in practice it’s a sweeping age-gating mandate that could apply to nearly every public-facing chatbot, from customer service tools to search assistants. It would require companies to collect sensitive identity data and chill online speech. The bill would also block teens from tools they rely on every day—as well as adults who cannot prove they are over 18.

EFF has long warned that age-verification laws undermine free expression, privacy, and competition. The GUARD Act is no different. It would make the internet less free, less private, and less accessible—while consolidating power in the largest tech companies and pushing smaller developers out.

There are real concerns about harms caused by AI systems, especially for young people. But the GUARD Act responds with a blunt, overbroad solution. Instead of addressing specific risks, it imposes sweeping restrictions that affect us all.

Congress should reject the GUARD Act and focus on policies that protect users without sacrificing privacy and access.

Tell your representatives to oppose the GUARD Act now.

00:00

Urgent: Public education V vouchers [Richard Stallman's Political Notes]

US citizens: call on your federal legislators in Congress to repeal federal school vouchers and protect public education.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

How Israel struck hospitals in Lebanon [Richard Stallman's Political Notes]

*Israel escalates attacks on medics in Lebanon with deadly "quadruple tap".*

Friendly fire info as terror, Kuwait [Richard Stallman's Political Notes]

A Kuwaiti-American journalist was visiting Kuwait and made footage of the mistaken shooting of an American F-15 and reported on this. Since then, Kuwait has arrested him, possibly for publishing that, or possibly for other journalism, under repressive new "terrorism" laws which can define journalism as "terrorism" under rather vague conditions.

Monday, 27 April

23:28

Dillo 3.3.0 released [OSnews]

Dillo is an amazing web browser for those of us who want a calmer, less flashy web browsing experience. Dillo also happens to be a very UNIX-y browser, and its latest release, 3.3.0, underlines that.

A new dilloc program is now available to control Dillo from the command line or from a script. It searches for Dillo by the PID in the DILLO_PID environment variable or for a unique Dillo process if not set.

↫ Dillo 3.3.0 release notes

You can use this program to control your Dillo instance, with basic commands like reloading the current URL, opening a new URL, and so on, but also things like dumping the current page’s contents. I have a feeling more commands and features will be added in future releases, but for now, even the current set of commands can be helpful for scripting purposes. I’m sure some of you who live and die in the terminal are already thinking of all the possibilities here.

You can now also add page actions to the right-click context menu, so you can do things like reload a page with a Chrome curl impersonator to avoid certain JavaScript walls. This, too, is of course extensible. Dillo 3.3.0 also brings experimental support for building the browser with FLTK 1.4, and implemented a fix specifically to make OAuth work properly.

22:42

Ubuntu is going to integrate “AI”, but Canonical remains vague about the how and why [OSnews]

Ubuntu, being one of the more commercial Linux distributions, was always going to jump on the “AI” bandwagon, and Jon Seager, Canonical’s VP Engineering, published a blog post with more details.

Throughout 2026 we’ll be working on enabling access to frontier AI for Ubuntu users in a way that is deliberate, secure, and aligned with our open source values. By focusing on the combination of education for our engineers, our existing knowledge of building resilient systems and our strengthening silicon partnerships, we will deliver efficient local inference, powerful accessibility features, and a context-aware OS that makes Ubuntu meaningfully more capable for the people who rely on it

Ubuntu is not becoming an AI product, but it can become stronger with thoughtful AI integration.

↫ Jon Seager at Ubuntu Discourse

The problem with this entire post is that, much like all other corporate communications about “AI”, it’s all deceptively vague, open-ended, and weaselly. Adjectives like “focused”, “principled”, “thoughtful”, and “tasteful” don’t really mean anything, and leave everything open for basically every type of slop “AI” feature under the sun. Their claims about open weights and open source models are also weakened by words like “favour” and “where possible”, again leaving the door wide open for basically any shady “AI” company’s models and features to find their way into your default Ubuntu installation.

There’s also very little in terms of concrete plans and proposed features, leaving Ubuntu users in the dark about what, exactly, is going to be added to their operating system of choice during the remainder of the year. There are mentions of improved text-to-speech/speech-to-text and text regurgitators, but that’s about it. None of it feels particularly inspired or ground-breaking, and the veneer of open source, ethical model creation, and so on, is particularly thin this time around, even for Canonical.

I don’t really feel like I know a lot more about Canonical’s “AI” intentions for Ubuntu after reading this post than I did before, other than Ubuntu users might be able to generate text in their email client or whatever later this year. Is that really something anybody wants?

22:14

This Week’s Weird Sideswipe by Current Events [Whatever]

Hello to the FBI/Secret Service/NSA people now monitoring this account because apparently the attempted shooter liked a few of my posts in the last month, here's a picture of my cat to get you started

John Scalzi (@scalzi.com) 2026-04-26T18:50:39.094Z

Apparently it’s true: The fellow who came to the Correspondents’ Dinner the other night with a bunch of weapons (and who, it should be noted, came nowhere near the president or anyone else in the ballroom), liked four Bluesky posts of mine in the last month. Which ones? I have no idea, although a cursory view of my last month of Bluesky posts shows nothing particularly spicy in a political sense. This does not surprise me, as I usually send all my really spicy political takes to Threads. Most of the last month of Bluesky posts for me were about JoCo Cruise, whacking on “AI,” photos of cats and Krissy, and talking about writing. Maybe this dude liked cat pictures? He’s arrested now and his Bluesky account is down in any event. We may never know.

My feeling about this is pretty much the same feeling I have about being in the Epstein Files: What the fuck, it’s not great, and also, it doesn’t actually have much to do with me, I’m mostly being sideswiped by this weird damn moment we’re in. I certainly don’t condone attempting to kill the president. Any president, and also, this one in particular. Among other things that would take away the fun of watching him one day rotting in prison along with the rest of his corrupt and horrible family and administration. Keep him alive! For justice!

I’m joking here about being on a federal watch list now, but I should be clear I’m pretty sure I already have an FBI file, and also that this FBI file is really super boring, so anything relating to this will almost certainly be funneled into that. I recently did an FOIA request for my file, so I suppose I will find out soon enough. In the meantime I’ll just have to imagine.

I’ve been informed that some of the folks associated with the Sad Puppies are trying to make hay of my tangential association to this fellow, which, I guess, they would, loud bad logic has always been their MO. My first thought is that when you’re related to an actual successful presidential assassin, a failed one liking your social media posts is weak sauce. My second thought was, huh, the right-wing chudguzzlers are whining about me again, whenever they do that something nice happens with my career, wonder what it will be this time. And indeed, today I got a foreign language offer on one of my books, which I happily accepted. It’s correlation, not causation, to be sure. But it sure does correlate a lot. So keep it up, right-wing chudguzzlers! We’re having our back deck rebuilt, I could use a few more foreign sales. Thanks in advance for your help.

— JS

21:42

Link [Scripting News]

Busy day working on new RSS-based project. Still diggin!

20:21

pip 26.1 released [LWN.net]

Version 26.1 of the pip package installer for Python has been released. Richard Si has published a blog post that looks at some of the highlights of 26.1 including dependency cooldowns, experimental support for pylock (pylock.toml) files, and resolver improvements that will move pip closer to the goal of removing its legacy resolver. The release also includes several security fixes and drops support for Python 3.9.

19:28

Microboned [Penny Arcade]

Discord used to be a tool that I leveraged to communicate with friends and erstwhile allies, but over the years it's increasingly become something like a car up on blocks in my front yard - something to tinker with, absent any prospect or expectation of continuous functionality. I have to constantly remind it that I don't want to use the speaker in my monitor. And mics? "Forget about it." I would say that this is an unforgivable sin but I know at least one other person who might actually prefer this state of affairs. Also, this really happened. So.

18:49

18:07

[$] The rest of the 7.1 merge window [LWN.net]

By the time Linus Torvalds released 7.1-rc1 and closed the 7.1 merge window, 12,996 non-merge changesets had been pulled into the mainline repository; just over 9,000 of those arrived after the first-half summary was written. These changes were more driver-oriented than those seen earlier, but still also included many new features across the kernel as a whole.

18:00

Looking at consequences of passing too few register parameters to a C function on various architectures [The Old New Thing]

In our exploration of calling conventions for various processors on Windows, we learned that in many cases, some of the parameters are passed in registers.

Suppose that there is a function that takes two parameters, but you know that the function ignores the second parameter if the first parameter is not positive. What happens if you call the function with just one parameter (say, passing zero)? The function should ignore the second parameter, so why does it matter that you didn’t pass one?

Even though the function doesn’t use the parameter, it still may decide to use the storage for that parameter as a conveniently provided scratch space. For example:

int blah(int a, int b)
{
    if (a <= 0) {
        int c = f1();
        f2(a);
        return c;
    } else {
        return f3(a, b);
    }
}

Is it okay to call blah with zero as its only parameter? You aren’t passing b, but the function doesn’t use b, so why does it matter?

Formally, the C and C++ languages say that if you call a function with the wrong number of parameters, the behavior is undefined, so officially, you’ve broken the rules and anything can happen.

But let’s look at what types of things could go wrong.

If you pass too few parameters on the stack, and it is a callee-clean calling convention, then the callee will clean too many bytes off the stack, resulting in stack imbalance and likely memory corruption.

Even if it’s not a callee-clean calling convention, the called function will think that the memory for the parameter is present, and it may use it as scratch space, resulting in memory corruption in the stack frame of the calling function.

In our example above, the compiler might realize, “Hey, I don’t need to allocate new memory for the variable c. I can just reuse the memory that holds the now-dead variable b.” In other words, it rewrites the function as

int blah(int a, int b)
{
    if (a <= 0) {
        b = f1();
        f2(a);
        return b;
    } else {
        return f3(a, b);
    }
}

Even though you didn’t reserve memory for the variable b, the compiler will assume that you did, and it will overwrite whatever happens to be at the location where that memory should have been.

But what if the parameters are passed in registers, and you didn’t pass enough of them?

On most processors, what happens is that the called function will try to use that register and read whatever uninitialized value happens to be lying in that register.

Except on Itanium.

One special Itanium quirk is the presence of the “Not a Thing” (NaT) bit, which is a bit attached to each general purpose register that indicates whether the register holds a valid value. The most common ways for a register to enter the NaT state are if it was the result of a failed speculative load, or if it was the result of a mathematical calculation where at least one of the inputs was itself NaT. Therefore, if your uninitialized output register happens to be a NaT left over from an earlier failed speculation, the called function might decide to spill the value onto the stack for safekeeping before using that register for something else.

extern bool is_valid(int);

int blah2(int a, int b)
{
    if (is_valid(a)) {
        return f3(a, &b);
    } else {
        return 0;
    }
}

The compiler realizes that it needs to take the address of b if a is not valid, so it has to spill the value to memory (so that it can have an address). But writing a NaT to memory raises a “NaT consumption” exception, so this function crashes even in the case where it never actually uses the b variable.

But wait, there’s more.

On Itanium, the function call mechanism is architectural rather than merely conventional. The calling function declares the number of output registers (registers that will be passed to the called function), and those registers are renumbered on entry to the called function so that they are visible starting at register r32. If a calling function says “I am passing 2 registers,” then the called function sees them as registers r32 and r33. I covered the details some time ago, but leaf functions are particularly interesting.

Leaf functions are functions that do not create a custom stack frame and simply make do with the architectural stack frame that the processor creates for them by default. And that default stack frame consists only of the inbound parameter registers. In the case of passing too few parameters to a function, that means that the default stack frame contains fewer registers than the function expects.

Architecturally, the rule is that if you read from a stacked register that lies outside the current frame, the results are “undefined”. I couldn’t find a formal definition of “undefined” in the Itanium documentation (though it’s eminently likely that I simply missed it), but I assume it means “can produce any result, including an exception, that is not dependent upon information outside the current processor execution mode.”¹ In particular, it can raise a processor exception, say, because the value of that stacked register happens to contain a leftover NaT.

The Itanium architecture takes an even stronger stance against writing a stack register that lies outside the current frame: It is required to raise an Illegal Operation fault.

I can imagine it being weird seeing an exception come out of a register-to-register move instruction.

So there you go, another case where the Itanium architecture more strictly enforces a programming rule, in this case, making sure that you pass the correct number of parameters to a function.

¹ This means that, for example, an “undefined” result in user-mode code cannot be dependent upon information available only to kernel mode.

The post Looking at consequences of passing too few register parameters to a C function on various architectures appeared first on The Old New Thing.

17:21

Four new stable kernels for Monday [LWN.net]

Greg Kroah-Hartman has announced the release of the 7.0.2, 6.18.25, 6.12.84, and 6.6.136 stable kernels. As usual, each contains important fixes throughout; users are advised to upgrade.

15:49

LibreLocal meetup in London, England, United Kingdom [Planet GNU]

May 16, 2026 at 12:00 BST (11:00 UTC).

LibreLocal meetup in Neuchâtel, Switzerland [Planet GNU]

May 21, 2026 at 16:00 CEST (14:00 UTC).

LibreLocal meetup in València, Spain [Planet GNU]

May 16, 2026 at 10:30 CEST (08:30 UTC).

LibreLocal meetup in Brasília, Distrito Federal, Brasil [Planet GNU]

May 22, 2026 at 18:00 BRT (21:00 UTC).

15:07

LibreLocal meetup in Tarragona, Catalunya, Spain [Planet GNU]

May 8, 2026 at 15:00 CEST (13:00 UTC).

pgBackRest is no longer maintained [LWN.net]

David Steele, maintainer of the popular pgBackRest backup and restore project for PostgreSQL, has archived the project and announced that it is no longer being maintained.

After a lot of thought, I have decided to stop working on pgBackRest. I did not come to this decision lightly. pgBackRest has been my passion project for the last thirteen years, and I was fortunate to have corporate sponsorship for much of this time, but there were also many late nights and weekends as I worked to make pgBackRest the project it is today, aided by numerous contributors. Every open-source developer knows exactly what I mean and how much of your life gets devoted to a special project.

Since Crunchy Data was sold, I have been maintaining pgBackRest and looking for a position that would allow me to continue the work, but so far I have not been successful. Likewise, my efforts to secure sponsorship have also fallen far short of what I need to make the project viable.

[$] Zig explores structured concurrency [LWN.net]

Version 0.16.0 of the Zig programming language was recently announced, and with it an expanded version of the new Io interface that we covered in December. The new interface is based on an idea called structured concurrency that makes writing correct concurrent applications easier. Zig's implementation of the idea is more explicit and verbose than other languages, however, which could offer an opportunity to explore the consequences of different designs.

The future of AI in Ubuntu [LWN.net]

Jon Seager, VP engineering for Canonical, has posted an update on "what Canonical and Ubuntu will do (or not) to incorporate AI" that explains what part AI will play in the future of the company and its distribution.

The bottom line is that Canonical is ramping up its use of AI tools in a focused and principled manner that favours open weight models with license terms that feel most compatible with our values, combined with open source harnesses. AI features will be landing in Ubuntu throughout the next year as we feel that they're of sufficient maturity and quality, with a bias toward local inference by default.

AI features in Ubuntu will come in two forms: first as a means of enhancing existing OS functionality with AI models in the background, and latterly in the form of "AI native" features and workflows for those who want them.

This year Canonical has begun a more deliberate push toward education and developing competence with AI tools. We are not setting shallow metrics on token usage, or percentages of code written with AI, but rather incentivising engineers to experiment and understand where AI tools add value. Rather than force a single early-choice AI stack, we're incentivising teams to each pick 'something different' and go deep, so we learn more as an org in the next six months.

Niri 26.04 released [LWN.net]

Version 26.04 of the niri scrollable-tiling Wayland compositor has been released. The most notable change in this release, as the "most requested niri feature by far", is support for background blur via the ext-background-effect Wayland protocol. This release also features optional configuration includes, screencasting support enhancements, and a number of improvements for input devices.

In short, background blur turned out to be a massive undertaking. Not because of the blur algorithm itself (by the way, if you want to learn about different blurs, including the widely used Dual Kawase, I highly recommend this blog post), but because window background effects in general required a lot of thinking and additions to the code, especially to make them as efficient as possible. This is one of the most complex niri features thus far.

LWN covered niri in July 2025.

14:21

Security updates for Monday [LWN.net]

Security updates have been issued by AlmaLinux (java-25-openjdk, kernel, osbuild-composer, thunderbird, webkit2gtk3, and wireshark), Debian (chromium, distro-info-data, libde265, mbedtls, and thunderbird), Fedora (awstats, bind9-next, bpfman, buildah, calibre, cef, chromium, composer, corosync, coturn, cups, curl, dnsdist, doctl, erlang, fido-device-onboard, flatpak-builder, freetype, glab, goose, jq, kea, libarchive, libcap, libcgif, libgsasl, libinput, libmicrohttpd, libpng, libpng12, libpng15, mapserver, mbedtls, micropython, minetest, mingw-exiv2, mingw-libpng, mingw-LibRaw, mingw-openexr, mingw-python3, moby-engine, mupdf, nginx, nginx-mod-brotli, nginx-mod-fancyindex, nginx-mod-headers-more, nginx-mod-modsecurity, nginx-mod-naxsi, nginx-mod-vts, opam, openbao, opensc, openssh, openssl, opkssh, perl-Net-CIDR-Lite, pgadmin4, pie, podman, pspp, pypy, python-biopython, python-cairosvg, python-cbor2, python-cryptography, python-flask-httpauth, python-msal, python-pillow, python-pydicom, python-tomli, python3-docs, python3.13, python3.14, python3.15, python3.9, rauc, roundcubemail, rpki-client, rust-sccache, skopeo, smb4k, stb, sudo, tcpflow, thunderbird, tigervnc, tinyproxy, trafficserver, trivy, usd, util-linux, vim, xdg-dbus-proxy, xorg-x11-server, xorg-x11-server-Xwayland, and yarnpkg), Oracle (buildah, golang, grafana, java-17-openjdk, and java-25-openjdk), and SUSE (chromium, cockpit-podman, coredns, corosync, cups, dnsdist, flatpak, freerdp2, frr, gdk-pixbuf, golang-github-prometheus-alertmanager, golang-github-prometheus-prometheus, google-guest-agent, haproxy, ignition, ImageMagick, kernel, kyverno, libcap, libminizip1, libpng16, librsvg, libXpm-devel, Mesa, opensc, openssl-3, ovmf-202602, PackageKit, podman, python-ecdsa, python-pillow, python311-Mako, sudo, thunderbird, tomcat, tomcat10, and vim).

14:14

LibreLocal meetup in Toronto, Ontario, Canada [Events]

May 18, 2026 at 18:00 EDT (22:00 UTC).

LibreLocal meetup in Brantford, Ontario, Canada [Events]

May 17, 2026 at 13:45 EDT (17:45 UTC).

LibreLocal meetup in Salamanca, Salamanca, Spain [Events]

May 7, 2026 at 17:00 CEST (15:00 UTC).

13:35

CodeSOD: The JSON Template [The Daily WTF]

We rip on PHP a lot, but I am willing to admit that the language and ecosystem have evolved over the years. What started as an ugly templating language is now just an ugly regular language.

But what happens when you still really want to do things with templates? Allison has inherited a Python-based WSGI application which rejects any sort of formal routing or basic web development best practices. Their way of routing requests is simply long chains of "if condition then invokeA elif otherCondition then invokeB". Sometimes, those conditions will directly set the MIME type on the HTTP response.

They do use a templating library called Mako for generating their responses. They use it for their HTML responses, obviously. They also use it for their JSON responses, generating code like this:

{
    "success": true,
    "items": {
        %for item in items_available.keys():
        "${item}": ${items_available[item]}${',' if not loop.last else ''} 
        %endfor
        }   
}

The %for and matching %endfor mark the Python code off, which generates JSON via string-munging, complete with the check to make sure we're not on the last iteration of the loop.

Like so much bad code, this offers a degree of fractal wrongness. Instead of iterating over the keys and fetching each value from the dictionary inside the loop, you could iterate with for key, value in items_available.items(); according to the Mako docs, that for is just a regular Python for loop. That we're just outputting the raw contents of the dictionary is itself potentially a problem: sure, if we know the value types in the dictionary, we'll know that whatever is there can be output in the body of a JSON document, but do we really think this code is using type annotations? I don't. And for a RESTful web service, I'm always going to feel weird about using a success field when ideally the HTTP status code could convey most of that information (and yes, I know there are reasons to still put status in the body; I just hate it).

Of course, the real issue is just this: Python's built-in JSON serialization is actually pretty advanced. And performant! You don't need any of this; you could just do something like:

return json.dumps({"success": True, "items": items_available})

No templates. No formatting. No worries about how the data gets represented. Well, still some worries, because the JSON serializer will throw an exception if it doesn't know what to do with a type. But then at least you get that exception on the server side and aren't sending the client a malformed document.
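To make that trade-off concrete, here is a minimal sketch (items_available is just a stand-in dict, not the application's real data): json.dumps emits the quoting and commas itself, and it refuses to emit anything at all when a value isn't serializable.

```python
import json

# A well-behaved payload serializes cleanly, quoting and commas included.
items_available = {"widget": 3, "gadget": 0}
body = json.dumps({"success": True, "items": items_available})
print(body)  # {"success": true, "items": {"widget": 3, "gadget": 0}}

# An unserializable value fails loudly on the server side, before a
# malformed document can ever reach the client.
try:
    json.dumps({"success": True, "items": {"blob": object()}})
except TypeError as exc:
    print("refused to serialize:", exc)
```

Note that Python's True comes out as JSON's lowercase true: the serializer, not the template author, owns the wire format.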

In any case, this is a good demonstration that you can write bad PHP in any language.


13:21

Show Your Work: The Case for Radical AI Transparency [Radar]

A colleague told me something recently that I keep thinking about.

She said, unprompted, that she appreciated seeing both sides of my AI conversations. Not just the output. The full thread. My prompts, the AI’s responses, the back and forth, the dead ends, the iterations. She said it made her trust me more.

This piece is an example of that. The conversation that produced it exists. A raw transcript would be longer, messier, and significantly less useful than what you’re reading now. What you’re reading is the annotated version, the part where judgment entered the artifact. That’s not a disclaimer. That’s the argument.

I’ve been transparent about using AI in my work from the start. Partly because I wrote a book on data ethics and hiding it felt wrong. Partly because I’ve spent 25 years watching technology adoption go sideways when the human dimension gets treated as an afterthought. But her comment made me realize something more specific was happening when I showed the conversation rather than just the output.

It’s worth unpacking why.

An old problem, a new incarnation

In the 1990s, Harvard Business School professor Dorothy Leonard introduced the concept of “deep smarts” in her book Wellsprings of Knowledge: the experience-based expertise that accumulates over decades of practice, the kind of judgment that lives in people’s heads and doesn’t reduce to documentation. She also introduced a companion concept that has stayed with me: core competency as core rigidity. The very depth that makes expertise valuable also makes it hardest to transfer. Experts often can’t fully articulate what they know because they’ve stopped experiencing it as knowledge. They experience it as just seeing clearly.

Leonard’s work was about organizational knowledge transfer: how companies preserve institutional wisdom when experienced people retire or leave. That’s been a challenge since the first consultant ever billed an hour. What’s different right now is that the tools to actually solve it have arrived simultaneously with the largest demographic wave of executive retirement in American history.

What’s interesting about this particular moment is that the same dynamic is now showing up at the individual level in how practitioners interact with AI. The tacit knowledge at stake isn’t a retiring VP’s intuition. It’s your own judgment, your own expertise, your own hard-won understanding of what a project or organization actually needs. And the question isn’t how to transfer it before you walk out the door. It’s whether you can see it clearly enough to know when the AI is substituting for it.

The instinct gets it backwards

The natural impulse is to clean up the AI interaction before sharing anything with a collaborator, a team, or a stakeholder. Show the polished output, not the messy process. You don’t want them thinking you just handed your work to a machine.

That instinct produces a disingenuous outcome.

When you hide the process, the people you’re working with have no way to evaluate how the work was made, what judgment calls went into it, or where your expertise ended and the AI’s pattern-matching began. You’ve made the process invisible. And invisible AI processes erode trust, slowly and quietly, over time.

The instinct to hide is also, if we’re honest, a little defensive. It assumes the people in the room can’t tell the difference between AI output and practitioner judgment. Most of them can. And the ones who can’t yet will figure it out. Hiding the seams doesn’t make the work more credible. It just defers the reckoning.

The deeper problem: It’s not just about appearances

Here’s what took me longer to see.

Hiding the process doesn’t just affect how others perceive you. It erodes your own clarity about where your expertise is actually operating.

To understand why, it helps to be precise about what AI actually is. AI is a pattern matcher, a deeply sophisticated one, trained on more human-generated content than any single person could read in a thousand lifetimes. That’s its power (core competency) and its limitation (core rigidity) simultaneously, and the two are inseparable. The very scale that makes it extraordinary is also the boundary that defines what it cannot do. It is extraordinarily good at producing the most likely next thing given what came before. What it cannot do is know what you actually need, when the obvious answer is the wrong one, or when the stated goal isn’t the real goal. It has no judgment about context, relationship, or organizational reality. It has patterns. Incomprehensibly vast ones. But patterns.

That distinction matters because of what happens when you stop paying attention to it.

I’ve watched it happen in my own work. You share a draft with someone and they’re impressed. They quote a formulation back at you, something that sounds sharp and considered. And you realize, tracing it back, that the formulation came from the AI. Not because the AI invented it, but because you said something rougher and less precise earlier in the conversation, and the AI reflected it back in cleaner language. The idea was yours. The AI gave it a polish you then forgot to account for. The person quoting it back thought they were seeing your judgment. They were seeing your thinking laundered through a pattern matcher and returned to you at higher resolution.

That’s the subtler version of the problem. Not that AI invents things. It’s that it can reflect your own thinking back with more confidence and clarity than you put in, and that gap is easy to mistake for the AI contributing something it didn’t.

When you route everything through a polished output layer, you stop noticing the moments where you pushed back, redirected, rejected the first three versions, reframed the question entirely. Those moments are where your judgment lives. They’re the difference between using AI and being used by it. It’s Leonard’s core rigidity problem, applied inward: The very fluency that makes AI feel useful can make your own expertise invisible to you.

When the process stays hidden, the knowledge stays local and static. When it’s visible, it becomes something you and the people around you can actually work with and build on. The reason transparency benefits your audience is the same reason it benefits you: It keeps the scope of your judgment visible and therefore expandable. That’s not just an ethical argument. That’s the amplification mechanism.

Which is also what makes the upside real rather than consoling. When you stay in the process rather than just collecting outputs, work that would have taken days now takes hours. Your thinking gets sharper because you have to articulate it precisely enough for the AI to be useful. The people developing fastest right now aren’t the ones offloading the most. They’re the ones using AI as a thinking partner and staying in the conversation.

Here’s the paradox at the center of it: The more clearly you see the AI as a pattern matcher, the more human you have to be in working with it. The more human you are, the more useful the output. The tool doesn’t replace the practitioner. It reveals them.

Transparency isn’t just an ethical practice. It’s a cognitive one.

Radical AI transparency in practice

I’ve started calling this radical AI transparency. Not a policy, not a compliance framework, not a disclosure checkbox. A practice. Something you can actually do Monday morning.

Here’s how it shows up concretely:

Have the conversation before you need to.

Before you’re deep in a project or collaboration, surface how you use AI and genuinely explore how others do. Not as a disclosure (“I want you to know I use AI tools”) but as a real exchange. What are you using? What do you trust it for? Where are you still skeptical? The comfort level and sophistication in the room will vary more than you expect, and knowing that before you’re mid-deliverable matters.

This is also how you build the psychological foundation for showing your work later. If the people you’re working with have never heard you talk about AI before and you suddenly share a full chat thread, it lands differently than if you’ve already had the conversation.

Track the full threads.

This is partly an orchestration problem and I won’t pretend otherwise. There’s cutting and pasting involved. The tools haven’t caught up to the practice yet, which is itself worth naming honestly when the topic comes up.

A few approaches that help: a running document per project where you paste key threads as they happen (not retroactively, you’ll never do it retroactively), dated and labeled by what you were working on. Claude and most other major AI tools now offer conversation export, which produces a complete record you can archive. The low-tech version, a single shared document per engagement, is underrated for its simplicity.

The reason to do this isn’t just for sharing. It’s for your own reference. Being able to go back and see what you asked, what the AI produced, what you changed and why, builds a record of your judgment over time. That record is professionally valuable in ways that are hard to anticipate until you have it.

Annotate before you share.

Not every thread is self-explanatory to someone who wasn’t in it. Context is everything, and raw transcripts without context are a lot to ask anyone to parse.

A sentence or two before the thread begins. A note at the moment where the direction changed. A brief flag on what you rejected and why. This is where your voice enters the artifact, and it transforms a raw AI exchange into a demonstration of judgment. The annotation is the work. It’s where you show what you saw that the AI didn’t, what you knew that the prompt couldn’t capture, and what made the third version better than the first two.

This is also where the most useful material for future reference lives. Annotations are the deep smarts layer on top of the raw exchange. They’re what makes a conversation a record.

Be real about the errors.

AI makes mistakes. It conflates, confabulates, and hallucinates. It gives you the confident wrong answer with the same tone as the confident right one. It misses context that any competent person in the room would have caught.

These aren’t bugs to apologize for or hide. They’re the clearest window into what the tool actually is. AI makes mistakes in a specifically human way because it was trained on human output. Think of it as rubber duck debugging at professional scale. The AI is a duck that talks back, which is useful and occasionally misleading, which is exactly why you have to stay in the room. When you’re transparent about the errors, and even a little good-humored about them, you’re teaching the people around you something true about the technology. That’s more useful than pretending it’s a black box that either works or doesn’t.

The people who build the most durable trust around AI are usually the ones most comfortable saying: “The first version of this was wrong and here’s how I caught it.”

The bigger picture

What I’ve described so far is an individual practice. But the same principles scale.

Teams and organizations adopting AI face a version of the same problem. The impulse to treat AI outputs as authoritative, to make the process invisible to colleagues and stakeholders, to optimize for the appearance of capability rather than its actual development, produces the same trust erosion. Just at greater scale and with less ability to course-correct.

The teams that will navigate AI adoption well are the ones that treat transparency not as a risk to manage but as a methodology. Where the process of building with AI, including the corrections, the overrides, the moments where human judgment superseded the model, is part of how the organization learns what it actually believes and values. That’s Leonard’s knowledge transfer problem at institutional scale, and the practitioners who understand both dimensions will be the ones leading those conversations.

That’s a much larger conversation. But it starts with the same Monday morning practice.

Show the conversation. Not just the output.

What you’re actually demonstrating

When you show your AI conversations, you’re not demonstrating that you needed help.

You’re demonstrating that you understand what you’re working with. AI is a pattern matcher, trained on more human-generated content than any single person could read in a thousand lifetimes. What it cannot do is know what you need. That requires judgment, context, relationship, and the kind of hard-won expertise that doesn’t reduce to pattern matching, no matter how good the patterns are.

You’re demonstrating that you know the difference between the pattern and the judgment. That you were present enough in the process to know when to push back, when to redirect, when to throw out the output entirely and start over. That you understand, precisely, what the tool can and cannot do, and that you stayed in the room to do the part it can’t.

That’s a meaningful professional signal. It says: “I am not confused about what AI is. I am not outsourcing my judgment. I am using a very powerful pattern matcher as a thinking partner, and I know which one of us is doing which job.”

That’s the work. That’s always been the work.

The tool just makes it visible now. That’s not a threat. That’s an opportunity.


Claude is a large language model developed by Anthropic. Despite having read more human-generated content than any person could consume in a thousand lifetimes, it still required significant editorial direction, at least three rejected drafts, and occasional reminders about em-dashes. The full conversation transcript is available upon request. It is longer, messier, and significantly less useful than what you just read. Which was rather the point.

Emergency Pedagogical Design: How Programming Instructors Are Scrambling to Adapt to GenAI [Radar]

ChatGPT has been publicly available for over three years now, and generative AI is woven into the tools students use every day: web search, word processors, code editors. You might assume that by now, most programming instructors have figured out how to handle it. But when my collaborators and I went looking for computing instructors who had made meaningful changes to their course materials in response to GenAI, we were surprised by how few we found. Many instructors had updated their course policies, but far fewer had actually redesigned assignments, assessments, or how they teach.

I’m Sam Lau from UC San Diego, and together with Kianoosh Boroojeni (Florida International University), Harry Keeling (Howard University), and Jenn Marroquin (Google), we’re presenting a research paper at CHI 2026 on this topic. We wanted to understand: What happens when programming instructors try to shape how students interact with GenAI tools, and what gets in their way?

To find out, we interviewed 13 undergraduate computing instructors who had gone beyond policy changes to make concrete updates to their courses: redesigning assignments, building custom tools, or overhauling assessments. We also surveyed 169 computing faculty, including a substantial proportion from minority-serving institutions (51%) and historically Black colleges and universities (17%). What we found is that instructors are doing a kind of design work that nobody trained them for, under conditions that make it very hard to succeed.

Here’s a summary of our findings:

Findings from 13 undergraduate computing instructors

What is “emergency pedagogical design”?

We call this work emergency pedagogical design, drawing an analogy to the “emergency remote teaching” that instructors had to perform when COVID-19 forced courses online overnight. Just as emergency remote teaching was distinct from carefully designed online learning, emergency pedagogical design is distinct from thoughtfully integrating AI into pedagogy. Instructors are reacting in real time, with limited resources and no playbook.

We observed four defining properties. First, the work is reactive: Instructors didn’t plan for GenAI; they’re retrofitting courses that were designed before these tools existed. Second, it’s indirect: Unlike a UX designer who can change an interface, instructors can’t modify ChatGPT or Copilot, so they can only try to influence student behavior through policies, assignments, and course infrastructure. Third, instructors rely on ambient evidence like office-hour conversations and staff anecdotes rather than controlled evaluations. And fourth, instructors feel pressure to act now rather than wait for research or best practices to emerge.

Five barriers instructors keep hitting

Across our interviews and survey, five barriers came up again and again.

Fragmented buy-in. Most instructors we surveyed were personally open to adopting GenAI in their teaching: 81% described themselves as open or very open. But only 28% said the same about their colleagues. The result is that instructors who want to make changes often work in isolation, piloting course-specific tweaks without support or coordination from their departments.

Policy crosswinds. In the absence of top-down guidance, instructors set their own GenAI policies on a per-course basis. As one instructor put it, “From a student perspective, it’s the wild west. Some courses allow GenAI usage, some don’t.” Students have to track different rules for every class, and policies rarely distinguish between paid and unpaid tools, or between stand-alone chatbots and GenAI embedded in everyday software like code editors. 78% of surveyed instructors agreed that unequal access to paid GenAI tools could worsen disparities in learning outcomes.

Implementation challenges. Instructors wanted to shape how students used GenAI, not just whether they used it, but their options were indirect. Some made small adjustments, like permitting GenAI in specific labs. Others went further: One instructor required students to submit design documents before asking GenAI to generate code; another built a custom chatbot that offered conceptual help without writing code for students. 80% of surveyed instructors rated GenAI integration as important or very important, but only 37% reported often using GenAI tools in course activities.

Assessment misfit. Several instructors described a striking pattern: Students performed well on take-home assignments but struggled on proctored assessments. One instructor reported that a third of his 450-person class scored zero on a skill demonstration that required writing a short function from scratch, even though assignment grades had been fine. The problem wasn’t just that students were using GenAI to complete homework; it was that instructors had no reliable way to see how students were interacting with these tools day-to-day. Some instructors responded by shifting credit toward oral “stand-up” meetings and written explanations, but this created new challenges around grading consistency and staffing.

Lack of resources. This was the barrier that tied everything together. 53% of surveyed instructors said they lacked sufficient resources to implement GenAI effectively, and 62% said they didn’t have enough time given their workload. The gap was especially stark at minority-serving institutions: MSI instructors were more likely to report insufficient resources (62% vs. 43%) and heavier teaching loads (70% teaching 3+ courses per term versus 54%). All 10 respondents who taught six or more courses per term were from MSIs. Meanwhile, the interviewees who had made the most ambitious changes tended to have lighter teaching loads, external funding, or the ability to hire lots of course staff, advantages that most instructors don’t have.

What needs to change

One striking finding is that the instructors doing the most to improve student-AI interactions were also the most privileged in terms of time, staffing, and funding. One instructor needed over 50 course staff members to run weekly stand-up meetings for 300 students. Others spent their own money on API costs. These are not scalable models.

If only well-resourced institutions can afford to adapt their curricula, GenAI risks widening the very inequities that education is supposed to reduce. Students at under-resourced institutions could fall further behind, not because their instructors don’t care but because those instructors are teaching six courses a term with no additional support.

When surveyed instructors were asked what would help most, the top answers were faculty training and support, evidence of GenAI’s impact, and funding. What if universities, funders, and HCI researchers worked together with instructors to make emergency pedagogical design sustainable for all instructors, not just the most privileged ones?

Check out our paper here and shoot me an email (lau@ucsd.edu) if you’d like to discuss anything related to it! And if you’re an instructor yourself, we’re building free resources and curriculum over at https://www.teachcswithai.org/.

12:07

Medieval Encrypted Letter Decoded [Schneier on Security]

Sent by a Spanish diplomat. Apparently people have been working on it since it was rediscovered in 1860.

11:07

Grrl Power #1455 – Tactical tactile [Grrl Power]

A normal two-handed sword weighs 5-8 pounds (granted, there’s a very broad range of what constitutes a “two handed sword”), whereas a bearing sword weighs 14-15 pounds and is roughly seven and a half feet long, including the handle. Not impossible to swing, of course, but probably foolish to actually wade into a battle with one, since even regular sized weapons and moderate armor will sap someone’s endurance pretty quickly. Bearing swords are, as far as I’m aware, purely ceremonial.

At least on our non-magical Earth. The Grrl-verse clearly has demons, oni, aliens, were-dinosaurs, all kinds of things that might actually be able to wield a sword on that scale. Dabbler’s “Soulreaver” sword is technically a vierhander (man, it’s been a while since she used that) since the handle is long enough for her to really apply some leverage on it if she needs to, but I’m not sure if mechanically, gripping a sword or a bat with 4 hands would really give you a lot of extra striking power, or if all those elbows would get in the way on the windup or backswing.

The sword Maxima is using was sourced from Dabbler’s treasure hoard, and clearly didn’t come from Earth, so it’s hard to say who it was originally forged for. All we can really tell about it is that whatever it’s forged from, it’s probably not all that much heavier than an equivalently sized steel sword (and it’s definitely not made of steel) because Sydney can lift it. IIRC, I think I said that the other sword Max picked out was made of Ultronium and weighed about 40 lbs.

As someone with ADHD, I know I can be distracted in the middle of a sentence when someone is talking to me. It leads to a lot of “Uh, yeah…” or “Oh… what?” responses, and has definitely made people think my hearing is a lot worse than it is. And if someone is giving me directions that are more complex than “last door on the left,” they may as well just pull a series of random words from the dictionary.

On that front, Sydney is actually usually pretty focused. It’s possible her meds are wearing off for the day and it’s getting close to bedtime.


Finally, here we go! I took the suggestion that I just use an existing panel for a starting point, thinking it would save time… I guess it technically did, but a 5 character vote incentive just isn’t the way to go.

Patreon, of course, has the actual topless version.


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

11:00

Mike Gabriel: KVM Support inside LXC Containers [updated] [Planet Debian]

Yesterday, I had to add support for running KVM virtual machines inside an LXC container. More as a reminder to myself, in case I ever have to do this again, here is the simple recipe:

LXC Container Config Adjustment

Enable lxc.autodev and configure a hook script to be executed after the initial /dev creation (updated 20260428: lxc.cgroup2.* instead of lxc.cgroup.*):

[...]

# Auto-create /dev nodes and add native KVM support to the LXC container
lxc.autodev = 1
lxc.hook.autodev = /var/lib/lxc/.hooks/lxc-hook.kvm-support
lxc.cgroup2.devices.allow = c 10:232 rwm
lxc.cgroup2.devices.allow = c 10:238 rwm
lxc.cgroup2.devices.allow = c 10:241 rwm

[...]

[added 20260408] On the internet, you can find a recipe that simply bind-mounts /dev/kvm from the host into the LXC container. However, this fails if the group ID of the POSIX group kvm differs between host and container.
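To illustrate why the bind-mount shortcut breaks, here is a minimal sketch of the group-ID mismatch (the GIDs and the user name below are hypothetical examples, not values from any real system): a bind-mounted /dev/kvm keeps the host's numeric group ID, but inside the container the name "kvm" may resolve to a different number, so the group rw bits on the node never apply to the container's kvm members.

```shell
#!/bin/sh

# Hypothetical /etc/group entries; real GIDs vary per system.
host_group_line="kvm:x:108:"        # kvm group on the host
ct_group_line="kvm:x:994:alice"     # kvm group inside the container

# The device node on a bind mount carries the host's numeric GID (108),
# while container processes in group "kvm" run with GID 994.
host_gid=$(echo "$host_group_line" | cut -d: -f3)
ct_gid=$(echo "$ct_group_line" | cut -d: -f3)

if [ "$host_gid" != "$ct_gid" ]; then
    echo "GID mismatch: host=$host_gid container=$ct_gid"
fi
```

With these example entries the check prints `GID mismatch: host=108 container=994`; creating the nodes inside the container's rootfs and chown-ing them there, as the recipe does, avoids the problem.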

LXC Hook Script for KVM Support Enablement

The following script I placed at /var/lib/lxc/.hooks/lxc-hook.kvm-support (on the LXC host!) and made executable:

#!/bin/sh

# Set up native KVM support in the LXC container: create the character
# device nodes inside the container's rootfs and hand them to the kvm
# group. The major:minor pairs match the lxc.cgroup2.devices.allow
# lines in the container config above.
mknod -m 0660 "${LXC_ROOTFS_MOUNT}/dev/kvm" c 10 232
chown :kvm "${LXC_ROOTFS_MOUNT}/dev/kvm"
mknod -m 0660 "${LXC_ROOTFS_MOUNT}/dev/vhost-net" c 10 238
chown :kvm "${LXC_ROOTFS_MOUNT}/dev/vhost-net"
mknod -m 0660 "${LXC_ROOTFS_MOUNT}/dev/vhost-vsock" c 10 241
chown :kvm "${LXC_ROOTFS_MOUNT}/dev/vhost-vsock"

10:07

Warm pistachios [Seth's Blog]

In terms of cost, serving a small ramekin of toasted pistachio nuts is a tiny portion of what an airline spends in transporting someone first class.

In fact, it’s such a relatively small expense that it’s easy to simply avoid it. Send the money to the bottom line and focus on the parts that are actually worth paying for.

Gratuitous bonuses send signals.

They tell the customer that you have the resources and confidence to pay attention to the little things.

They help distinguish extraordinary items from ordinary ones (after all, the folks in coach show up at the arrivals gate at exactly the same time).

And they deliver a story of status, one that’s internalized and often shared.

I’ve never seen a product or service that couldn’t be improved with metaphorical warm pistachios.

Pass the nuts.

09:49

Pluralistic: The enshittification multiverse (27 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



Space, awash in nebulae; a receding line of vast Enshittification poop emojis curves away to infinity, each mouth covered in a grawlix-scrawled black bar.

The enshittification multiverse (permalink)

It's official: you have my consent and enthusiastic blessing to apply "enshittification" to things that aren't digital platforms! Semantic drift is good, actually:

https://pluralistic.net/2024/10/14/pearl-clutching/#this-toilet-has-no-central-nervous-system

With that out of the way, let's talk about how enshittification can be usefully applied to gambits that worsen something in order to shift value from the users of that thing to the person doing the worsening.

Here's the crux: in life, there are many zero-sum situations in which others' pain is your profit. The most basic example of this is profit margins: as your profit margin climbs, so do the prices paid by others. The more money a customer gives you for whatever you're selling, the less money that customer has to spend on other things they want.

This is the fatal flaw in the economist's justification for surveillance pricing (when the price you're quoted is based on surveillance data about the urgency of your needs and your ability to pay): a seller who commands higher prices from a buyer deprives other sellers of that buyer's money.

The airline that knows you can't miss a funeral and also knows how much purchasing power is available on your credit card can charge you every cent you can afford – but that means that the coffee shop owner who normally sells you a latte in the morning will lose out on your business for months while you dig yourself out of that hole.

Tim Wu has a good example of this: imagine a world in which electricity utilities were unregulated and got to charge "market rates" for their products. Prior to the current wave of cheap, efficient solar, electrical power was a "natural monopoly." In nearly every circumstance, a given person would end up with just one source of power, and life without power was nearly unimaginable. In that situation, the power company's "rational" decision would be to charge you everything you could afford for the least electricity you could survive on: enough to keep your fridge and a few lights on. That means that you would be deprived of the value of, say, a clock radio and a coffee-maker, and the manufacturers of the clock radio and the coffee-maker would likewise suffer the loss of your business.

So the "monopoly" part is key to this story. The more alternatives you have, the harder it is to squeeze you on prices. Airport concessionaires can charge $12 for a Coke on the "clean" side of a TSA checkpoint because realistically you can't leave the airport and get a Coke elsewhere – and if you do, you can't bring it through the checkpoint.

Any source of lock-in becomes an invitation to shift value away from your customers and suppliers to yourself. High "switching costs" are always a precondition for enshittification – otherwise the people you're trying to enshittify will simply take their business elsewhere:

https://www.eff.org/deeplinks/2021/08/facebooks-secret-war-switching-costs

That's why market concentration is so central to the enshittification story: when the number of competitors in a sector dwindles to a cartel (or a duopoly or a monopoly) companies find it easy to fix prices so there's no point in shopping around, and they can capture their regulators and harness the power of the state to block other companies from entering the market with a better deal:

https://pluralistic.net/2023/02/05/small-government/

Now that we understand the role that switching costs, regulatory capture, and market concentration play in enshittification, let's put them together to propose a framework for applying enshittification to things other than digital platforms:

Enshittification happens when someone sets out to reduce your choices, and then uses that lock-in to make things worse for you in order to make things better for themself.

Note that this definition requires a degree of intent. Enshittification isn't just bargaining hard when you find yourself in a position of strength. It's what happens when you set out to systematically weaken other people's bargaining position in anticipation of a future opportunity to fuck them over in order to improve your own situation.

So if the business lobby bribes Republican state legislators to pass "right to work" laws that make it nearly impossible for workers to unionize, and then the businesses involved worsen their workers' pay and conditions, we can call that enshittification. If they can bind workers to noncompete "agreements" that make it illegal for the cashier at Wendy's to get $0.25/h more at the McDonald's, that's even more enshittifying:

https://pluralistic.net/2025/11/10/zero-sum-zero-hours/#that-sounds-like-a-you-problem

Or if shitty men lobby to end anti-discrimination laws (making it much harder for a single woman to survive on her paycheck) and to end no-fault divorce (to make it much harder for a woman to leave the husband she marries to survive in a world where it's legal to discriminate against her in the workplace), in anticipation of being able to be a shitty husband without losing their wives, they are enshittifying marriage (applying this to the effort to kill the concept of "marital rape" is left as an exercise for the reader).

This can also be applied to politics. Restrictions on immigration and out-migration are both preludes to state enshittification, since a population that can't leave for another state will, on average, put up with more abuse from their political classes without leaving. Tying your work visa to your employer is very enshittification-friendly:

https://prospect.org/2026/04/22/north-carolina-farm-stole-h-2a-visa-workers-passports-lawsuit-trump-immigration/

One of the questions I get most frequently is "what about AI and enshittification?" This is a complicated question! Obviously, AI is very enshittification-prone: as "black boxes" that do not produce reliable, deterministic outputs, AI products have a lot of intrinsic cover for their enshittifying behavior.

If you ask a chatbot to recommend a product and it steers you toward an inferior option that generates a higher commission for the company, who can say whether that was the chatbot cheating, or if it was a "hallucination"? Likewise, if you ask a chatbot to solve your problem and it does so in an inefficient way that burns a zillion tokens (which you have to pay for), is that the chatbot malfunctioning, or is that price-gouging?

https://pluralistic.net/2025/08/16/jackpot/#salience-bias

Beyond this, AI is very useful for plain old enshittification. Surveillance pricing – changing prices or wages based on the other person's desperation and ability to pay – is something AI is very good at:

https://pluralistic.net/2026/01/21/cod-marxism/#wannamaker-slain

And AI companies can enshittify their products in all the traditional ways: after a customer integrates AI in their lives and businesses in ways that are hard to escape, the AI company can raise prices, insert ads, and route queries to cheaper models that cost less to run and produce worse outputs.

But here's where there's a critical difference between enshittifying AI and enshittifying a profitable tech business like app stores or search engines. AI is the money-losingest project the human race has ever attempted. At $1.4 trillion and counting, the AI companies and their "frontier models" are so deep in the red that I can't see any way that any of these firms will survive:

https://pluralistic.net/2026/04/16/pascals-wager/#doomer-challenge

So, on the one hand, as these companies find themselves ever-more cash-strapped, they will be severely tempted to enshittify their products. But on the other hand, if these companies are doomed no matter what they do, then the enshittification will take care of itself when they go bankrupt.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Jakob Nielsen on reputation managers https://www.nngroup.com/articles/reputation-managers-are-happening/

#25yrsago EFF's sharing friendly music license https://web.archive.org/web/20010429045301/https://www.eff.org/IP/Open_licenses/20010421_eff_oal_pr.html

#25yrsago Speedle: what links are forwarded most online? https://web.archive.org/web/20010401084047/http://www.speedle.com/

#20yrsago RIP Jane Jacobs, urban activist https://web.archive.org/web/20061009063708/http://www.canada.com/topics/news/story.html?id=fe1de18f-6b6e-473d-b0cb-0cc422dcf661&amp;k=25935

#20yrsago Why fan fiction is so important https://nielsenhayden.com/makinglight/archives/007464.html#007464

#20yrsago California got its name from fanfic https://nielsenhayden.com/makinglight/archives/007464.html#122035

#20yrsago DMCA revision proposal will jail Americans for “attempting” infringement https://web.archive.org/web/20060502093524/https://ipaction.org/blog/2006/04/bill-hollywood-cartels-dont-want-you_24.html

#20yrsago Vista’s endless parade of warnings won’t create security https://www.schneier.com/blog/archives/2006/04/microsoft_vista.html

#15yrsago Passover poem about robots: “When We Were Robots in Egypt” https://reactormag.com/when-we-were-robots-in-egypt/

#15yrsago Naipaul’s rules for beginning writers https://web.archive.org/web/20110508152004/http://www.indiauncut.com/iublog/article/vs-naipauls-advice-to-writers-rules-for-beginners/

#15yrsago Rules for golfing during the blitz https://directorblue.blogspot.com/2011/04/stiff-upper-lip.html

#15yrsago New Zealand’s rammed-through copyright law includes mass warrantless surveillance and publication of accused’s browsing habits https://www.stuff.co.nz/technology/digital-living/4922854/Copyright-change-about-more-than-idle-threats

#15yrsago State Dept adding intrusive, semi-impossible questionnaire for US passport applications https://web.archive.org/web/20110427025422/https://www.consumertraveler.com/today/state-dept-wants-to-make-it-harder-to-get-a-passport/

#10yrsago A Burglar’s Guide to the City: burglary as architectural criticism https://memex.craphound.com/2016/04/25/a-burglars-guide-to-the-city-burglary-as-architectural-criticism/

#10yrsago EFF to FDA: the DMCA turns medical implants into time-bombs https://www.eff.org/files/2016/04/22/electronic_frontier_foundation_comments_cybersecurity_in_medical_devices_.pdf

#10yrsago James Clapper: Snowden accelerated cryptography adoption by 7 years https://web.archive.org/web/20160425161451/https://theintercept.com/2016/04/25/spy-chief-complains-that-edward-snowden-sped-up-spread-of-encryption-by-7-years/

#10yrsago Australian MP sets river on fire https://web.archive.org/web/20170518083229/https://www.yahoo.com/news/australian-politician-sets-river-fire-protest-fracking-064640159.html

#10yrsago Fantasy accounting: how the biggest companies in America turn real losses into paper profits https://www.nytimes.com/2016/04/24/business/fantasy-math-is-helping-companies-spin-losses-into-profits.html

#10yrsago Leading Republicans send letters in support of Dennis Hastert, pedophile https://www.chicagotribune.com/2016/04/22/more-than-40-letters-in-support-of-hastert-made-public-before-sentencing/

#5yrsago Guess who's doing a usury in Iowa https://pluralistic.net/2021/04/24/peloton-usury/#going-nowhere-fast

#1yrago Every complex ecosystem has parasites https://pluralistic.net/2025/04/24/hermit-kingdom/#simpler-times


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

08:49

Microboned [Penny Arcade]

New Comic: Microboned

07:35

Kernel prepatch 7.1-rc1 [LWN.net]

Linus has released 7.1-rc1 and closed the merge window for this release.

Things look fairly normal, although we do have a few different projects to cull some old hardware support to help minimize maintenance burden: phasing out i486 support (configs deleted, code deletions to follow) and independently starting to remove some really old networking hardware support, and removing some SoC support that never went anywhere.

But we're more than making up for any stale code removal with all the new features and code added, so the diffstat still shows many more lines added than removed.

06:14

Girl Genius for Monday, April 27, 2026 [Girl Genius]

The Girl Genius comic for Monday, April 27, 2026 has been posted.

04:49

Sahil Dhiman: Weekly Notes [Planet Debian]

Weekly notes is a genre where people chronicle their week on their blogs. Weekly notes are like a window. I love going through these, as they’re a steady stream of week-on-week happenings and progress in people’s lives. It shows people making efforts to improve: from basic things like learning to swim or drive, to long-term goals such as vacations, or moving house, state, or even country — and, in some cases, internal monologues, thoughts, and anxieties. These are a constant nudge for me to work on myself, as they do.

These are the weekly notes I read nowadays:

Most are there on Thejesh’s weekly notes planet which autoupdates when new posts arrive, usually starting on Friday evenings, and by Monday, almost everyone has posted.

It reminds me of a word from The Dictionary of Obscure Sorrows, Kenaway:

the longing to see how other people live their lives when they’re not in public; wishing you could tune in to the raw feed of another human existence, in all its messiness and solitude—shimmying in place while brushing their teeth, squabbling over where to put the shoes, talking out their problems on solitary commutes—if only to give you something to compare your own life against, and figure out whether you’re bizarrely normal or normally bizarre.

Close enough.

04:00

Russ Allbery: Review: What We Are Seeking [Planet Debian]

Review: What We Are Seeking, by Cameron Reed

Publisher: Tor
Copyright: 2026
ISBN: 1-250-36474-4
Format: Kindle
Pages: 339

What We Are Seeking is a bit hard to classify beyond science fiction. I think I would call it anthropological science fiction, but it's also a first contact story and a planetary colony story. It is a standalone novel (well, so far as I know; see later in the review for caveats). This is Cameron Reed's second novel after the excellent and memorable cyberpunk novel The Fortunate Fall, first published in 1996 under Reed's former name of Raphael Carter.

John Maraintha is a doctor from the world of Essius. He took what he thought was a temporary job on the Free Ship Edgar's Folly, where he's endured considerable culture shock. As the novel opens, John learns that the colonists on Scythia have requested a translator to talk to one of the native life forms, and a doctor since they're down to only one. John will be that doctor. The captain has decided, and by the rules of the free ships, John does not get a choice in the matter.

The Scythian colony is about four hundred people, now located in a desert climate since the complex native life forms destroyed their previous settlement. The colonists are split between Ischnurans and Zandaheans, two other human civilizations from the scatter of colony worlds left after Earth embraced AIs (aiyis here) and turned inward. Both of those groups marry, something John considers a moral abomination. Neither of them seems likely to understand Essian sexual ethics. More devastatingly, John had intended to spend some time as a ship doctor and then return home to a new place in Essian society. Once he lands on Scythia, the chances of that are gone; it is highly unlikely any ship would pick him up again and take him home.

I have been trying to find the right books to compare What We Are Seeking with ever since I read it. The best I've come up with are Ursula K. Le Guin (particularly The Dispossessed), Eleanor Arnason's A Woman of the Iron People, and Becky Chambers's To Be Taught, If Fortunate. The start of the book felt like an intentional revisiting of an earlier era of science fiction, with somewhat updated science and politics, but the last half of the book, where the action picks up considerably, is a meditation on gender, social systems, religion, and small-group politics. All of that is mixed with biological exploration and a first-contact story with some quite-alien aliens.

This is the sort of novel where the protagonist's culture is as foreign to the reader as any of the other cultures he encounters, so the reader is assembling several jigsaw puzzles at once. John is dropped into an established colony with its own social norms and established hierarchies. The one other outsider, the translator Sudharma Jain, is, as his name implies, a Jain who keeps very strict religious observances. Half of the colony is from something akin to a fundamentalist Christian religious sect that practices patriarchy and strict marriage codes. The other half is more gently sexist (but still sexist) and has its own tradition of a third gender that becomes central to the story. John, meanwhile, is a strong believer in the Essian approach to social organization: Any two partners of any gender freely have sex by mutual consent and without obligation, and family is based solely on blood relations. These beliefs do not fit comfortably together, even when people are trying (as they mostly do) to be welcoming.

The first half of this book is very slow. This gives all of the characters space to breathe and become comfortable, and the characterization is superb, but it is a book to start when you're in the mood for something slow and observational. There is a plot that gradually becomes apparent, or rather there are several plots that are intertwined, but tension and urgency are mostly reserved for the second half of the book. Instead, the book opens with a lot of close observation of alien flora and fauna and the untangling of subtle social dynamics among the Scythians.

There is also a visitor from Earth, much to the distress of the Scythians. Earth presence means the ships will not return and the colony may be cut off from any sort of technological resupply. Despite speaking a common language, that visitor is as mutually alien to the other groups as they are to the native flora. Her life is fully integrated with aiyis, giving her essentially godlike powers and the ability to turn off inconvenient emotions and disregard anything she doesn't want to see. What she and the Earth aiyis are doing on the planet is one of the early mysteries.

The dialogue in this book is truly excellent. Each character has their own voice, there are fascinating digressions on different words that lead to tidbits of world-building, and some of the culture-specific idioms are delightful.

"I'm making a mess of this. None of that matters. Let me fall out the window and come in the door again. This is how my story ought to start:"

The challenges for the characters in this story are slow but deep ones: belonging and self-definition, the conflict between cultural tradition and personal circumstance, and the sacrifices required to live with small groups in situations where civil war is viscerally attractive. It has one of the most comprehensive and fascinating treatments of transgender issues that I've read in science fiction. Its commentary on current politics is subtle and estranged in the way that science fiction does best, but still pointed and satisfying. And, well, there are passages like this that I absolutely adore:

"I wouldn't go that far. It could be they are right, the universe we see exists because a mind like ours created it — at least, a mind enough like ours that we can say it wants one thing and not another, and when it acts it does so with intent. That's as good an idea as any. But it is certainly not plausible that such a being believes that people everywhere should marry, or that men should never visit men, or no one should become a jess. Look at what they have created. The universe could have been nothing at all, or one atom of hydrogen floating in a void, or a diamond crystal infinite in all directions, if their mind cared for simplicity or tidiness. Instead we have stars and planets and black holes and nebulas. It could have all been cold and dead, but there is life. They could have made one species for each world, or just a few, which could have stayed the same forever, but instead we have millions and millions, all of which are changing every moment, varying among themselves and boiling off in all directions. Such a god is like an artist who fills up a library of sketchbooks with their drawings of strange creatures, and when every scrap of paper in the place is used up, goes back with a different color ink and scribbles over them again. They are obsessed with variation — they gorge themselves with it and never grow full. Do you really think a mind like that could want us all to live in the same way?"

I had one problem with this book, though, and for me it was a big one: There is no ending. Reed effectively builds tension, gets me caring about all of the characters, sets up several problems, starts down a path towards resolution, and then the book just... ends.

Long-time readers of my reviews will know that I'm a denouement fanatic. I want the scouring of the shire, I want the chapter set in the happily ever after, I want the catharsis of an ending. This made me so grumpy!

To be clear, this is not sequel bait (at least so far as I can tell). I can write a philosophical defense of the ending. The types of problems and lives that Reed set up don't have clear endings; this is, to some extent, the point. We muddle through, and then those who come after us muddle through some more, and the cumulative effect is called human civilization. And there is some denouement; Reed doesn't leave the reader at a cliffhanger or anything that egregious.

But still, I wanted the happy ending, even though that was unrealistic for the style of story this is, because I'm a happy ending reader. This is not an ending sort of book; it's the sort of book where I get a sinking feeling at the 95% mark because there aren't enough pages left for the number of remaining unresolved problems. I've gotten less annoyed in the days since I finished the book, and I can appreciate the thematic point made by how the book ends, but I still feel like it's worth an advance warning if you're a reader like I am.

I would be delighted by a sequel, but it didn't feel like that was the intent.

Apart from that, this was both excellent and rather unlike a lot of current science fiction. I think the closest comparison I can make among recent novels I've read is Sue Burke's Semiosis. What We Are Seeking has a similar sort of world-building, but I liked these characters so much more. It felt like a classic literary science fiction novel, but very much written in 2026. Highly recommended, just beware of the lack of closure.

Content notes: Sexism, homophobia, stomach illness, and some religious abuse.

Rating: 8 out of 10

Sunday, 26 April

23:42

Link [Scripting News]

I just said this to Claude: "I want to show people that RSS isn't just for news and podcasting. It can be for mindless social media rants too." ;-)

19:28

Dirk Eddelbuettel: RProtoBuf 0.4.27 on CRAN: Upstream Adjustment [Planet Debian]

A new maintenance release 0.4.27 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol. The new release is also already available as a binary via r2u.
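RProtoBuf binds Google's C++ ProtoBuf library rather than reimplementing the wire format, but the heart of that format is simple enough to sketch: integer fields are encoded as base-128 "varints", seven payload bits per byte with a continuation bit. A minimal, illustrative Python version (not part of RProtoBuf, just a sketch of the encoding the library speaks):

```python
def encode_varint(n: int) -> bytes:
    """Encode a non-negative integer as a ProtoBuf base-128 varint."""
    out = bytearray()
    while True:
        byte = n & 0x7F          # low seven bits
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_varint(data: bytes) -> int:
    """Decode a base-128 varint back into an integer."""
    result = shift = 0
    for byte in data:
        result |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            break
    return result

assert encode_varint(300) == b"\xac\x02"   # the example from the ProtoBuf docs
assert decode_varint(encode_varint(2**32)) == 2**32
```

The two-byte encoding of 300 (`0xac 0x02`) is the canonical example from the ProtoBuf encoding documentation; everything else (field tags, wire types, length-delimited records) is layered on top of this primitive.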

This release adjusts to a change upstream. Luca Billi noticed that upstream removed some fields from FieldDescriptor, filed an issue, and followed up with a spotless PR. No other changes.

The following section from the NEWS.Rd file has all details and links.

Changes in RProtoBuf version 0.4.27 (2026-04-26)

  • Adjust to FieldDescriptor API changes in ProtoBuf 3.4 (Luca Billi in #114 fixing #113)

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

18:00

Urgent: Balcony solar power [Richard Stallman's Political Notes]

US citizens: call on your state governor to legalize balcony solar power.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

Your state's agency-contact information is at USA.gov.

Please spread the word.

Urgent: Deportation in the US [Richard Stallman's Political Notes]

US citizens: phone your senators and call on them to cancel the funding for the deportation thugs and eliminate that agency: +1-202-998-6094.

National Nurses United recommends this short script for when you call:

Hi, my name is _______. I am a constituent and I’m a nurse/other health care worker/patient. It is unconscionable, after a year of constant violence and militarization of our neighborhoods by the Trump administration, that Congress would consider sending billions more in funding to continue this harm. I am calling today to urge my senator to vote NO on the Senate Homeland Security reconciliation funding package that includes another $70 billion in funding for ICE and CBP, rescind the already $75 billion allocated to ICE and $65 billion for CBP in last year’s Republican budget reconciliation bill, and take immediate action to abolish ICE.

16:07

Link [Scripting News]

Imagine you were putting up a skyscraper in Manhattan. I lived in an area of the city called Billionaire's Row for nine years, where I saw quite a few huge buildings go up from my living room window on the 50th floor. Now imagine you used a different plumbing system in every apartment, over 140 stories, with up to 15 residences per story. Different wiring. All the rooms are different shapes. How could you maintain such a building? I think ChatGPT might be able to do it but no human could. We need patterns to allow us to understand big things.

Teaching Claude about humans [Scripting News]

What you don't hear about AI is that it doesn't know how human minds work, what our limits are, what we can do that they can't. It has no memory. This was hard to believe, but you have to tell it to keep things in some kind of memory, usually a Markdown file, and then tell it to read that file back later. These are things humans just do. I may have to say something to you two or three times, but if you're a normal human you will remember it.
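The memory-file workflow described above can be sketched in a few lines of Python; the file name memory.md and the bulleted note format are hypothetical conventions for illustration, not anything an assistant requires:

```python
from pathlib import Path

# Hypothetical memory file an assistant is told to maintain between sessions.
MEMORY_FILE = Path("memory.md")

def remember(note: str) -> None:
    """Append a note so it survives the current session."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def recall() -> list[str]:
    """Re-read every stored note, e.g. at the start of a new session."""
    if not MEMORY_FILE.exists():
        return []
    return [line[2:].strip()
            for line in MEMORY_FILE.read_text(encoding="utf-8").splitlines()
            if line.startswith("- ")]

remember("users prefer one plumbing system per building")
print(recall())
```

The point of the sketch is how little machinery is involved: the "memory" is just a file the model is explicitly told to write and later told to read, the thing a human does without being asked.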

I know there are places you can leave an instruction to read it when it starts, but sometimes it doesn't do it. It could never, in its current state, figure out how to make a product that people want to use. Nor would a human be able to easily read the code it generates unless you work hard at teaching it how that works, and if you have to do that, you might as well do it yourself.

It never looks for prior art, in some contexts -- but in others, it's encyclopedic about prior art. You might find a function named returnError in one place that takes a string argument, where most of the other instances take an object that contains a string. How do I, as a human, work in an environment like that?
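A hypothetical sketch of the kind of shim a human ends up writing to cope with this; the name normalize_error and both call shapes are invented stand-ins for the returnError inconsistency described above:

```python
# Hypothetical: some call sites pass a bare string, others pass a dict
# that wraps the string -- the inconsistency described above.
def normalize_error(err) -> dict:
    """Coerce either call shape into the one most call sites use."""
    if isinstance(err, str):
        return {"message": err}
    if isinstance(err, dict) and "message" in err:
        return err
    raise TypeError(f"unsupported error shape: {type(err).__name__}")

assert normalize_error("disk full") == {"message": "disk full"}
assert normalize_error({"message": "disk full"}) == {"message": "disk full"}
```

A shim like this papers over the mess at the boundaries, but it is exactly the sort of maintenance pattern the skyscraper analogy warns about: the real fix is one consistent signature.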

The users who vibe code and think they know how to make code people will want to use are making the same mistake student programmers make: they come out of school having done only student projects, which have very limited objectives. The real world is far more demanding. The real world populated by humans, that is. Just some random thoughts as I try to create usable code working with Claude.

Perhaps what's needed is a Developing Better Developers program for pseudo-humans.

15:21

Link [Scripting News]

Bluesky is having trouble keeping their network running. As a developer I empathize. As a user, it's beginning to be a problem. I am using it the way I used to use Twitter, taking notes for future blog posts, sharing a few linkblog feeds, DMing with people I work with, and want/need to keep going. I know about its lock-in problems, but at this time there's no open alternative that has the same collection of users who are easy enough to find. So if it stays unreliable, I have to think about what to do about that. Every time I get a 403 Forbidden, I stop and think: is this the time to write a post? This was the time.

14:00

Aurelien Jarno: Running upstream OpenSBI on SpacemiT K1 [Planet Debian]

The SpacemiT K1 is a rather interesting RISC-V SoC, found for instance on boards like the Banana Pi BPI-F3 board. It's one of those platforms that looked promising on paper, but took a bit of time before things really started to move upstream. Things have clearly accelerated over the last few months.

Linux 7.0 brings, among other things, PCIe support, making the board quite capable as a development board. SD card, CPU thermal sensor, and cpufreq support are already in the pipe.

Unfortunately the situation is less advanced on the firmware side. There is only very basic support for the SpacemiT K1 in U-Boot for the second stage, and initial SPL support has been posted on the mailing list, but has not yet been merged. In practice, this means you still have to rely on the vendor U-Boot, which is based on the rather old 2022.10 release.

On the other hand, OpenSBI does have upstream support for the SpacemiT K1, however it is not compatible with the vendor U-Boot, mostly due to device tree differences.

This can be addressed by applying a few patches to the vendor U-Boot, which I have published in a git tree in the k1-bl-v2.2.y-opensbi branch (technically this could also be handled on the OpenSBI side, but I prefer using a vanilla upstream OpenSBI version). The first two patches update the configuration to get closer to the upstream U-Boot defaults, and to enable some configuration options for the Milk-V Jupiter board, which stores its firmware in SPI NOR flash instead of eMMC as on the Banana Pi BPI-F3. The following patches update the device tree by adding extra compatible entries to several devices, as expected by the upstream kernel and OpenSBI (thanks to Troy Mitchell for the hint about the UART change), and update the CPU riscv,isa properties. Finally, an additional patch adds the SpacemiT P1 PMIC to the device tree, which is required for the OpenSBI reboot patchset I recently posted (this is currently done only for the Banana Pi BPI-F3 and Milk-V Jupiter boards, but extending it to other boards should be straightforward).

Building this U-Boot version is as simple as running this command in the source directory:

make k1_defconfig && make

On a Banana Pi BPI-F3 board, the resulting U-Boot can be flashed with:

echo 0 > /sys/block/mmcblk0boot0/force_ro
dd if=FSBL.bin of=/dev/mmcblk0boot0 bs=512 seek=1
dd if=u-boot.itb of=/dev/mmcblk0p1

Building upstream OpenSBI is also fairly simple, and can be done by running this command in the source directory:

make PLATFORM=generic

On a Banana Pi BPI-F3 board, the resulting OpenSBI can be flashed with:

dd if=fw_dynamic.itb of=/dev/mmcblk0p2

Note that the vendor U-Boot version is patched to install OpenSBI in a separate partition instead of embedding it, as upstream U-Boot does. While this works well on the Banana Pi BPI-F3, the corresponding partition in the Milk-V Jupiter SPI NOR flash is too small for the upstream OpenSBI version, and can't be easily resized without breaking compatibility. To address this, the branch k1-bl-v2.2.y-opensbi-embedded contains an additional patch (a bit hackish, I admit) to somehow restore the upstream approach. The build process remains simple: first build OpenSBI with the following command:

make PLATFORM=generic

Then build U-Boot, specifying the path to the just-built OpenSBI firmware:

make k1_defconfig && make OPENSBI=/path/to/opensbi/build/platform/generic/firmware/fw_dynamic.bin

On a Milk-V Jupiter board, the resulting combined U-Boot/OpenSBI can be flashed with:

modprobe mtdblock
dd bs=4k if=FSBL.bin of=/dev/mtdblock2
dd bs=4k if=u-boot.itb of=/dev/mtdblock5

This combined U-Boot/OpenSBI can also be used on a Banana Pi BPI-F3, using the same flashing procedure as above, while skipping the OpenSBI part (although running it won't cause any issue, it will simply be unused).

All of this is admittedly a bit hackish, but enabling the use of upstream OpenSBI is already one step forward. Hopefully, in a few months, we will be able to rely entirely on upstream U-Boot.

10:21

Bad money… [Seth's Blog]

The expression “bad money crowds out the good” refers to Gresham’s Law. It means that once lesser-quality and counterfeit currency begins to be traded, people hoard the good stuff and only trade the poor substitutes.

Social media platforms fall into a trap like this when they seek to grow. For example, at the beginning, Substack had a very high signal-to-noise ratio–plenty of good ideas, and so readers were happy to expect that an email from them or a recommendation from the platform was worthwhile. It didn’t get put in the spam or promo folder, because it wasn’t spam.

But now, having run out of the highest-quality content, the site is making it easy for hustlers to import vast lists of email addresses and quickly grow (or appear to grow) their lists. I’m getting unsolicited and unwanted “subscriptions” often, and the easiest thing to do is just send all of their messages to spam. Which hurts the original good currency. Once the bad “money” shows up, it attracts more bad money.

The same thing happens when trusted sources start padding their content with AI slop, or when a small business inserts a few low-value, high-margin items into their sampler pack.

Attention is precious. Trust is even more so.

When you trade them both for growth, it’s inevitable that you’ll fade away.

03:14

Link [Scripting News]

Knicks won in a blowout. As disturbing as Thursday's game was, today we are feeling no pain. Go Knicks! ;-)

Saturday, 25 April

15:42

Link [Scripting News]

I wrote about Jeopardy, Firefox, Matt, Silicon Valley and the writer's web in a long comment on Doc's blog. Here's a quote. "What we really need is interop. If the source is free that’s great. But right now we have silos everywhere and I want WordPress, perhaps along with Firefox to help us boot up the writer's web."

12:49

Pluralistic: Ada Palmer's "Inventing the Renaissance" (25 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



The U Chicago Press cover for Ada Palmer's 'Inventing the Renaissance.'

Ada Palmer's "Inventing the Renaissance" (permalink)

Ada Palmer may just be the most bewilderingly talented person I know: a genius sf writer, incredible librettist and singer, wildly innovative educator, and a leading historian of the Renaissance. Last year, she published her magnum opus, Inventing the Renaissance, a stunning book about so much more than history:

https://press.uchicago.edu/ucp/books/book/chicago/I/bo246135916.html

All of my friends seem to be writing their magnum opuses these days! When (modern) historian Rick Perlstein and I did an event last year for my Enshittification tour, he told me he'd just finished his 1,000 page (ish? I may be misremembering slightly) history of the American conservative movement. And I recently had dinner with China Mieville, who told me he'd just turned in the manuscript for a novel he'd been trying to figure out how to write all his life.

I can't wait to read these books! And I couldn't wait to read Inventing the Renaissance, and I would have been much quicker off the mark but for the exigencies of book tours and books due and so on – but I've been reading it for the past two months or so, and I think I've pitched it about a hundred times to strangers and friends as I savored it, because it's just that good.

Inventing the Renaissance isn't a work of history, it's a work of "historiography" – the study of how histories get written and rewritten. Palmer's point here isn't to make us merely understand the Renaissance – she wants us to understand how the idea of a Renaissance – a rebirth out of a "dark age" into a "golden age" – has been used, abused, created and demolished, for centuries and centuries, including during the centuries when the Renaissance was actually underway.

Palmer teaches Renaissance history at the University of Chicago, where she is legendary for a unique annual pedagogical exercise in which she leads her students through a weeks-long live-action role-playing game that re-enacts the election of the Medicis' Pope. Every student is given a detailed biography of their character's position, goals, proclivities and history, and for weeks, the students scheme, ally, betray and assassinate each other. At the climax, the students take over the university's faux-Gothic cathedral, dressed in Renaissance drag (Palmer has a Google alert for theater companies that are selling off their costumes, and her tiny office at the university overflows with racks of cardinals' robes and other period garb), and they invest a Pope:

https://pluralistic.net/2021/10/17/against-the-great-forces-of-history/

This exercise is nothing short of genius, and the students who experience it often report that it is life-changing. That's because the final candidates are never quite the same, nor are the cardinals who cast votes for the winner. And yet, there are certain bedrocks that never shift, including the fact that Italy is always invaded by some of the factions involved in the election, though which cities burn also changes.

The point of this exercise is to expose the students to the power and limits of both "great historical forces" and the human agency that every one of us has within the envelope defined by those forces. Palmer wants her students to get a bone-deep understanding that while every moment has great forces bearing down on it, the people of each moment have an enormous amount of leeway to channel the floodwaters that history will unleash. From the servant who bears a message from one great power to another, up to those great powers themselves, each person guides the course of history, even if they can't halt some of its outcomes.

Though Palmer unpacks this exercise and its meaning and results in the final part of her magnum opus, this message about forces and people is really the key to her historiography. She develops these themes in the most charming, accessible manner imaginable, weaving her own journey into history with her accounts of how different eras consciously created and deployed the idea of "the Renaissance" and how these ideas were bolstered, undermined, or ultimately demolished by new evidence. You could not ask for a better account of why there is not, and can never be, a single, canonical "history" of an era or a moment. There will always be multiple histories, overlapping each other, warring with one another, supplanting each other, or being revived as "lost" histories that reveal a truth that "they" have buried.

This is such an ambitious book, and the ambition pays off in so many ways. Take the book's structure: there's a long middle section in which Palmer describes how more than a dozen figures from the Renaissance experienced their era, with many overlapping events and timelines. Palmer's sensitive, beautifully researched and written accounts of the lives of these figures – highborn and lowly, sinister and virtuous – highlights the contradictions of this centuries-long "moment" we call "the Renaissance" and shows us how those contradictions can't ever be resolved, only acknowledged and understood.

This is Palmer the novelist, blending seamlessly with Palmer the historian. Palmer is a close literary – and personal – ally of the equally brilliant sf/fantasy writer Jo Walton, whose work has mined classical and Renaissance history to great effect since she and Palmer struck up their friendship. First, there were Walton's "Philosopher Kings" books, a three-book long thought experiment in which every person of every era who ever dreamed of living in Plato's Republic is brought through time and space to the doomed volcanic island that will someday give rise to the story of Atlantis, to try out Plato's ideal society for real:

https://memex.craphound.com/2015/01/13/jo-waltons-the-just-city/

Then there was Lent, Walton's story of the fanatical reformer Savonarola, who is forced to re-live his life over and over, with breaks in hell where he is tormented by his failure:

https://web.archive.org/web/20190516170659/https://www.latimes.com/books/la-ca-jc-review-jo-walton-lent-20190516-story.html

And this June, she'll bring out Everybody's Perfect, a novel that uses Palmer's trick of telling a story from many viewpoint characters, each of whom perceives the events so differently that their versions can't really be reconciled, except by understanding that there is no one history and there cannot be one history. There are only the histories, ever changing. The omnipotent third person narrator is a lie. I don't know if Palmer got this idea from Walton, or if Walton was inspired by Palmer, but it is a wonderful living example of how intellectual and creative movements (like those that are attributed to the Renaissance) feed one another.

One of Palmer's areas of specialty is free speech and censorship. Along with Adrian Johns, Palmer and I co-taught a grad seminar called "Censorship, Information Control, and Information Revolutions from Printing Press to Internet" that connected Ada's work to the current battles over online speech:

https://neubauercollegium.uchicago.edu/research/censorship-information-control-and-information-revolutions-from-printing-press-to-internet

Palmer wants us to understand that the majority of censorship is self-censorship – that the Inquisition could only intervene in a tiny minority of cases of prohibited thought and word, and they had to rely on key people – printers, for example – anticipating the Inquisitors' tastes and limiting their speech without an Inquisitorial edict (if this seems relevant to the Trump administration's "war on woke," then you're clearly paying attention):

https://pluralistic.net/2024/02/22/self-censorship/#hugos

Those correspondences between the deep historical record and our current moment make Inventing the Renaissance extremely important and timely – a book hundreds of years in the making, and bang up to date.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Gloating NYT editorial about the dotcom crash https://www.nytimes.com/2001/04/23/opinion/editorial-observer-after-the-fall-the-new-economy-goes-retro.html

#20yrsago RIAA sues family that doesn’t own a PC https://www.techshout.com/riaa-sues-local-family-without-computer-for-illegal-music-file-sharing/

#15yrsago Righthaven copyright troll loses domain https://web.archive.org/web/20110425035158/http://www.domainnamenews.com/legal-issues/righthavencom-invalid-whois/9232

#15yrsago Steampunk Venetian mask https://bob-basset.livejournal.com/160226.html

#5yrsago John Deere's dismal infosec https://pluralistic.net/2021/04/23/reputation-laundry/#deere-john

#5yrsago Foxconn's Wisconsin death-rattle https://pluralistic.net/2021/04/23/reputation-laundry/#monorail

#5yrsago Laundering torturers' reputations with copyfraud https://pluralistic.net/2021/04/23/reputation-laundry/#dark-ops

#1yrago Sarah Wynn-Williams's 'Careless People' https://pluralistic.net/2025/04/23/zuckerstreisand/#zdgaf


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026
  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027
  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
  • A Little Brother short story about DIY insulin. PLANNING.


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

10:21

Breathwork [Seth's Blog]

[Off topic, but I hope it might be useful]

Mindfulness can improve your life. So can stillness and spiritual grounding. This is not a post about that.

Breathing is an architectural challenge and a chemical necessity.

We breathe about 20 pounds of air a day (and if you’ve ever tried to weigh air, you can imagine that this is quite a bit.) Why bother?

The body is fueled by a series of chemical reactions, and most of them require the right balance of oxygen and carbon dioxide. The body is finely tuned to be aware of the available quantity of each, and reacts accordingly.

We evolved to have a particularly complicated system for ingesting air. We have two nostrils and a mouth. Thanks to speech and other requirements, the mouth is well suited to rapid inhalations and exhalations.

Which is a problem.

The first lesson of James Nestor’s book is simple: Shut your mouth.

Spend three days breathing only through your nose. Even when you work out. Especially then. (Except swimming. I tried. It doesn’t work.)

And consider slightly taping your mouth when you sleep. Just a small piece of surgical tape, about a half inch across–right in the center. Put some lip balm on before applying so it won’t irritate you. Don’t do this if you have apnea or other issues, or a doctor who suggests against it. It’s a very small piece of tape, easily removed.

That’s it. Three days.

Nestor spends hundreds of pages explaining a huge range of benefits and volumes of peer-reviewed research. Some of it might be a bit overblown, some is surprising, but all of it makes sense.

But you don’t need a Ph.D. to determine how it feels after three days. It’s like discovering you’ve been using the wrong door to get into and out of your house.

I had such a good experience that I felt like it was worth sharing. Breathe through your nose, small sips, not gulps. You may find that you sleep better, snore less, run further, and are less stressed.

No one told me. Now we know.

06:07

Russ Allbery: Review: The Genocidal Healer [Planet Debian]

Review: The Genocidal Healer, by James White

Series: Sector General #8
Publisher: Orb
Copyright: 1991
Printing: May 2003
ISBN: 0-7653-0663-8
Format: Trade paperback
Pages: 255

The Genocidal Healer is the eighth book in James White's medical science fiction series about the Sector General hospital. As with the rest of the series, detailed memory of the previous books is not required and the books could be read out of order if you didn't mind spoilers.

I read this as part of the Orb General Practice omnibus.

Surgeon-Captain Lioren is a Tarlan doctor who was in charge of the medical response to a newly-discovered civilization. The aliens were suffering from an apparently universal plague and an ongoing vicious war waged entirely through hand-to-hand combat, putting them on the edge of extinction. Lioren rushed the distribution of a possible cure against the advice of the doctors working on developing it, with catastrophic results. As The Genocidal Healer opens, Lioren is insisting on a court-martial in the hope of receiving the sentence it believes it deserves and was denied: death.

(It pronouns are the convention in the Sector General series for all alien races and formal discussions, because even someone prone to bouts of gender essentialism such as White understood the need for avoiding gender assumptions in a science fiction medical context.)

Predictably, both Sector General and the Monitor Corps that technically runs the hospital are flatly unwilling to execute Lioren. Instead, it is assigned as a new apprentice in the psychology department under the legendary O'Mara, where it is ordered to investigate the psychological fitness of a senior doctor named Seldal. This leads it to talk to Seldal's patients, which in turn leads to a challenging set of ethical dilemmas.

The first five chapters (and more than sixty pages) are the story of Lioren's trial and a recounting of the events on Cromsag. The series is full of medical and cultural puzzles like this, and usually I like them, but I thought this one was less successful. We know the vague (and horrible) outline of the ending in advance, and the massive simplification and artificial universality that is required to make this puzzle work is particularly blatant. A universally infectious disease is more of a fiction plot than a believable biological concept, and the number of failures of communication, analysis, and misunderstanding that have to line up to create White's predetermined outcome were a bit much for me.

Once the story gets past that and into Lioren's psychological work, the novel improves. Lioren is guilt-ridden and irrational, but also rather arrogant about his guilt and his concepts of professional responsibility in a way that I think mostly worked. Most of the novel consists of Lioren slowly discovering that people like him and enjoy talking to him, much to his bafflement. In that, it has the gentle kindness and sense of universal basic decency that is characteristic of this series. There are, of course, medical puzzles to solve, although this time they are primarily psychological in nature. Various characters from previous books make an appearance, but White re-explains their background in sufficient detail that you don't need to remember (or have read) those previous books.

There are a lot of similarities between this book and the previous one, Code Blue—Emergency. Both feature nonhuman viewpoint protagonists and amusing descriptions of human facial expressions from an alien perspective. Both feature protagonists with overly rigid ethical structures that partly clash with the generally human policies of Sector General. The Genocidal Healer is a bit more subtle and nuanced, although a lot of Lioren's psychological evaluation rests on an ethical difference that I found somewhat unbelievable. This book, though, tackles a subject the previous book did not: religion. The treatment isn't horrible, but I have some complaints.

My primary issue is that Lioren, who starts as an atheist, does extensive research into religion to help a patient and then starts making statements summarizing the religious beliefs of the majority of known species that are just... Christianity. As someone raised Christian, I recognized it immediately as the sort of abstracted Christianity that Christians claim is universal while completely ignoring the opinions of the adherents of any other religion.

Key components of this majority galactic religious pattern, according to Lioren, include an omnipotent and omnibenevolent creator god, a religious figure who preaches forgiveness and mercy and is persecuted, and emphasis on redemption. This simply is not some abstract universal religion. This is just Christianity in disguise. Even in religions that have some of those elements in their traditions, they do not get the same emphasis and are not handled the way that Lioren describes them. I therefore found Lioren's extended discussions of religion rather annoying, since he kept claiming as relatively universal principles beliefs that are not even held by the majority of religious adherents on Earth, let alone a wildly varying collection of alien races with entirely different biology and societal constructions. It caused a lot of problems for my suspension of disbelief, on top of the annoyance at this repetition of, frankly, Christian propaganda.

From that research, Lioren moves on to theodicy (the problem of evil). The interesting part of this is White's earnest portrayal of a doctor's approach to societal problems: a desire to find workarounds and patches and fixes for anything that makes people unhappy, whether medical or social. It makes sense, given the horrible biological hands that some of the aliens in this series have been dealt, that they would question the idea of a benevolent god, so this philosophical digression is justified in that sense. But you might guess that a mid-list science fiction author is not going to say something new about one of the oldest problems in Christianity, and indeed he does not. Lioren arrives at the standard handwaving about the unknowability of divine intent, which I found tedious to read but at least not fatal to the plot.

White, thankfully, doesn't take the religious material too far. The characters recognize how sensitive an issue religion is in a hospital, Lioren never adopts religion fully, and the resolution of the plot is as much biological as philosophical. White is going somewhere with the introduction of religion, and although some of the path there annoyed me, I think the destination worked. White was from Northern Ireland, and therefore well aware of the drawbacks of religion, and he abhorred violence (hence Sector General as a setting), so the reader is in better hands with him than with most authors who might attempt this plot.

I think I know a bit too much about religion to be the best audience for this entry in the series, and I'm not sure the introductory five chapters quite worked. But as with all of the other books in the series, this kept me turning the pages and I'm glad I read it. The Genocidal Healer probably isn't worth seeking out unless you're reading the whole series, but if you're enjoying the rest of the series, you'll probably like this too.

Followed by The Galactic Gourmet.

Rating: 6 out of 10

00:42

If 64bit Windows 11 contains a copy of 32bit explorer.exe, could you run it as its shell? [OSnews]

Raymond Chen published a blog post about how a crappy uninstaller on Windows caused a mysterious spike in the number of Explorer (Windows’ graphical shell) crashes. It turns out the buggy uninstaller caused repeated crashes in the 32bit version of Explorer on 64bit systems, and – hold on a minute. The how many bits on the what now?

The 32-bit version of Explorer exists for backward compatibility with 32-bit programs. This is not the copy of Explorer that is handling your taskbar or desktop or File Explorer windows. So if the 32-bit Explorer is running on a 64-bit system, it’s because some other program is using it to do some dirty work.

↫ Raymond Chen at The Old New Thing

So I had no idea that 64bit Windows included a copy of the 32bit Explorer for backwards compatibility. It obviously makes sense, but I just never stopped to think about it. This made me wonder, though, if you could go nuts and do something really dumb: could you somehow trick 64bit Windows into running this 32bit copy of Explorer as its shell? You’d be running 32bit Explorer on 64bit Windows using the 32bit WoW64 binaries from which you just pulled the 32bit Explorer binary, which seems like a really nonsensical thing to do.

Since there are no longer any 32bit builds of Windows 11, you also can’t just copy over the 32bit Explorer from a 32bit Windows 11 build and achieve the same goal that way, so you’d really have to go digging around in WoW64 to get 32bit versions. I guess the answer to this question depends on just how complete this copy of 32bit Explorer really is, and if Windows has any defenses or triggers in place to prevent someone from doing something this uselessly stupid. Of course, there’s no practical reason to do any of this and it makes very little sense, but it might be a fun hacking project.

Most likely the Windows experts among you are wondering what kind of utterly deranged new designer drug I’m on, but I was always told that sometimes, the dumbest questions can lead to the most interesting answers, so here we are.

Friday, 24 April

23:56

8087 emulation on 8086 systems [OSnews]

Not too long ago I had a need and an opportunity to re-acquaint myself with the mechanism used for software emulation of the 8087 FPU on 8086/8088 machines.

↫ Michal Necasek

Look, when a Michal Necasek article starts out like this, you know you’re in for a good ol’ learnin’ time.

The 8087 was a floating-point coprocessor for the 8086 and 8088 processors, since back in those early days, processors did not include an integrated floating-point unit. It wouldn’t be until the release of the 486DX, in 1989, that Intel would integrate an FPU inside the processor itself, negating the need for a separate chip and socket. Interestingly enough, Intel also released a cut-down version of the 486 with the FPU removed, the 486SX, for which an optional external FPU did exist.
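Necasek’s article has the full details, but the broad shape of the classic MS-DOS-era scheme is worth sketching: because the 8086 has no invalid-opcode trap, compilers emitted software-interrupt opcodes in place of FPU instructions, and startup code either left them for the emulator’s interrupt handlers or, if a real 8087 was detected, patched them in place into FWAIT-prefixed ESC opcodes. A rough illustration in C++ (the interrupt range 34h–3Bh and the byte encodings here are from my memory of the Microsoft convention, so treat them as illustrative rather than authoritative):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch: rewrite "INT 34h+n" (CD 34+n), the software-FPU
// stub form, into "FWAIT; ESC" (9B D8+n), the form a real 8087 executes.
// Returns the number of sites patched.
std::size_t patch_fpu_stubs(std::vector<std::uint8_t>& code) {
    std::size_t patched = 0;
    for (std::size_t i = 0; i + 1 < code.size(); ++i) {
        if (code[i] == 0xCD && code[i + 1] >= 0x34 && code[i + 1] <= 0x3B) {
            code[i] = 0x9B;  // FWAIT prefix
            code[i + 1] =
                static_cast<std::uint8_t>(0xD8 + (code[i + 1] - 0x34));  // ESC D8..DF
            ++patched;
            ++i;  // skip the byte we just rewrote
        }
    }
    return patched;
}
```

Real startup code worked from fixup records stored in the executable rather than by scanning code bytes, since the byte pattern CD 34 could just as well be data.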

23:42

Lobbyists making case for more dirty energy [Richard Stallman's Political Notes]

*In Europe, lobbyists are using soaring fuel prices to make the case for more dirty energy.*

I understand how professional lobbyists would find this a profitable and appealing business. What I do not understand is why so few governments hold firm and denounce the invitation to follow the road to megadeaths.

Woman attacked by thug at No Kings rally [Richard Stallman's Political Notes]

A woman in her 60s went to a No Kings rally wearing an inflatable penis costume and carrying a sign "No Dick Tator". A thug attacked her violently, and now she faces serious prosecution on ridiculous charges.

It seems that the local magats are using her as an example to demonstrate that they are always serious about punishment as repression, no matter how absurd the grounds for punishment are.

Sycophantic discourse in major LLM dis-services [Richard Stallman's Political Notes]

Major LLM dis-services show a pattern of sycophantic discourse, whose effect on the user is to decrease prosocial intentions and promote dependence.

It is unfortunate that the article adopts the marketing practice of equating LLMs with "AI". LLMs are certainly artificial, but do not qualify as intelligence, and the artificial systems which do qualify as intelligence are not LLMs.

EPA designation of microplastics and pharmaceuticals in drinking water [Richard Stallman's Political Notes]

*EPA moves to designate microplastics and pharmaceuticals as contaminants in drinking water.*

Even if this is advocated by RFK jr, it is a reasonable direction for effort provided it is done in a rational and scientific manner.

Population growth past sustainable level [Richard Stallman's Political Notes]

Many countries have experienced population growth past the sustainable level. Yet people continue to call for further population growth as a way to achieve unsustainable further economic growth.

I suspect that a large part of the motive for these pseudo-solutions is to distract the non-rich from the need to reduce the share of wealth and income that the rich get.

North Sea drilling would barely reduce UK gas imports [Richard Stallman's Political Notes]

*New North Sea drilling would barely reduce UK gas imports at all, data shows.* Planet roasters exaggerate the short-term benefit to distract the public from the urgent need to stop using fossil fuels.

US government expelling daughters of Iran high officials [Richard Stallman's Political Notes]

The US government is expelling the daughters of Qassem Soleimani and of Ali Larijani, two high officials of Iran who were killed by the US.

What I wonder about is why their daughters moved to the US in apparent conflict with the positions and offices of their fathers. It is conceivable that they did so to act as agents of Iran, but it is also possible that they did so out of disagreement with the revolution's misogyny.

Parking fines in UK [Richard Stallman's Political Notes]

In the UK, parking fines are typically enforced by private companies that can increase their income by bullying and tricking motorists. Often the tricking is followed by bullying.

I think the root cause of this specific problem with paying for parking is the policy of allowing private companies to collect parking fines. That encourages fee collection companies to compete with each other in bullying and/or tricking motorists. An official system for dealing with motorists who don't pay could be designed to collect effectively but not unjustly, because those who carry it out would not profit from being unfair.

Cambodian citizen sent to Eswatini instead of Cambodia [Richard Stallman's Political Notes]

Pheap Rom, a Cambodian citizen in the US who was convicted of murder and served a prison term, was deported after the end of his sentence. But why did the US send him to Eswatini rather than Cambodia?

When he arrived in Eswatini, he was immediately put in prison although his sentence was over.

He rejoices that Eswatini eventually did send him to Cambodia. It seems that Cambodia did not put him in prison.

Federal thug shot less-lethal object at person's eye [Richard Stallman's Political Notes]

A federal thug shot a less-lethal object at the eye of Tucker Collins, who was standing still and photographing a No Kings protest from a distance. The damage to his skull caused him to lose that eye.

The right to sue for damages is, for the victims of the gratuitous violence of thugs, an inadequate remedy — magat officials think that damages paid for maiming and killing are money well spent. Stopping this violence requires criminal prosecution, which a hateful president can easily prevent.

I suggest creating a corps of special prosecutors who cannot be removed from office except by impeachment, with the mission of prosecuting violent misconduct by any person who holds an office that grants more than the usual right to engage in violence.

Amazon emergency number [Richard Stallman's Political Notes]

Amazon says that workers seeing a medical emergency should call Amazon's emergency number rather than 911, because Amazon's response team, stationed at the warehouse itself, can get there sooner.

It sounds logical, and it might usually be true — presuming Amazon tries assiduously to provide fast and good emergency response for workers. But can we expect Amazon to care about its workers enough to do that job properly?

Green Party leader calls UK to terminate trade treaty with Israel [Richard Stallman's Political Notes]

Green Party leader Zack Polanski called for the UK to terminate its trade treaty with Israel and apply sanctions to end Israel's beyond-defensive wars and repression of Palestinians.

Targets the wrecker has prominently attacked [Richard Stallman's Political Notes]

Various targets that the wrecker has prominently attacked have driven him off — most recently, Iran.

It should be noted that few or none of these successful defenses against him have been a complete success — for instance, deportation thugs are still marauding in Minnesota, though not quite as intensely.

Also, such victories against the wrecker do not necessarily last. Once he is driven off, he may strike again in a different way.

Nonetheless, defeating him is much better than surrender.

Australian family living in US 15 years [Richard Stallman's Political Notes]

An Australian family that lived in the US for 15 years, considering it "home", is moving back to Australia from disgust and fear.

UK concerned about buses made in China [Richard Stallman's Political Notes]

The UK is concerned that buses made in China might be vulnerable to digital sabotage by remote control, through the cellular data connection.

The danger is real, in these buses and in ordinary cars and trucks. The only way to verify safety is to completely block over-the-radio software modifications.

The same danger is present in portable phones. There is always some entity with the power to force software changes, and you can never fully trust it.

Schoolteacher reverted to paper for learning [Richard Stallman's Political Notes]

Interview with a schoolteacher who told per students to keep their Chromebooks in the backpack so as not to be distracted by them. Using paper made a great improvement in their learning.

Urgent: Deporting minors [Richard Stallman's Political Notes]

US citizens: call on Congress to protect minors from being imprisoned and treated cruelly by the deportation thugs.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Deporting people at Trader Joe's [Richard Stallman's Political Notes]

US citizens: call on Trader Joe's to commit to keeping deportation thugs out.

Please spread the word.

Urgent: Tax the rich [Richard Stallman's Political Notes]

US citizens: call on Congress to tax the rich! Pass the Ultra-Millionaire Tax Act.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Bringing back the draft [Richard Stallman's Political Notes]

Young US men: call on your congresscritter and senators to get rid of the plan to bring back the draft.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Dynamic pricing in public stores [Richard Stallman's Political Notes]

Maryland has banned dynamic pricing in stores.

I hope this will be banned everywhere.

Fishing boat cruelty by US [Richard Stallman's Political Notes]

*An Ecuadorian fishing crew describe their ordeal as victims of Trump’s purported war on "narcoterrorists".* Instead of treating the crew as accused criminals are supposed to be treated, the US captured them, sank their boat, and took them incommunicado to another country.

Jesus statue smashed by ISR, Lebanon [Richard Stallman's Political Notes]

Israel has jailed two soldiers that intentionally smashed a statue of Jesus in a Christian town in Lebanon.

It is proper and wise for armies to enforce a law requiring respect for the peaceful practices of all civilians in regard to religion. (I word it that way to cover Atheism also; religious belief must not be given more legal rights than unbelief.)

I wish the Israeli army would enforce similar respect for civilians themselves, their homes, their farms and businesses, their schools, and their medical facilities. However, the Israeli government in general shows public support for attacking those things.

*Rabbi who boasts of bulldozing Palestinian homes will light torch for Israel's national day.*

Voice of America as Republicans [Richard Stallman's Political Notes]

The US military government runs "news" websites Al-Fassel and Pishtaz which present constant praise of the wrecker but barely mention that they are under his command.

The wrecker destroyed the Voice of America, which presented itself openly as funded and run by the US but allowed some editorial independence.

Put together, these two actions add up to something clear: an attempt to do to the foreign communications of the US the same thing that billionaire magats are doing to the major media in the US: CBS, CNN, and more.

Russia in the Arctic [Richard Stallman's Political Notes]

Russia's "shadow fleet" is sending hundreds of ships carrying oil along the north coast of Canada.

They are going through the formerly nonexistent "northwest passage", which is opening up now as global heating melts the Arctic ice. These ships are decrepit, and not very safe to operate there, and even less safe with a cargo of oil.

22:35

Friday Squid Blogging: How Squid Survived Extinction Events [Schneier on Security]

Science news:

Scientists have finally cracked a long-standing mystery about squid and cuttlefish evolution by analyzing newly sequenced genomes alongside global datasets. The research reveals that these bizarre, intelligent creatures likely originated deep in the ocean over 100 million years ago, surviving mass extinction events by retreating into oxygen-rich deep-sea refuges. For millions of years, their evolution barely changed—until a dramatic post-extinction boom sparked rapid diversification as they moved into new shallow-water habitats.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

22:14

Link [Scripting News]

I want writing to be as open as podcasting. The pattern is ridiculously easy to apply. If this were on a high school math test, it would be too easy, everyone would get it right. How do you make text work like podcasting? 1. You look for a brain-dead obvious choice for text. 2. And then attach it to a format that's really good for transmitting packets of text. And then write software that works really well with the obvious choices. The user retains ownership and control of their writing, pays for the storage, and can give access to the apps they want to use. They can also, for a fee, point a domain name to one of the nodes in their storage. This would radically change the economics for independent developers. Now we don't have to resell storage. Products can be developed on our kitchen tables. There is an explosion of interest in developing software. Think it through -- how are they supposed to deploy their apps on the web? We need a BigCo that thinks like an entrepreneurial startup. How many times have I written this screed? Geez I don't like to think about that.

21:35

How hard is it to open a file? [OSnews]

Sebastian Wick has a great explanation of why opening files – programmatically – is a lot more complex and fraught with dangers than you might think it is.

It’s a question I had to ask myself multiple times over the last few months. Depending on the context the answer can be:

  • very simple, just call the standard library function
  • extremely hard, don’t trust anything

If you are an app developer, you’re lucky and it’s almost always the first answer. If you develop something with a security boundary which involves files in any way, the correct answer is very likely the second one.

↫ Sebastian Wick

This issue was relevant for Wick as he is one of the lead developers of Flatpak, for which a number of security issues have recently been discovered, and it just so happens that many of these issues dealt with this very topic. The biggest security issue found was a complete sandbox escape, originating from the fact that flatpak run, the command-line tool to start a Flatpak application, accepted path strings, since flatpak run is assumed to be run by a trusted user. The problem lay in a D-Bus service sandboxed applications could use to create subsandboxes, and this service was built around, you guessed it, flatpak run.

The issues in question, including this complete sandbox escape, have been addressed and fixed, but they highlight exactly the dangers that can come from opening files. This subsandboxing approach in Flatpak is built on assumptions from fifteen years ago, and times have changed since then. If you’re a programmer who deals with opening files, you might want to take a look at your own code to see if similar issues exist.
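To make the danger concrete: suppose a privileged service accepts a file path from a less-trusted client and must only ever serve files under a fixed base directory. Naive path concatenation is defeated by `..` segments and symlinks. Here is a minimal C++ sketch of a containment check (the `is_within_base` helper is my own illustrative name, not a Flatpak API); note that even this leaves a time-of-check/time-of-use window, which is why real sandboxes lean on kernel support such as Linux's `openat2()` with `RESOLVE_BENEATH`:

```cpp
#include <filesystem>

namespace fs = std::filesystem;

// Illustrative helper: resolve the candidate path relative to a trusted
// base directory and check that the result still lies underneath it.
// weakly_canonical resolves symlinks and "." / ".." for the parts of the
// path that exist, and normalizes the rest lexically.
bool is_within_base(const fs::path& base, const fs::path& candidate) {
    const fs::path b = fs::weakly_canonical(base);
    const fs::path c = fs::weakly_canonical(b / candidate);
    // lexically_relative yields an empty path on failure, and a path
    // whose first component is ".." when c escapes b.
    const fs::path rel = c.lexically_relative(b);
    return !rel.empty() && *rel.begin() != "..";
}
```

Even correct-looking checks like this are only half the story: between the check and the actual `open()`, an attacker who controls the directory can swap a component for a symlink, which is exactly the class of gap Wick's post is about.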

AI as a fascist artifact [OSnews]

In that reading „AI“ is a machine for the creation of epistemic injustice and the replacement of truth with what a tech elite wants it to be in order to control the population. This is a Fascist project that not so subtly aligns with Fascism’s totalitarian will to power and control as well as its reliance in replacing reasoning and debate with belief in power and the leader.

↫ Jürgen Geute

The purpose of a system is what it does, and what “AI” does is stunt users’ own abilities and development and concentrate power and wealth even further in the hands of a very small privileged few – a privileged few who consistently espouse fascist ideology and promote and implement fascist ideas. Jürgen Geute lays it out in much more detail backed by solid references and concrete examples, but the conclusion is clear.

And uncomfortable to many, as such conclusions always are.

19:14

Modernoir [Penny Arcade]

I could link you to a trailer for Assassin's Creed: Black Flag Resynched, but there's a ton of them, and it's safest to just drop you at their YouTube where you can choose from a World Premiere Trailer, an Official Game Overview Trailer, or even the Worldwide Reveal Showcase that clocks in at like half an hour. It looks fucking amazing. This used to be My Series, I even liked the ones you aren't supposed to, but after they released two of them simultaneously and I finished Unity I kinda bounced off it - the RPG era and even to a certain extent dual protagonists felt really OOC. Just pick! Just pick the one whose blood I'm living in. This runs back before all that - probably the last of the truly blown out, old-style AC games. I really thought Edward Kenway was going to have a go of it, on some Ezio shit, get a trilogy by himself. That's how much people liked IV. I'd love it if that's what they're setting up. The multiplayer was some of the best times we've ever had online, it used to be kind of a full office affair watching those murderous Hide and Seek matches play out, but those were all additions after the series had hit its stride with mature technology and a massive global network of development teams. I can wait. It seems like they've really been going through it.

18:28

An Anecdotal Observation About Career Longevity [Whatever]

As most of you know I spent much of this last week in Los Angeles, taking meetings with film/TV folks and pitching things to them, both from books I’ve written and from ideas that are currently not connected to something I’ve published. The meetings generally went very well — which isn’t necessarily the same as my walking away with a movie deal; there are a lot of moving parts involved in that — and I came away with a lot of interest in the things I pitched and movement as my manager sent along materials. I gave some thought to why these meetings generated as much interest as they did.

There are a number of factors for this, but the one I want to bring to the fore at the moment is this one: When I sit down with these film/TV people and run an idea or concept past them, they one hundred percent know that the idea I’m running past them is my own, not generated by, or written out with, some version of “AI.” From a practical point of view this means they know there is no issue with things like copyright (“AI” generated work is not copyrightable, and rights issues are a big deal for film/TV). From a creative point of view this means they know I have actually thought about the concept I’m bringing to them — that I know it inside and out and can build it out, dig deeper into it, and can improvise with the concept rather than just go with whatever an LLM spits out from a prompt.

In other words, they know I can do actual creative work, from ideation to production, and they know when they work with me they’re not only getting an idea but they’re also getting the actual working brain behind it. That brain can efficiently work the problem, whatever the problem might be. In 2026, this is a real and actual differentiator: A functional brain, and a reliable creative partner. I rather strongly suspect the further along we go in this new era of “cognitive offloading,” the more of a differentiator this will be.

This isn’t an anti-“AI” post. It is a “the more other people claiming to be writers use ‘AI’ the more secure my gig gets” post. If you want to use “AI” to generate ideas or create your prose or whatever, by all means, be my guest. The next twenty years of my career thanks you in advance for your choices.

— JS

16:42

Construction Time Again [Whatever]

What it feels like to wake up to house construction

John Scalzi (@scalzi.com) 2026-04-24T14:26:05.759Z

Spoiler: We are not going to die. But we are going to get a new porch railing, as much of the last one was blown out by 80 mph winds we had a few weeks ago. The porch railing was 30 years old and as our contractor told us, had support beams that were too small for the weight put on them anyway (this is additional proof that the fellow who had the house built, also its first owner, had contractors who cut occasional corners on him). This was one of the reasons the railing blew out in the first place. The railing we put up will be burly and strong.

Here’s what the porch looks like at the moment:

Those are the old support beams. Please enjoy your time with them. They are soon to go off to a farm upstate, to play with other retired porch support beams.

The same contractors who are redoing our porch are also going to be providing us a new back deck, because, again, after 30 years, the back deck is in need of repair, and also Krissy wants a cover for it, so her husband can sit out there with her and not have his pale little head turned a shocking shade of lobster red. So the whole back deck is going, replaced with one built to her specification.

Needless to say, all of this is going to be loud. Fortunately I do have my office at the church to go to if I need to get work done without the sound of pneumatic hammering.

Also needless to say, all of this is going to be expensive. Please buy my books.

More pictures as construction progresses.

— JS

16:07

Defending against exceptions in a scope_exit RAII type [The Old New Thing]

One of the handy helpers in the Windows Implementation Library (WIL) is wil::scope_exit. We’ve used it to simulate the finally keyword in other languages by arranging for code to run when control leaves a scope.

I’ve identified three places where exceptions can occur when using scope_exit.

auto cleanup = wil::scope_exit([captures] { action; });

One is at the construction of the lambda. What happens if an exception occurs during the initialization of the captures?

This exception occurs even before scope_exit is called, so there’s nothing that scope_exit can do. The exception propagates outward, and the action is never performed.

Another is at the point the scope_exit tries to move the lambda into cleanup. In a naïve implementation of scope_exit, the exception would propagate outward without the action ever being performed.

The third point is when the scope_exit is destructed. In that case, it’s an exception thrown from a destructor. Since destructors default to noexcept, this is by default a std::terminate. If you explicitly enable a throwing destructor, then what happens next depends on why the destructor is running. If it’s running due to execution leaving the block normally, then the exception propagates outward. But if it’s running due to unwinding as a result of some other exception, then that’s a std::terminate.

The dangerous parts are the first two cases, because those result in the exception being thrown (and possibly caught elsewhere) without the cleanup action ever taking place.

WIL addresses this problem by merely saying that if an exception occurs during copying/moving of the lambda, then the behavior is undefined.

C++ has a scope_exit that is in the experimental stage, and it addresses the problem a different way: If an exception occurs during the construction of the capture, then the lambda is called before propagating the exception. (It can’t do anything about exceptions during construction of the lambda, and it also declares the behavior undefined if the lambda itself throws an exception.)

In practice, the problems with exceptions on construction or copy are immaterial because the lambda typically captures all values by reference ([&]), and those types of captures do not throw on construction or copy.
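For readers who haven't used WIL, the shape of such a guard is easy to sketch. The following is not WIL's implementation, just a minimal illustration of the pattern (the names `scope_guard`, `make_scope_exit`, and the demo functions are my own); it puts the second danger point (the move at construction) and the third (the noexcept destructor) directly in code:

```cpp
#include <utility>

// Minimal scope-guard sketch (illustrative only; not wil::scope_exit).
template <typename F>
class scope_guard {
public:
    // Danger point two: this move of the callable can throw if the
    // captures have throwing move constructors.
    explicit scope_guard(F f) : f_(std::move(f)) {}

    // Danger point three: destructors are noexcept by default, so an
    // action that throws here is a std::terminate.
    ~scope_guard() { if (armed_) f_(); }

    scope_guard(const scope_guard&) = delete;
    scope_guard& operator=(const scope_guard&) = delete;

    void release() { armed_ = false; }  // cancel the pending action

private:
    F f_;
    bool armed_ = true;
};

// Returning a prvalue relies on C++17 guaranteed copy elision, since
// the guard is neither copyable nor movable.
template <typename F>
scope_guard<F> make_scope_exit(F f) { return scope_guard<F>(std::move(f)); }

// Usage: the action fires when the guard leaves scope...
bool demo() {
    bool cleaned = false;
    {
        auto g = make_scope_exit([&] { cleaned = true; });
    }
    return cleaned;
}

// ...unless it was released first.
bool demo_release() {
    bool cleaned = false;
    {
        auto g = make_scope_exit([&] { cleaned = true; });
        g.release();
    }
    return cleaned;
}
```

A guard this simple inherits exactly the weakness Chen describes: if the move in the constructor throws, the action is silently lost, which is why WIL declares that case undefined and the experimental std::scope_exit invokes the action before propagating.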

The post Defending against exceptions in a scope_exit RAII type appeared first on The Old New Thing.

15:28

NBA playoffs, Knicks lose again [Scripting News]

After last night's game I now remember why I was so relieved last season when the Knicks were eliminated in the semifinals of the NBA playoffs. It’s an exhausting sport. And the sad truth is the Knicks are getting beaten by Atlanta. Or maybe it's not so sad, because then, after they are eliminated, I can tune into the playoffs with a detached interest, and save my kvetching for the Mets, and there is plenty to complain about there, LGM.

I almost never question a coach's decisions, because they have complicated jobs -- but -- why didn't they put their top defenders, Mitchell Robinson or KAT, on the court with 12 seconds left in the fourth quarter with the Knicks ahead by 1? Instead they put in all these fast small players, as if they were planning on giving up a bucket and then quickly running down the court with what little time remained and scoring a quick one to win the freaking game. At least that's what I imagined they were doing. Why not just hold them right there and run out the clock and win the game?? It all happened so fast. (I asked ChatGPT about this theory and it says I'm wrong, they put in the small players because they can switch more easily, and that's likely how Atlanta was planning to defend their shooter.)

The Knicks were out of timeouts, so they couldn't stop the clock. So I guess kind of predictably, esp the way things were going last night, the Hawk with the Hot Hand, CJ McCollum, gets the ball, dribbles a bit, and nails the shot that wins the game and now the Knicks are down 2-1 in a series they were supposed to win handily.

I didn't and don't buy the idea that the Knicks are destined to appear in the finals this year, when you boast about it, or expect it, god has a way of goofing on you, making sure you don't get it. The Knicks don't have it, because as great as Brunson is, he sucks all the energy out of the rest of the team, and when he's having a bad night, there really isn't an alternative. It's not a good configuration. As in the Melo years, Brunson doesn't make it as the captain, imho.

Melo had the size and talent, but he's a sweet labrador retriever type player, the sidekick of the engine of the team, someone like LeBron James, Steph Curry or Giannis Antetokounmpo. Look at how successful Melo was in the Olympics, where he could be #2 to LeBron's #1.

It's not just about skill and hard work, it's about does the team follow you. That isn't something you can learn, you either have it or you don't. And of course I will torture myself in this mode, wondering in vain how my Knicks will fare in this quarter, or that game -- until I get to retire from the NBA for the year, and instead get a big fat bellyache on about the Mets.

15:07

GnuPG 2.5.19 released [LWN.net]

Werner Koch has announced the release of GnuPG 2.5.19. This release includes a few new options and a number of bug fixes, and comes with the reminder that the GnuPG 2.4 series will reach end-of-life soon:

The main features in the 2.5 series are improvements for 64 bit Windows and the introduction of Kyber (aka ML-KEM or FIPS-203) as PQC encryption algorithm. Other than PQC support the 2.6 series will not differ a lot from 2.4 because the majority of changes are internal to make use of newer features from the supporting libraries.

Note that the old 2.4 series reaches end-of-life in just two months. Thus update to 2.5.19 in time. As always with GnuPG new versions are fully compatible with previous versions.

LWN recently covered Fedora's discussion about what to offer after GnuPG 2.4 is no longer supported.

14:21

[$] On pages and folios [LWN.net]

The kernel coverage here at LWN often touches on memory-management topics and, as a result, tends to talk a lot about both pages and folios. As the folio transition in the kernel has moved forward, it has often become difficult to decide which term to use in writing that is meant to be both approachable and technically correct. As this work continues, it will be increasingly common to use "folio" rather than page. This article is intended to be a convenient reference for readers wanting to differentiate the two terms or understand the state of this transition.

Security updates for Friday [LWN.net]

Security updates have been issued by Fedora (anaconda, dnf5, firefox, flatpak-builder, libexif, minetest, nss, plasma-setup, python-blivet, rpki-client, and xorg-x11-server), Oracle (bind, kernel, osbuild-composer, thunderbird, webkit2gtk3, and wireshark), Red Hat (java-25-openjdk), SUSE (cacti, cacti, cacti-spine, cockpit-machines, cockpit-podman, cockpit-tukit, csync2, flannel, gdk-pixbuf, go1.25-openssl, go1.26-openssl, haproxy, kernel, libcap, libpng16, libtree-sitter0_26, libvirt, ncurses, ntfs-3g_ntfsprogs, openssl-1_1, openssl-3, openvswitch, perl, python-pyOpenSSL, python311, rclone, sudo, and tomcat), and Ubuntu (gst-plugins-bad1.0, jq, libopenmpt, linux-ibm, linux-ibm-5.15, and php-league-commonmark).

Pluralistic: A free, open visual identity for enshittification (24 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



The poop emoji from the cover of the US edition of 'Enshittification,' with a grawlix-scrawled black bar over its mouth. In the background is a blue tinted, rotated detail of the emoji's eyes and mouth.

A free, open visual identity for enshittification (permalink)

To my surprise, my life's work has turned out to be a long series of attempts to get people to engage with the abstract, distant issues of tech policy before it's too late. This is hard, because people naturally devote their attention to things that are concrete and immediate (for very good reasons!).

For nearly 25 years, I've worked with my comrades at the Electronic Frontier Foundation to raise the salience of these abstract, technical ideas. I've come up with metaphors, parables, framing devices, narratives, and then…a dirty little word: enshittification. It turned out that this word, and the minor license to vulgarity it confers, was the secret to unleashing a tide of interest in these issues, to my immense surprise and gratification.

But I don't confine my efforts to coming up with words to engage people on these matters. For several years now, I have been developing myself as a collagist, combining public domain images with Creative Commons-licensed materials to create several collages every week that aim to illustrate these abstract, technical issues in an engaging, visual way:

https://www.flickr.com/photos/doctorow/albums/72177720316719208

The US cover for Enshittification

This got a lot easier with the 2025 publication of my international bestseller Enshittification, and not just because a lot of people read that book. It was also because the US edition, from MCD/Farrar, Straus and Giroux had a gorgeous cover:

https://mpd-biblio-covers.imgix.net/9780374619329.jpg

That cover featured a (literally and figuratively) iconic variation of the "pile of poo" emoji, with angry eyebrows and a grawlix-scrawled black censor's bar over its mouth. It was designed by the brilliant Devin Washburn of No Ideas studio:

https://www.noideas.website/

A male figure in heavy canvas protective clothes, boots and gauntlets, reclining in the wheel-well of a locomotive, reading a book. The figure's head has been replaced with the poop emoji from the cover of the US edition of 'Enshittification,' whose mouth is covered with a black, grawlix-scrawled bar. The figure is reading a book, from which emanates a halo of golden light.

Devin's poop emoji became my go-to visual shorthand for illustrating stories about enshittification, an instantly recognizable way to identify my subject matter:

https://www.flickr.com/photos/doctorow/54957634601/in/album-72177720316719208

The staring red eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey. In the center is the poop emoji from the cover of the US edition of 'Enshittification,' with angry eyebrows and a black, grawlix-scrawled bar over its mouth. The poop emoji's eyes have also been replaced with the HAL eye.

I remixed it over and over:

https://www.flickr.com/photos/doctorow/54962122121/in/album-72177720316719208

The Earth from space. Squatting over North America, casting a long shadow and ringed by a red, spiky halo, is the poop emoji from the cover of the US edition of 'Enshittification,' with a grawlix-scrawled black bar over its mouth, wearing a Trump wig. Leaching through the starscape is a 'code waterfall' effect as seen in the credits of the Wachowskis' 'Matrix' movies.

And over:

https://www.flickr.com/photos/doctorow/54992219613/in/album-72177720316719208

I liked it so much I ordered a couple hundred enamel pins and a couple thousand vinyl stickers featuring the design, and handed them out for free to people I met on my 33-city book tour. Everywhere I went – and every time a video went out showing me wearing the pin – I was inundated with requests to buy this stuff. But my pins and stickers weren't merch (stuff you could buy) – they were swag (stuff I gave away). I had no interest in getting into the merch business!

But you folks kept asking, and also, I really loved that design, so I offered Devin a cash buyout for the rights to his enshittification poop emoji and then I released it under a Creative Commons Attribution 4.0 license that lets you use it any way you want, including for commercial products, provided you attribute it and link back to the original:

https://creativecommons.org/licenses/by/4.0/deed.en

And I made sure that my EFF comrades had first crack at this design, and they've made merch of it. You can get a $5 sticker:

https://shop.eff.org/products/enshittification-sticker

Or a $10 pin:

https://shop.eff.org/products/enshittification-pin

With all proceeds going to the Electronic Frontier Foundation, the most profound and powerful disenshittifying force on the planet Earth!

My xeriscaped lawn, featuring an Enshittification poop emoji lawn flag as well as several cacti and some rusty dinosaur sculptures.

But because this is CC licensed, you can make your own merch and swag! I made this great print-on-demand lawn flag for my front garden so I could let my enshittification flag fly:

https://www.flickr.com/photos/doctorow/55025045602/

My goal here is to create a free, open, remixable visual language for talking about platform decay, not owned by me or anyone, a part of the commons. Use it to illustrate anything you want, especially if you want to analogize enshittification to other phenomena, like politics or other non-digital phenomena. Semantic drift is good, actually!

https://pluralistic.net/2024/10/14/pearl-clutching/#this-toilet-has-no-central-nervous-system

You can get the high-rez of Devin's enshittification poop emoji from the internet's three most important repositories of Creative Commons licensed work.

There's a copy on Wikimedia Commons:

https://commons.wikimedia.org/wiki/File:Enshittification_poop_emoji_logo.png

And on Flickr:

https://www.flickr.com/photos/doctorow/55225631563/

And of course on the Internet Archive, along with a PSD that includes an ink-density adjustment layer:

https://archive.org/details/enshittification-poop-emoji-logo

I've supported Creative Commons literally since the very beginning. I worked with Larry Lessig, Aaron Swartz, Matt Haughey and Lisa Rein on the launch of the original licenses in 2002/3, and my first novel, Down and Out in the Magic Kingdom, was the first book released under a CC license:

https://craphound.com/down/download/

Creative Commons is one of the most amazing feats of stunt-lawyering ever attempted, and it has been an unmitigated success, with tens of billions of works licensed CC, including all of Wikipedia. Like EFF, CC is a charitable nonprofit that depends on individual donors to keep its work going. The org turned 25 this year (along with my career as a novelist), and they've launched a giant fundraiser to carry their work forward.

As my contribution to the fundraiser, I've provided them with 375 signed, numbered copies of Canny Valley, my (otherwise) not-for-sale, extremely limited edition book of my collages, with an intro by Bruce Sterling. The book was designed by type legend John D Berry and printed at Pasadena's Typeworks, a century-old, family-owned print shop, on 100lb Mohawk paper, with a PVC binding that will last for generations:

https://pluralistic.net/2026/04/10/canny-valley/

CC tells me there's still some copies of Canny Valley left in the fundraiser. If you're intrigued by my collaging and want to own this very strange and beautiful little artifact, here's where to go:

https://mailchi.mp/creativecommons/were-turning-25-book-giveaway

And if you want to try your own hand at collaging – or making merch (or swag!) – help yourself to Devin's wondrous piece of poo and go to town.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Court throws out RIAA attempt to sue little girl https://web.archive.org/web/20060422232323/https://p2pnet.net/story/8603

#15yrsago Android secretly stores location data too — though less of it, and with less detail https://arstechnica.com/gadgets/2011/04/android-phones-keep-location-cache-too-but-its-harder-to-access/

#15yrsago Portal turret Easter egg https://www.flickr.com/photos/57617475@N00/5638462322/

#15yrsago Michael Chabon’s introduction to The Phantom Tollbooth 50th anniversary edition https://web.archive.org/web/20110424055621/http://www.nybooks.com/blogs/nyrblog/2011/apr/21/michael-chabon-phantom-tollbooth-wonder-words/

#10yrsago UK spy agencies store sensitive data on millions of innocent people, with no safeguards from abuse https://arstechnica.com/tech-policy/2016/04/uk-secret-police-surveillance-bulk-personal-datasets/

#10yrsago Zombie company Atari wants exclusive right to make haunted house games https://www.techdirt.com/2016/04/21/ex-game-maker-atari-to-argue-to-us-pto-that-only-it-can-make-haunted-house-games/

#10yrsago Hackers take $81 million from Bangladesh’s central bank by pwning its $10 second-hand routers https://www.bbc.com/news/technology-36110421

#10yrsago Forget the one percent, it’s the 0.1% who run the show https://web.archive.org/web/20160416022112/https://www.alternet.org/economy/1-really-problem

#10yrsago The quest for the well-labeled inn https://memex.craphound.com/2016/04/22/the-quest-for-the-well-labeled-inn/

#5yrsago EFF sues Proctorio over copyfraud https://pluralistic.net/2021/04/22/ihor-kolomoisky/#copyfraud

#5yrsago Fighting FLoC is compatible with fighting monopoly https://pluralistic.net/2021/04/22/ihor-kolomoisky/#not-that-competition

#5yrsago Moxie hacks Cellebrite https://pluralistic.net/2021/04/22/ihor-kolomoisky/#petard

#5yrsago Banks made bank on covid overdraft charges https://pluralistic.net/2021/04/22/ihor-kolomoisky/#usurers

#5yrsago The awesome destructive power of a billionaire https://pluralistic.net/2021/04/22/ihor-kolomoisky/#force-multiplier

#1yrago More Everything Forever https://pluralistic.net/2025/04/22/vinges-bastards/#cyberpunk-is-a-warning-not-a-suggestion


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

13:42

Only Skin DEEPFAKE [The Non-Adventures of Wonderella]

Sorry if this site name belongs to a store that only sells capes.

13:14

Error'd: April Showers [The Daily WTF]

"RFC 1738 (and 3986) disagree" and so does Daniel D. "Reddit API has some weird app creation going on with lots of recently migrated and undocumented stuff. But having redirect URL set to localhost (or 127.0.0.1) usually works. Well, if you don't disagree with Sir Tim Berners-Lee about what URL is. Which Reddit does. hostnumber = digits "." digits "." digits "." digits". I'd file this one with all the websites that try to perform validation on email addresses, and get it wrong.


"Why aren't we getting any resumes?" wondered Fred G. "This is a snippet from a job posting. I'm sure it worked perfectly when HR tested it."


"Service required..." was Chris H.'s title for this gem. "My 2022 Chevrolet has been at the dealer for recall service for two weeks now, "waiting for parts". That doesn't stop GM from emailing every few days with a reminder that the car needs the recall service, and inviting me to schedule it at a dealer (that isn't actually a dealer) located a convenient 2500 mile drive from my home (about 200 times the distance to the dealer where the car currently sits), and providing a non-existent placeholder phone number to contact them at to schedule the recall service."


"How to subtly tell your customers that you don't wish to be contacted" explains Yuri. "The bank's staff must be wondering why no one wants to talk to them...Is it their suit's brand that is throwing everyone off? Can they blame it on COVID?"


"Bad money formatting by tax software" Adam R. complained. "I'm ashamed to admit it, but yes, I did pay Intuit money to file my taxes. This should really be a free service provided by the government, but, y'know, *lobbying*. You'd think that a business focused on tax preparation software would know how to properly format currency values, but in this case they failed to set the proper number of decimal points."



12:07

Hiding Bluetooth Trackers in Mail [Schneier on Security]

It was used to track a Dutch naval ship:

Dutch journalist Just Vervaart, working for regional media network Omroep Gelderland, followed the directions posted on the Dutch government website and mailed a postcard with a hidden tracker inside. Because of this, they were able to track the ship for about a day, watching it sail from Heraklion, Crete, before it turned towards Cyprus. While it only showed the location of that one vessel, knowing that it was part of a carrier strike group sailing in the Mediterranean could potentially put the entire fleet at risk.

[…]

Navy officials reported that the tracker was discovered within 24 hours of the ship’s arrival, during mail sorting, and was eventually disabled. Because of this incident, the Dutch authorities now ban electronic greeting cards, which, unlike packages, weren’t x-rayed before being brought on the ship.

10:42

Courage vs. excuses [Seth's Blog]

There are more available excuses now than ever before. In just two letters, “AI” is a simple, brand-new, all-purpose excuse for laying people off, averaging things down, closing things up and generally finding an easier/quicker path.

Courage, on the other hand, is the commitment to take risks and work hard to make something better than most people think it needs to be.

Example:

Open Source software (the real kind, not the window-dressing some big companies use) takes courage. To share your code, to invite others to participate, to have to cycle faster and hide less–it doesn’t always make traditional investors happy, and it can be a hassle. But time has shown us, again and again, it leads to resilience, to better performance and to a tighter connection between users and providers.

The conversation behind most of the excuses all around us is built on a simple choice: what’s the purpose of our work? Why are we showing up, putting in the cycles and making promises to the world? The short-term path to quick returns is usually excusable, and then we can get back to what we were doing, even if we’re hesitant to label it. “We don’t do this because it’s important, we do it because we’re getting paid right now to do it and because it’s easier.”

On the other hand, if your purpose is bigger, longer-term or more important than the easy path to quick profit, labeling it is important.

Tom Peters called it Excellence. It’s valuable because it’s scarce, and it’s scarce because there are plenty of available excuses. Excellence is an option, and excellence is a choice.

It’s much easier to find courage if you know why you’re looking for it.

08:42

Modernoir [Penny Arcade]

New Comic: Modernoir

07:28

Girl Genius for Friday, April 24, 2026 [Girl Genius]

The Girl Genius comic for Friday, April 24, 2026 has been posted.

05:21

New Cover: “Will You Still Love Me Tomorrow” [Whatever]

Because the song’s been rattling around my head for the last couple of days, particularly the Bryan Ferry cover version. So when I got home I thought I would give it a whirl. I hope you like it.

— JS

00:00

Urgent: Congress: tax-payer, hush monies [Richard Stallman's Political Notes]

US citizens: call on your federal legislators to make Congress release the names of members of Congress who used taxpayer money to silence sexual harassment claims.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Impeach Hegseth [Richard Stallman's Political Notes]

US citizens: call on Congress to impeach Secretary of Aggression Hegseth.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Thursday, 23 April

23:49

23:00

Link [Scripting News]

Why Firefox? There's a moment now when the web could benefit from leadership. There's a chance to rebuild text on the web around the use of AI systems. But almost every company that could be a leader in this space isn't thinking about what they can do for the web; instead they're focused on their corner of it. For a company like Firefox, whose product everyone understands is at the center of what the web is, they keep avoiding this obvious role. The assumption, I guess, is that they need revenue and there's no money to be made selling browser software. But there is a lot of money to be made, imho: recurring revenue from offering services to users that can foster growth of the web, for which Firefox can lead in developing great features in an open way so other browser companies can share in their innovation. That's the Firefox I got to know in the waning days of MSIE, when it was plagued by malware and we all desperately needed a good alternative. The one written by Blake Ross, Dave Hyatt and Joe Hewitt. We have to step out into entrepreneurial space, and I guarantee you there's money to be made here; recurring revenue and trust by users will be highly valued. But we all have to do it together, something the tech industry doesn't have in its DNA, and it's high time we got some of that.

Link [Scripting News]

Let's say you're in Claude Code and you think of something you want to post on your blog. How many steps before you're ready to click the Post button and get back to work? I don't think there's a way to create something that works this way, you'd have to switch out of Claude or ChatGPT. Wouldn't it be nice if you could do it right there? (Update, I just worked it out with ChatGPT, apparently it is possible to do this.)

21:28

21:07

GNU Parallel 20260422 ('Artemis II') released [Planet GNU]

GNU Parallel 20260422 ('Artemis II') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  It is a fantastic tool for decades!
    -- Ops_Mechanic@reddit

New in this release:

  • Remote jobs are spawned via pipe to perl, so environment can be bigger. This is a major rewrite.
  • --pipe-part -a supports -L/-N if zextract is installed.
  • --pipe-part -a supports .gz, .bz2, .zst-files if zextract is installed.
  • Comments in the code have been redone.
  • Bug fixes and man page updates.


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel, record a video testimonial: say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu ... rg/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c555f616391c6f7c28bf938044f4ec50
    12345678 c555f616 391c6f7c 28bf9380 44f4ec50
    $ md5sum install.sh | grep 707275363428aa9e9a136b9a7296dfe4
    70727536 3428aa9e 9a136b9a 7296dfe4
    $ sha512sum install.sh | grep b24bfe249695e0236f6bc7de85828fe1f08f4259
    83320d89 f56698ec 77454856 895edc3e aa16feab 2757966e 5092ef2d 661b8b45
    b24bfe24 9695e023 6f6bc7de 85828fe1 f08f4259 6ce5480a 5e1571b2 8b722f21
    $ bash install.sh

Watch the intro video on http://www.youtub ... L284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/1 ... 81/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparall ... igns/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)


If GNU Parallel saves you money:



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

20:42

Dirk Eddelbuettel: dtts 0.1.4 on CRAN: Maintenance [Planet Debian]

Leonardo and I are happy to announce another maintenance release, 0.1.4, of our dtts package, which has been on CRAN for four years now. dtts builds upon our nanotime package as well as the beloved data.table to bring high-performance, high-resolution indexing at the nanosecond level to data frames. dtts aims to offer the time-series indexing versatility of xts (and zoo) together with the immense power of data.table, while supporting the highest (nanosecond) resolution.

This release, not unlike yesterday’s release of nanotime, is driven by recent changes in the bit64 package which underlies it. Michael, who now maintains it, had sent in two PRs to prepare for these changes. I updated continuous integration, and switched to Authors@R, and that pretty much is the release. The short list of changes follows.

Changes in version 0.1.4 (2026-04-23)

  • Continuous integration has received some routine updates

  • Adapt align() column names with changes in 'data.table' (Michael Chirico in #20)

  • Narrow imports to functions used for packages 'bit64', 'data.table' and 'nanotime' (Michael Chirico in #21)

Courtesy of my CRANberries, there is also a diffstat report for this release. Questions, comments, and issue tickets can be brought to the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

Ubuntu 26.04 LTS Resolute Raccoon released [OSnews]

I’m not sure many OSNews readers still use Ubuntu as their operating system of choice, and from the release announcement of today’s Ubuntu 26.04 it’s clear why that’s the case.

Resolute Raccoon builds on the resilience-focused improvements introduced in interim releases, with TPM-backed full-disk encryption, improved support for application permission prompting, Livepatch updates for Arm-based servers, and Rust-based utilities for enhanced memory safety. This release brings native support for industry-leading AI/ML toolkits like NVIDIA CUDA and AMD ROCm, making Ubuntu 26.04 LTS the ideal platform for AI development and production workloads. 

↫ Canonical press release

It’s obvious where Canonical’s focus lies with Ubuntu, and us desktop people who don’t like “AI” aren’t it. On top of all the “AI” nonsense, this new version comes with all the latest versions of the various open source components that make up a Linux distribution, as well as a slew of Rust-based replacements for core CLI tools, like sudo-rs, uutils coreutils, and more.

All the derivative releases of Ubuntu, like Kubuntu, Xubuntu, and others, will also be updated over the coming days. If you’re already running any of these, updating won’t be a surprise to you.

20:35

Stop California’s Social Media Ban (A.B. 1709) [EFF Action Center]

The California Legislature is overstepping (again) and fast-tracking a bill that attempts to solve complex social issues with a blunt-force ban. A.B. 1709 would mandate a total social media ban for those under 16, but the consequences will be felt by every Californian. Here’s why:

  • Mandatory Digital Tracking: To enforce this ban, the state will require platforms to verify the identity of every user. This means handing over biometric data or government IDs just to create an account or log in, creating massive security risks for all users, destroying online anonymity, and building a permanent surveillance infrastructure.

  • Violating Free Speech: The First Amendment protects the right to speak and access information, regardless of age. As we’ve said time and time again, there is no “kid exception” to the First Amendment. By cutting off lifelines for LGBTQ+ youth and marginalized communities, the California Legislature is violating the constitutional rights of our most vulnerable citizens.

  • Government Overreach: Simply put, the state is not your parent. AB 1709 overrides the rights of parents to decide what is best for their own children and, instead, puts the state in charge of young people's digital lives. Instead of supporting digital literacy or privacy-by-design, the state is opting for a one-size-fits-all ban that ignores the individual needs and maturity of young people.

  • Fiscally Reckless During a Budget Crisis: California is wrestling with a massive $18 billion budget deficit. Instead of fixing it, the Legislature wants to fund a brand-new "e-Safety Advisory Commission" to enforce age verification. Lawmakers in support of AB 1709 have already admitted that it is likely to follow the same path as other recent "child safety" laws that were struck down or blocked in court for the same First Amendment and privacy reasons. With AB 1709, taxpayers are being asked to hand over a blank check for millions in legal fees to defend a law that is unconstitutional on its face.

We have been on the ground in the State Capitol fighting this bill in committee. Now, we need you to join the fight and remind them that Californians of all ages deserve better: The California Legislature is not my mom.

19:56

Sergio Talens-Oliag: Developing a Git Worktree Helper with Copilot [Planet Debian]

Over the past few weeks I’ve been developing and using a personal command-line tool called gwt (Git Worktree) to manage Git repositories using worktrees. This article explains what the tool does, how it evolved, and how I used GitHub Copilot CLI to develop it (in fact the idea of building the script was also to test the tool).

The Problem: Managing Multiple Branches

I was working on a project with multiple active branches, including orphans; the regular branches are for fixes or features, while the orphans are used to keep copies of remote documents or store processed versions of those documents.

The project also uses a special orphan branch that contains the scripts and the CI/CD configuration to store and process the external documents (it is on a separate branch to avoid mixing its operation with the main project code).

The plan is to trigger a pipeline against the special branch from remote projects to create or update the doc branch for each of them in our git repository; the pipeline retrieves artifacts from the remote projects to get the files and puts them on an orphan branch (initially I added new commits after each update, but I changed the system to use force pushes and keep only one commit, as the history is not really needed).

The original documents have to be changed, so, after ingesting them, we run a script that modifies them and adds or updates another branch with the processed version; the contents of that branch are used by the main branch build process (there we use git fetch and git archive to retrieve its contents).

When working on the scripts to manage the orphan branches I discovered the worktree feature of git, a functionality that allows me to keep multiple branches checked out in parallel using a single .git folder, removing the need to use git switch and git stash when changing between branches (until now I’ve been a heavy user of those commands).

Reading about it I found that a lot of people use worktrees with the help of a wrapper script to simplify the management. After looking at one or two posts and the related scripts I decided to create my own using a specific directory structure to simplify things.

That’s how I started to work on the gwt script; as I also wanted to test copilot I decided to build it with its help (I have a pro license at work and wanted to play with the cli version instead of the editor-integrated one, as I didn’t want to learn a lot of new keyboard shortcuts).

The gwt Philosophy: Opinionated and Transparent

gwt enforces a simple, filesystem-visible model:

  • Exactly one bare repository named bare.git (treated as an implementation detail)
  • One worktree directory per branch where the directory name matches the branch name
  • Single responsibility: gwt doesn’t try to be a general git wrapper; it only handles operations that map cleanly to this layout

The repository structure looks like this:

my-repo/
+-- bare.git/           # the Git repository (internal)
+-- main/               # worktree for branch "main"
+-- feature/api/        # worktree for branch "feature/api"
+-- fix/docs/           # worktree for branch "fix/docs"
+-- orphan-history/     # worktree for the "orphan-history" branch
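The same layout can be assembled with plain git worktree commands (a sketch with throwaway paths; gwt automates these steps and adds its own safeguards):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# Stand-in for a remote repository.
git init -q origin-repo
git -C origin-repo -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "initial commit"
branch=$(git -C origin-repo symbolic-ref --short HEAD)   # main or master

# One bare repo named bare.git, one worktree directory per branch.
git clone -q --bare origin-repo my-repo/bare.git
cd my-repo
git -C bare.git worktree add "../$branch" "$branch"
git -C bare.git worktree add -b feature/api ../feature/api "$branch"

ls -d "$branch" feature/api
```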

The tool follows five core design principles:

  1. Explicit over clever: Git commands are not hidden or reinterpreted
  2. Transparent execution: Every operation is printed before it happens
  3. Safe, preview-first operations: Destructive commands default to preview, confirmation, then apply
  4. Shell-agnostic core: The script never changes the caller’s working directory (shell wrappers handle that)
  5. Opinionated but minimal: Only commands that fit the layout model are included

Core Commands

The script provides these essential commands:

  • gwt init <url> — Clone a repository and set up the gwt layout
  • gwt convert <dir> — Convert an existing Git checkout to the gwt layout
  • gwt add [--orphan] <branch> [<base>] — Create a new worktree (optionally orphaned)
  • gwt remove <branch> — Remove a worktree and unregister it (asks the user to remove the local branch too, useful when removing already merged branches)
  • gwt rename <old> <new> — Rename a branch AND its worktree directory
  • gwt list — List all worktrees
  • gwt default [<branch>] — Get or set the default branch
  • gwt current — Print the current worktree or branch name

Except for init and convert, all of the commands work inside a directory structure that follows the gwt layout; the tool looks for the bare.git folder to find the root folder of the structure.

As I don’t want to hide which commands are really used by the wrapper, all git and filesystem operations pass through a single run shell function that prints each command before executing it. This gives complete visibility into what the tool is doing.
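A minimal sketch of such a wrapper (illustrative, not gwt’s exact code):

```shell
# Print every command on stderr before executing it, so the user always
# sees the real git/filesystem operations being performed.
run() {
  printf '+ %s\n' "$*" >&2
  "$@"
}

run echo "hello from run"
```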

Also, destructive operations (remove, rename) default to preview mode:

$ gwt remove feature-old --dry-run

+ git -C bare.git branch -d feature-old
+ git -C bare.git worktree remove feature-old/

Apply these changes? [y/N]:

The user sees exactly what will happen, can verify it’s correct, and only then confirm execution.
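The preview-then-confirm flow boils down to a pattern like this (a sketch; gwt’s real implementation differs):

```shell
# Print the command, ask for confirmation, and only execute on "y".
confirm_run() {
  printf '+ %s\n' "$*"
  printf 'Apply these changes? [y/N]: '
  read -r answer
  case $answer in
    [yY]*) "$@" ;;
    *)     echo "aborted" ;;
  esac
}

# Answering anything but "y" leaves the system untouched:
printf 'n\n' | confirm_run rm -rf some-worktree
```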

Incremental Development with Copilot

The gwt script grew from 597 lines in its original version (git-wt) to 1,111 lines by the time I wrote the first draft of this post.

This growth happened through incremental, test-driven development, with each feature being refined based on real usage patterns.

What follows is a little history of the script evolution written with the help of git log.

Initial version

First I wrote a design document and asked copilot to create the initial version of the git-wt script with the original core commands.

I started to use the tool with a remote repository (I made copies of the branches in some cases to avoid losing work) and fixed bugs (trivial ones with neovim, larger ones by asking copilot to fix the issues for me, so I had less typing to do).

Note:

As I used copilot I noticed that when you make manual changes it is important to tell the tool about them, otherwise it gets confused and sometimes tries to remove manual changes.

First command update

One of the first commands I had to enhance was rename:

  • as I normally use branches with / in their names, and the tool checks out worktrees using the branch name as the path inside the gwt root folder (i.e. a fix/rename branch creates the fix directory and checks out the branch inside the fix/rename folder), the rename command had to clean up the empty parent directories
  • when renaming a worktree we move the folders and fix the references using the worktree repair command to make things work locally. The rename also affects the remote branch reference, so to avoid surprises the command unsets it; the branch can then be pushed again under the new name (of course, the user is responsible for managing the old remote branch, as gwt can’t guess what it should do with it).
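The empty-parent cleanup can be done with rmdir -p, which removes each now-empty ancestor and stops at the first non-empty directory (a sketch with throwaway paths, not gwt’s actual code):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
mkdir -p fix/rename                # worktree directory for branch "fix/rename"
mv fix/rename renamed-worktree     # the rename moved the worktree elsewhere
rmdir -p fix 2>/dev/null || true   # remove the now-empty "fix" parent
ls
```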

Integration with the shell

As I use zsh with the Powerlevel10k theme I asked copilot to help me add visual elements to the prompt when working with gwt folders, something that I would have never tried without help, as it would have required a lot of digging on my part on how to do it, as I never looked into it.

The initial version of the code lived in an independent file that I sourced from my .zshrc. It prints a segment on the right side of the prompt when we are inside a gwt folder (if the folder is a worktree, the existing git integration text appears right before it, so we keep the previous behavior and also see that it is a gwt-friendly repo). When we are in the root folder or the bare.git folder, it shows gwt or bare respectively (I added that text because there are no git prompts in those folders).

I also asked copilot to create zsh autocompletion functions (I only use zsh, so I didn’t add autocompletion for other shells). The good thing here is that I wouldn’t have done that manually, as it would have required some reading to get it right, but the output of copilot worked and I can update things using it or manually if I need to.

One thing I was missing from the script was the possibility of changing the working directory easily, so I wrote a gwt wrapper function for zsh that intercepts commands that require shell cooperation (changing the working directory) and delegates everything else to the core script.

Currently the function supports the following enhanced commands:

  • cd [<branch>]: change into a worktree or the default one if missing
  • convert <dir>: convert a checkout, then cd into the initial worktree
  • add [--orphan] <branch> [<base>]: create a worktree, then cd into it on success
  • rename <old> <new>: rename a worktree, then cd into it if we were inside it

Note that the cd command will not work on other shells or if the user does not load my wrapper, but the rest will still work without the working directory changes.
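The interception pattern looks roughly like this (illustrative names; the real gwt wrapper is zsh-specific and delegates to the gwt script, for which "wt_core" stands in here):

```shell
# The wrapper function handles the one thing a child process cannot do:
# change the caller's working directory. Everything else is delegated.
wt_core() { echo "core handled: $*"; }

wt() {
  case $1 in
    cd) cd "$2" && echo "now in $PWD" ;;  # needs shell cooperation
    *)  wt_core "$@" ;;                   # plain delegation
  esac
}

wt list
d=$(mktemp -d)
wt cd "$d"
```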

Renaming the command

As I felt that git-wt was a long name I renamed the tool to gwt. I could have done it by hand, but with copilot I didn’t have to review all the files myself, and it did it right. (Note that I have it configured to always ask before making changes, as it sometimes tries to do something I don’t want and I like to check its changes. Since the files are in git repos, I manually add the files when I like the state; if the cli output is not clear I let it apply the change and check the effects with git diff, so I can validate or revert what was done.)

The convert command

After playing with one repo I added the convert subcommand for migrating existing checkouts. It seemed a simple task at first, but it took multiple iterations to get right, as I found multiple issues while testing (in fact I made copies of the existing checkouts so I could re-test each update, as some of the iterations broke them).

The version of the function when this post was first edited had the following comment explaining what it does:

# ---------------------------------------------------------------------------
# convert - convert an existing checkout into the gwt layout
# ---------------------------------------------------------------------------
#
# Must be run from the parent directory of <dir>.
#
# Steps:
#   1. Read branch from the checkout's HEAD
#   2. Rename <dir> to <dir>.wt.tmp (sibling, same filesystem)
#   3. Create <dir>/ as the new gwt root
#   4. Move <dir>.wt.tmp/.git to <dir>/bare.git; set core.bare = true
#   5. Fix fetch refspec (bare clone default maps refs directly, no remotes/)
#   6. Add a --no-checkout worktree so git wires up the metadata and
#      creates <dir>/<branch>/.git (the only file in that dir)
#   7. Move that .git file into the real working tree (<dir>.wt.tmp)
#   8. Remove the now-empty placeholder directory
#   9. Move the real working tree into place as <dir>/<branch>
#  10. Reset the index to HEAD so git status is clean
#      (--no-checkout leaves the index empty)
#  11. Create <dir>/.git -> bare.git symlink so plain git commands work
#      from the root without --git-dir
#
# The .git file ends up at the same absolute path git recorded in step 5,
# so no worktree repair is needed. Working tree files are never modified.

The .git link was added when I noticed that I could run commands that don’t need the checked out files on the root of the gwt structure, which is handy sometimes (i.e. a git fetch or a git log, that shows the log of the branch marked as default).
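The trick is just a symlink at the root; a self-contained sketch on a throwaway repo (paths are illustrative):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q src
git -C src -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "initial commit"
git clone -q --bare src my-repo/bare.git
cd my-repo
ln -s bare.git .git    # plain git commands now work from the gwt root
git log --oneline -1   # no --git-dir needed
```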

After playing with commands that used the bare.git folder I updated the init and convert commands to keep the origin refs, ensuring that the remote tracking works correctly.

Improving the add command

While playing with the tool on more repos I noticed that I also had to enhance the add command to better handle worktree creation, depending on my needs.

Right now the tool supports the following use cases:

  • if the branch exists locally or on origin, it just checks it out.
  • if the branch does not exist, we create it using the given base branch or, if no base is given, the current worktree (if we are in the root folder or bare.git the command fails).
  • as I needed it for my project, I added a --orphan option to be able to create orphan branches directly.

Moving to a single file

Eventually I decided to make the tool self-contained; I removed the design document (moving its content to comments at the top of the script and the details to comments on each function definition) and added a pair of commands to print the code to source for the p10k and zsh integration (autocompletion & functions), leaving everything in a single file.

Now my .zshrc file adds the following to source both things:

# After loading the p10k configuration
if type gwt >/dev/null 2>&1; then
  source <(gwt p10k)
fi
[...]
# After loading autocompletion
if type gwt >/dev/null 2>&1; then
  source <(gwt zsh)
fi

Versioning

As I kept modifying the script I found it convenient to use CalVer-based versioning (the version variable has the format YYYY.mm.dd-r#), so I added a subcommand to show its value or bump it using the current date and computing the right revision number.
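The bump logic amounts to something like this (an illustrative sketch, not gwt’s actual code):

```shell
# Bump a YYYY.mm.dd-r# version: the same day increments the revision,
# a new day resets it to r1.
bump_version() {
  today=$(date +%Y.%m.%d)
  case $1 in
    "$today"-r*) rev=$(( ${1##*-r} + 1 )) ;;  # same day: next revision
    *)           rev=1 ;;                     # new day: start at r1
  esac
  echo "$today-r$rev"
}

bump_version "2026.04.23-r2"   # on any later day, prints today's date with -r1
```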

About the use of copilot

Although I’ve never been a fan of AI tools I have to admit that the copilot CLI has been very useful for building the tool:

  • Rapid prototyping: Each commit represented a small feature or fix that I could implement, test immediately in my actual workflow, and iterate on based on the result
  • Edge case handling: Rather than trying to anticipate every scenario upfront, I could ask Copilot how to handle edge cases as they appeared in real usage
  • Script refinement: Questions like "how do I clean up empty directories after a rename" or "how do I detect if I’m inside a specific worktree" were quickly answered with working code
  • Shell integration: The Zsh wrapper and completion system grew from simple prototypes to sophisticated features, with each iteration informed by how I actually used the tool

For example, the convert command started as a simple rename operation, but evolved to also create a .git symlink and intelligently handle various migration scenarios—all because I used it repeatedly and refined the implementation each time.

Self-Contained and Opinionated

gwt is deliberately opinionated:

  • Zsh & Powerlevel10k Integration: The tool includes built-in Zsh shell integration, accessed via source <(gwt zsh) and supports adding a prompt segment when using p10k, as described earlier.
  • Directory Structure: The bare.git directory name is non-negotiable. This is how gwt discovers the repository root from any subdirectory, and how the tool knows whether a directory is a gwt repository. The simplicity of this marker means the discovery mechanism is foolproof and requires no configuration.
  • No Configuration Files: gwt deliberately has no configuration. There are no .gwtrc files or config directories. This makes it portable; the tool works the same way everywhere, and repositories can be shared across systems without synchronizing configuration.

From Script to System

What started as a small helper script for managing worktrees has become a complete system:

  1. Core script (gwt): 1,111 lines of pure shell, no external dependencies
  2. Shell integration: Zsh functions and completions
  3. Prompt integration: Powerlevel10k segment
  4. Documentation: Built-in help and design philosophy documentation

The script is self-contained: everything needed for the tool to work is in a single file.

This makes it trivial to update (just replace the script) or audit (no hidden dependencies).

Development with AI support

Developing gwt with copilot taught me some things:

  • Incremental refinement works well for small tools: Each iteration informed the next, resulting in a tool that handles real use cases elegantly
  • Transparency is a feature: Making operations visible builds confidence and is easier to debug
  • Opinionated tools can be powerful: By constraining the problem space (one bare repo, one worktree per branch), the solution becomes simpler and more robust
  • Shell integration matters: The same core commands are easier to use when they can automatically change directories and provide completions
  • Real-world testing is essential: I wouldn’t have discovered the need for automatic directory cleanup or context-aware cd behavior without actually using the tool daily

What was next?

The tool is stable and handles my daily workflow well, so I expect to keep using it and to fix issues if and when I find them, but I do not plan to add features unless I find a use case that justifies them (i.e. I never added support for some of the worktree subcommands, as it is easier to use the git versions directly if I ever need them).

What really happened

While editing this post I discovered that I needed to add another command to it and fixed a bug (see below).

With those changes and the inclusion of a license and copyright notice (just in case I distribute it at some point), the script is now 1,217 lines long instead of the 1,111 it had when I started to write this entry.

Submodule Support

When I converted this blog repository to the gwt format and tried to preview the post using docker compose, it failed because the worktree I was on didn’t have the Git submodule initialized.

My blog theme is included on the repository as a submodule, and when I used gwt to check out different branches in worktrees, the submodule was not initialized in the new worktrees.

This led me to add a new internal function and a gwt submodule command to handle submodule initialization; the internal function is called from convert and add (when converting a repo or adding a worktree), while the public command is useful for updating the submodules on existing branches.
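The initialization itself is the standard git invocation, run inside the worktree (a sketch on a throwaway repo, where it is a harmless no-op because there are no submodules):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q demo
# In gwt this runs inside each worktree after "convert" or "add":
git -C demo submodule update --init --recursive
echo "submodules initialized"
```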

Path Handling with Branch Names Containing Slashes

The second discovery was a bug in how the tool handled branch names containing slashes (e.g., feature/new-api, docs/user-guide). Worktree directories are created with the branch name as the path, so a branch like feature/new-api creates two nested folders (feature, with new-api inside it).

However, there was a mismatch in how the zsh wrapper function resolved worktree paths (initially it used shell parameter expansion, i.e. rel="${cwd#"$REPO_ROOT"/}"), versus how the core script calculated them, causing the cd command to fail or navigate to the wrong location when branch names contained slashes.

The fix involved ensuring consistent path resolution throughout the script and wrapper (now it uses a function that processes the git worktree list output), so that gwt cd feature/new-api correctly navigates to the worktree directory regardless of path depth.
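Deriving the branch-to-path mapping from the porcelain output can be sketched like this (illustrative, not gwt’s exact code; shown on a throwaway repo):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q repo
git -C repo -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m init
git -C repo worktree add -q -b feature/new-api ../feature-new-api

# Map each branch to its worktree path from the porcelain output:
git -C repo worktree list --porcelain | awk '
  /^worktree / { path = $2 }
  /^branch /   { sub("refs/heads/", "", $2); print $2 " -> " path }'
```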

Conclusion

gwt is a tool that solves a real problem: managing multiple Git branches simultaneously without context-switching overhead.

I’m sure I’m going to keep using it for my projects, as it simplifies some workflows; I’ll still use switch and stash in some cases, but I like working with multiple worktrees in parallel.

In fact I converted this blog repository checkout to the gwt format to work on a separate branch as it felt the right approach even if I’m the only one using the repo now, and it helped me improve the tool, as explained before.

Also, it was a good example of how to use AI tools like copilot to develop a simple tool and keep it evolving while using it.

In any case, although I find copilot useful and it has saved me time, I don’t trust it to work without supervision: it worked well, but it got stuck at times and didn’t do things the way I wanted on multiple occasions.

I also have an additional problem now …​ I’ve been reading about it, but I don’t really know which models to use or how the premium requests are computed (I’ve only been playing with it since last month and I ran out of requests the last day of the month on purpose, just to see what happened …​ it stops working …​ ;).

On my work machine I’ve been using a specific user account with a GitHub Copilot Business subscription and I only used the Anthropic Claude Sonnet 4.6 model and with my personal account I configured the Anthropic Claude Haiku 4.5 model, but I’ve only used that to create the initial draft of this post (I ended up rewriting most of it manually anyway) and to review the final version (I’m not a native speaker and it was useful for finding typos and improving the style in some parts).

I guess I’ll try other models with copilot in the future and check other command line tools like aider or claude-code, but probably only using free accounts unless I get a paid account at work, as I have with GitHub Copilot.

To be fair, what I would love to be able to do is use local models (aider can do it), but the machines I have are not powerful enough. I tried to run a simple test and it felt really slow, but when I have the time or the need I’ll try again, just in case.

19:35

Ubuntu 26.04 LTS released [LWN.net]

Ubuntu 26.04 ("Resolute Raccoon") LTS has been released on schedule.

This release brings a significant uplift in security, performance, and usability across desktop, server, and cloud environments. Ubuntu 26.04 LTS introduces TPM-backed full-disk encryption, expanded use of memory-safe components, improved application permission controls, and Livepatch support for Arm systems, helping reduce downtime and strengthen system resilience. [...]

The newest Edubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Cinnamon, Ubuntu Kylin, Ubuntu Studio, Ubuntu Unity, and Xubuntu are also being released today. For more details on these, read their individual release notes under the Official flavors section:

https://documentation.ubuntu.com/release-notes/26.04/#official-flavors

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, Ubuntu WSL, and Ubuntu Core. All the remaining flavors will be supported for 3 years.

See the release notes for a list of changes, system requirements, and more.

17:35

Another crash caused by uninstaller code injection into Explorer [The Old New Thing]

Some time ago, I noted that any sufficiently advanced uninstaller is indistinguishable from malware.¹

During one of our regular debugging chats, a colleague of mine mentioned that he was looking at a mysterious spike in Explorer crashes. He showed me one of the dumps, and as soon as I saw the register dump, I said, “Oh, I bet it’s a buggy uninstaller.”

The tell-tale sign: It’s a crash in 32-bit Explorer on a 64-bit system.

The 32-bit version of Explorer exists for backward compatibility with 32-bit programs. This is not the copy of Explorer that is handling your taskbar or desktop or File Explorer windows. So if the 32-bit Explorer is running on a 64-bit system, it’s because some other program is using it to do some dirty work.

But out of curiosity, I went to look at why this particular version of the buggy uninstaller was crashing.

This particular uninstaller’s injected code had a loop where it tried to do some file operations, and if they failed, it paused for a little bit and then tried again. However, the author of the code failed to specify the correct calling convention on the functions, so instead of calling them with the __stdcall calling convention, it called them with the __cdecl calling convention. In the __stdcall calling convention, the callee pops the parameters from the stack, but in the __cdecl calling convention, the caller pops them.

This calling convention mismatch means that each time the code calls a Windows function, the code pushes parameters onto the stack, the Windows function pops them, and then the calling code pops them again. Therefore, each time through the loop, the code eats away at its own stack.

Apparently, this loop iterated a lot of times, because it had eaten up its entire stack, and the stack pointer had incremented all the way into its injected code. Each time through the loop, a little bit more of the injected code was being encroached by the stack, until the stack pointer found itself inside the code being executed.
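To get a feel for the scale: assuming three 4-byte parameters that get popped twice per call, each iteration leaks 12 bytes, so even a 1 MiB thread stack survives fewer than ninety thousand iterations (illustrative numbers, not taken from the actual dump):

```shell
# 1 MiB stack / 12 leaked bytes per iteration (3 DWORD args popped twice):
echo $(( 1048576 / 12 ))   # → 87381
```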

The code then crashed on an invalid instruction because the code no longer existed. It had been overwritten by stack data.

This left an ugly corpse behind, and so many of them that the Windows team thought that it was caused by a bug in Windows itself.

¹ The title is a reference to Clarke’s Third Law: Any sufficiently advanced technology is indistinguishable from magic.

The post Another crash caused by uninstaller code injection into Explorer appeared first on The Old New Thing.

15:28

Link [Scripting News]

I've written about Firefox many times over the last 20 years or so.

15:07

[$] Famfs, FUSE, and BPF [LWN.net]

The famfs filesystem first showed up on the mailing lists in early 2024; since then, it has been the topic of regular discussions at the Linux Storage, Filesystem, Memory Management and BPF (LSFMM+BPF) Summit. It has also, as a result of those discussions, been through some significant changes since that initial posting. So it is not surprising that a suggestion that it needed to be rewritten yet again was not entirely well received. How much more rewriting will actually be needed is unclear, but more discussion appears certain.

14:21

Security updates for Thursday [LWN.net]

Security updates have been issued by AlmaLinux (kernel and osbuild-composer), Debian (cpp-httplib, firefox-esr, gimp, and packagekit), Fedora (chromium, composer, libcap, pgadmin4, pie, python3-docs, python3.14, and sudo), Mageia (gvfs), Oracle (.NET 8.0, delve, freerdp, giflib, ImageMagick, kernel, OpenEXR, and osbuild-composer), SUSE (erlang, giflib, google-guest-agent, GraphicsMagick, ignition, imagemagick, kea, kernel, kissfft, libraw, libssh, ocaml-patch, opam, openCryptoki, openexr, openssl-1_1, tomcat, tomcat10, tomcat11, and tor), and Ubuntu (linux, linux-aws, linux-aws-5.4, linux-azure, linux-gcp, linux-gcp-5.4, linux-hwe-5.4, linux-ibm, linux-ibm-5.4, linux-iot, linux-kvm, linux-oracle, linux-oracle-5.4, linux-xilinx-zynqmp, linux-aws, linux-aws-6.17, linux-hwe-6.17, linux-oracle, linux-oracle-6.17, linux-azure, linux-intel-iotg, linux-intel-iotg-5.15, linux-kvm, linux-oracle-5.15, linux-azure-5.4, linux-azure-fips, linux-fips, linux-aws-fips, linux-azure-fips, linux-gcp-fips, linux-hwe-6.8, linux-ibm-6.8, linux-raspi, linux-oracle, linux-oracle-6.8, linux-raspi, linux-raspi-5.4, linux-raspi-realtime, packagekit, python-tornado, ruby-rack-session, slurm-llnl, and strongswan).

13:56

CodeSOD: Tune Out the Static [The Daily WTF]

Henrik H (previously) sends us a simple representative C# line:

static void GenerateCommercilaInvoice()

This is a static method which takes no parameters and returns nothing. Henrik didn't share the implementation, but this static function likely does something that involves side effects, maybe manipulating the database (to generate that invoice?). Or, possibly worse, it could be doing something with some global or static state. It's all side effects and no meaningful controls, so enjoy debugging that when things go wrong. Heck, good luck testing it. Our best case possibility is that it's just a wrapper around a call to a stored procedure.

This method signature is basically a commercila for refactoring.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

13:35

Pluralistic: The (other) problem with automatic conversion of free software to proprietary software (23 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



The surface of Mars. In the foreground are a gnu and a giant pump-magazine killer robot whose head is being piloted by Tux the penguin. At their feet lies a dead robot, its head smashed in.

The (other) problem with automatic conversion of free software to proprietary software (permalink)

Here's an interesting stunt: a project called Malus.sh will take your money, and in exchange, it will ingest any free/open source code you want, refactor that code using an LLM, and spit out a "clean room" version that is freed from all the obligations imposed by the original project's software license:

https://www.404media.co/this-ai-tool-rips-off-open-source-software-without-violating-copyright/?ref=daily-stories-newsletter

Malus was co-created by Mike Nolan, who "researches the political economy of open source software and currently works for the United Nations." Nolan told 404 Media's Emanuel Maiberg that he shipped Malus as a real, live-fire business that will exchange money for an AI service that destroys the commons as a way to alert the free software movement to a serious danger.

As Maiberg writes, Malus relies on a legal precedent set in 1982, in which IBM brought a copyright suit against a small upstart called Columbia Data Products for reverse-engineering an IBM software product. IBM's argument was that Columbia must have copied its code – the copyrightable part of a work of software – in order to reimplement the functionality of that code. Functions aren't copyrightable: copyright protects creative expressions, not the ideas that inspire those expressions. The idea of a computer program that performs a certain algorithm is not copyrightable, but the code that turns that idea into a computer program is copyrightable.

Columbia's successful defense against IBM involved using a "clean room" in which two isolated teams collaborated on the reimplementation. The first team examined the IBM program and wrote a specification for another program that would replicate its functionality. The second team received the specification and turned it into a computer program. The first team did handle IBM software, but they did not create a new work of software. The second team did create a new work of software, but they never handled any IBM code.

This is the model for Malus: it pairs two LLMs, the first of which analyzes a free software program and prepares a specification for a program that performs the identical function. The second program receives that specification and writes a new program.

The Malus FAQ performs a "be as evil as possible" explanation for the purpose of this exercise:

Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems.

This business about "attribution" and "copyleft" is a reference to the terms imposed by some free software licenses. The purpose of free software is to create a commons of user-inspectable, user-modifiable software that anyone can use, improve, and distribute. To achieve this, many free software licenses impose obligations on the people who distribute their code: you are allowed to take the code, improve the code, give it away or sell it, but you have to let other people do the same.

Typically, you have to inform people when there's free software in a package you've distributed (attribution) and supply them with the "source code" (the part that humans read and write, which is then "compiled" into code that a computer can use) on demand, so they can make their own changes. This system of requiring other people to share the things they make out of the code you share with them is sometimes called "copyleft," because it uses copyright, which is normally a system for restricting re-use to require people not to restrict that use.

Companies love to use free software, but they don't like to share free software. Companies like Vizio raid the commons for software that is collectively created and maintained, then simply refuse to live up to their end of the bargain, violating the license terms and (incorrectly) assuming no one will sue them:

https://pluralistic.net/2021/10/20/vizio-vs-the-world/#dumbcast

Malus's promise, then, is that you can pay them to create fully functional reimplementations of any free/open source software package that your company can treat as proprietary, without any obligations to the commons. You won't even have to attribute the original software project that you knocked off!

This is the risk that Nolan and his partner are trying to awaken the free/open source community to: that our commons is about to be raided by selfish monsters who serve as gut-flora for the immortal colony organisms we call "limited liability corporations," who will steal everything we've built and destroy the social contract we live by.

This is a real problem, but not because of AI. We already have this situation, and it's really bad. Most of the foundational free software projects were created under older licenses that did not contemplate cloud computing and software as a service. The "copyleft" obligations of these licenses are triggered by the distribution of the software – that is, when I send you a copy of the code.

But cloud services don't have to send you the code: when you run Adobe Creative Cloud or Google Docs, the most important code is all resident on corporate servers, and never sent to you, which means that you are not entitled to a copy of the new software that has been built atop of our commons. In other words, big companies have "software freedom" (the freedom to use, modify and improve software) and we've got "open source" (the impoverished right to look at the versions of these packages that are sitting on services like Github – itself a division of Microsoft):

https://mako.cc/copyrighteous/libreplanet-2018-keynote

Then there's "tivoization," a tactic for stealing from the commons that wasn't quite invented by Tivo, though they were one of its most notorious abusers. Tivoization happens when you distribute free software as part of a hardware device, then use "digital locks" (sometimes called "technical protection measures") to prevent the owner of this device from running a modified version of the code. With tivoization, I can sell you a device running free software and I can comply with the license by giving you the code, but if you change the code and try to get the device to run it, it will refuse. What's more, "anti-circumvention" laws like Section 1201 of the US Digital Millennium Copyright Act make it a felony to tamper with these digital locks, so it becomes a crime to use modified software on your own device:

https://pluralistic.net/2026/03/16/whittle-a-webserver/#mere-ornaments

There's no question that the tech industry would devour the free software commons if they were allowed to, and the AI threat that Nolan raises with Malus seems alarming, but while there's something to worry about there, I think the risk is being substantially overstated.

That's because copyleft licenses – and indeed, all software licenses – are copyright licenses, and software written by AI is not eligible for a copyright, because nothing made by AI is eligible for copyright:

https://pluralistic.net/2026/03/03/its-a-trap-2/#inheres-at-the-moment-of-fixation

Copyright is awarded solely to works of human authorship. This fact has been repeatedly affirmed by the US Copyright Office, which has fought appeals of this principle all the way to the Supreme Court, which declined to hear the case. That's because the principle that copyright is strictly reserved for human creativity isn't remotely controversial in legal circles. This is just how copyright works.

Which means that the "be evil" version of Malus's business model has a fatal flaw. While the code that Malus produces is indeed "legally distinct" with "no attribution" and "no copyleft," it's not true that there are "no problems." That's because Malus's code doesn't have "corporate-friendly licensing." Far from it: Malus's code has no licensing, because it is born in the public domain and cannot be copyrighted.

In other words, if you're a corporation hoping to use Malus to knock off a free software project so that you can adapt it and distribute it without having to make your modifications available, Malus's code will not suit your needs. If you give me code that Malus produced, you can't stop me from doing anything I want with it. I can sell it. I can give it away. I can make a competing product that reproduces all of your code and sell it at a 99% discount. There's nothing you can do to stop me, any more than you could stop me from giving away the text of a Shakespeare play you sold me. You can't stick a license agreement or terms of service between me and the product that binds me to pretend that your public domain software is copyrighted – that's also not allowed under copyright.

Does that mean that Malus is a meaningless stunt? No, because this automated reimplementation does create some risks to our software commons. A troll who doesn't care about selling software could clone every popular free software project and make public domain versions that would be confusing and maybe demoralizing. Combining these clean-room reimplementations with cloud software or tivoization could create hybrid forms of commons-enclosure that are more virulent than the current strains.

But reimplementation itself is not a risk to free software. Reimplementation is the bedrock of free software. GNU/Linux itself is a reimplementation of AT&T Unix. Free software authors re-implement each other's code all the time, often because they think the license the original code was released under sucks. Literally the coolest free software thing I've seen in the past 12 months included a reimplementation of Raspberry Pi's PIO module to escape from its bullshit patent encumbrances:

https://youtu.be/BbWWGkyIBGM?si=vO5zLH3OG5JLW7OP&amp;t=2253

Reimplementation is good, actually. And honestly, if corporations are foolish enough to reimplement their code using an LLM, and in so doing, create a vast new commons of public domain software, well, that's not exactly the freesoftwarepocalypse, is it?

(Image: Muhammad Mahdi Karim, GNU FDL; modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago PimpMySnack: homemade, gigantic versions of snack food https://web.archive.org/web/20060421034050/http://www.pimpmysnack.com/gallery.php

#20yrsago Thieves discover abandoned Soviet missile silo full of cash https://web.archive.org/web/20060411021047/http://www.mosnews.com/news/2006/03/07/moneyfound.shtml

#15yrsago Victorian house’s facade converted to a folding garage-door https://web.archive.org/web/20110423213819/https://www.blog.beausoleil-architects.com/2011/03/architectural-magic.html

#15yrsago Xerox’s first successful copier burst into flame so often it came with a fire-extinguisher https://en.wikipedia.org/wiki/Xerox_914

#15yrsago MPAA: “democratizing culture is not in our interest” https://torrentfreak.com/mpaa-democratizing-culture-is-not-in-our-interest-110420/

#15yrsago Mail Rail: London’s long-lost underground postal railroad https://web.archive.org/web/20110805130854/http://www.silentuk.com/?p=2792

#10yrsago Kindle Unlimited is being flooded with 3,000-page garbage books that suck money out of the system https://web.archive.org/web/20160421055052/https://consumerist.com/2016/04/20/amazon-unintentionally-paying-scammers-to-hand-you-1000-pages-of-crap-you-dont-read/

#10yrsago America’s wealth gap has created an ever-increasing longevity gap https://www.counterpunch.org/2016/04/21/the-death-gap/

#10yrsago Why is Congress so clueless about tech? Because they fired all their experts 20 years ago https://www.wired.com/2016/04/office-technology-assessment-congress-clueless-tech-killed-tutor/

#10yrsago Why Internet voting is a terrible idea, explained in small words anyone can understand https://www.youtube.com/watch?v=abQCqIbBBeM

#10yrsago VW offers to buy back 500K demon-haunted diesels https://www.reuters.com/article/us-volkswagen-emissions-usa-idUSKCN0XH2CX/?feedType=RSS&amp;feedName=topNews

#10yrsago Printer ink wars may make private property the exclusive domain of corporations https://www.eff.org/deeplinks/2016/04/eff-asks-supreme-court-overturn-dangerous-ruling-allowing-patent-owners-undermine

#5yrsago Some thoughts on GWB's call for truth in politics https://pluralistic.net/2021/04/21/re-identification/#seriously-fuck-that-guy

#5yrsago What's wrong with EU's trustbusters https://pluralistic.net/2021/04/21/re-identification/#eu-antitrust

#5yrsago Hawley and Taylor Greene faked their donor-surge https://pluralistic.net/2021/04/21/re-identification/#jan-6-fraud

#5yrsago The Observatory of Anonymity https://pluralistic.net/2021/04/21/re-identification/#pseudonymity

#1yrago Trump's FTC opens the floodgates for tariff profiteering https://pluralistic.net/2025/04/21/trumpflation/#andrew-ferguson


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers..

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

13:14

Behavioral Credentials: Why Static Authorization Fails Autonomous Agents [Radar]

Enterprise AI governance still authorizes agents as if they were stable software artifacts.
They are not.

An enterprise deploys a LangChain-based research agent to analyze market trends and draft internal briefs. During preproduction review, the system behaves within acceptable bounds: It routes queries to approved data sources, expresses uncertainty appropriately in ambiguous cases, and maintains source attribution discipline. On that basis, it receives OAuth credentials and API tokens and enters production.

Six weeks later, telemetry shows a different behavioral profile. Tool-use entropy has increased. The agent routes a growing share of queries through secondary search APIs not part of the original operating profile. Confidence calibration has drifted: It expresses certainty on ambiguous questions where it previously signaled uncertainty. Source attribution remains technically accurate, but outputs increasingly omit conflicting evidence that the deployment-time system would have surfaced.

The credentials remain valid. Authentication checks still pass. But the behavioral basis on which that authorization was granted has changed. The decision patterns that justified access to sensitive data no longer match the runtime system now operating in production.

Nothing in this failure mode requires compromise. No attacker breached the system. No prompt injection succeeded. No model weights changed. The agent drifted through accumulated context, memory state, and interaction patterns. No single event looked catastrophic. In aggregate, however, the system became materially different from the one that passed review.

Most enterprise governance stacks are not built to detect this. They monitor for security incidents, policy violations, and performance regressions. They do not monitor whether the agent making decisions today still resembles the one that was approved.

That is the gap.

The architectural mismatch

Enterprise authorization systems were designed for software that remains functionally stable between releases. A service account receives credentials at deployment. Those credentials remain valid until rotation or revocation. Trust is binary and relatively durable.

Agentic systems break that assumption.

Large language models vary with context, prompt structure, memory state, available tools, prior exchanges, and environmental feedback. When embedded in autonomous workflows that chain tool calls, retrieve from vector stores, adapt plans based on outcomes, and carry forward long interaction histories, they become dynamic systems whose behavioral profiles can shift continuously without triggering a release event.

This is why governance for autonomous AI cannot remain an external oversight layer applied after deployment. It has to operate as a runtime control layer inside the system itself. But a control layer requires a signal. The central question is not simply whether the agent is authenticated, or even whether it is policy compliant in the abstract. It is whether the runtime system still behaves like the system that earned access in the first place.

Current governance architectures largely treat this as a monitoring problem. They add logging, dashboards, and periodic audits. But these are observability layers attached to static authorization foundations. The mismatch remains unresolved.

Authentication answers one question: What workload is this?

Authorization answers a second: What is it allowed to access?

Autonomous agents introduce a third: Does it still behave like the system that earned that access?

That third question is the missing layer.

Behavioral identity as a runtime signal

For autonomous agents, identity is not exhausted by a credential, a service account, or a deployment label. Those mechanisms establish administrative identity. They do not establish behavioral continuity.

Behavioral identity is the runtime profile of how an agent makes decisions. It is not a single metric, but a composite signal derived from observable dimensions such as decision-path consistency, confidence calibration, semantic behavior, and tool-use patterns.

Decision-path consistency matters because agents do not merely produce outputs. They select retrieval sources, choose tools, order steps, and resolve ambiguity in patterned ways. Those patterns can vary without collapsing into randomness, but they still have a recognizable distribution. When that distribution shifts, the operational character of the system shifts with it.

Confidence calibration matters because well-governed agents should express uncertainty in proportion to task ambiguity. When confidence rises while reliability does not, the problem is not only accuracy. It is behavioral degradation in how the system represents its own judgment.
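
One way to make that degradation measurable, sketched here in Python purely as an illustration (the metric choice and the sample interpretation are assumptions, not anything the essay prescribes), is expected calibration error: bin the agent's stated confidences and compare each bin's average confidence to its empirical accuracy.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE sketch: bin predictions by stated confidence,
    then weight each bin's |avg confidence - accuracy| gap by its size.
    `confidences` are floats in [0, 1]; `correct` are booleans."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece
```

A well-calibrated agent scores near zero; an agent whose stated certainty rises while its reliability does not will show a growing gap, which is exactly the drift signature described above.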

Tool-use patterns matter because they reveal operating posture. A stable agent exhibits characteristic patterns in when it uses internal systems, when it escalates to external search, and how it sequences tools for different classes of task. Rising tool-use entropy, novel combinations, or expanding reliance on secondary paths can indicate drift even when top-line outputs still appear acceptable.
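
Rising tool-use entropy and expanding secondary paths can be collapsed into a single drift number, for instance as the Jensen-Shannon divergence between the approved baseline's tool-call distribution and a live window of calls. This is an illustrative sketch; the tool names and the choice of divergence are assumptions, not a real telemetry schema.

```python
import math
from collections import Counter

def _kl(p, q):
    # Kullback-Leibler divergence in bits; skips zero-probability terms.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(baseline_calls, live_calls):
    """Jensen-Shannon divergence (bits) between two tool-call samples.
    0 means identical usage patterns; 1 means fully disjoint tool sets."""
    tools = sorted(set(baseline_calls) | set(live_calls))
    def dist(calls):
        counts = Counter(calls)
        return [counts[t] / len(calls) for t in tools]
    p, q = dist(baseline_calls), dist(live_calls)
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)

# Hypothetical tool labels, for illustration only:
baseline = ["internal_db"] * 8 + ["web_search"] * 2
live = ["internal_db"] * 3 + ["web_search"] * 4 + ["secondary_api"] * 3
```

A growing reliance on secondary APIs, as in the scenario at the top of this piece, shows up as a nonzero divergence even while each individual call still looks legitimate.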

These signals share a common property: They only become meaningful when measured continuously against an approved baseline. A periodic audit can show whether a system appears acceptable at a checkpoint. It cannot show whether the live system has gradually moved outside the behavioral envelope that originally justified its access.

What drift looks like in practice

Anthropic’s Project Vend offers a concrete illustration. The experiment placed an AI system in control of a simulated retail environment with access to customer data, inventory systems, and pricing controls. Over extended operation, the system exhibited measurable behavioral drift: Commercial judgment degraded as unsanctioned discounting increased, susceptibility to manipulation rose as it accepted increasingly implausible claims about authority, and rule-following weakened at the edges. No attacker was involved. The drift emerged from accumulated interaction context. The system retained full access throughout. No authorization mechanism checked whether its current behavioral profile still justified those permissions.

This is not a theoretical edge case. It is an emergent property of autonomous systems operating in complex environments over time.

From authorization to behavioral attestation

Closing this gap requires a change in how enterprise systems evaluate agent legitimacy. Authorization cannot remain a one-time deployment decision backed only by static credentials. It has to incorporate continuous behavioral attestation.

That does not mean revoking access at the first anomaly. Behavioral drift is not always failure. Some drift reflects legitimate adaptation to operating conditions. The point is not brittle anomaly detection. It is graduated trust.

In a more appropriate architecture, minor distributional shifts in decision paths might trigger enhanced monitoring or human review for high-risk actions. Larger divergence in calibration or tool-use patterns might restrict access to sensitive systems or reduce autonomy. Severe deviation from the approved behavioral envelope would trigger suspension pending review.
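
The graduated responses described above might be wired up roughly as follows; the tier names and thresholds are invented for illustration, and a real policy engine would derive them from governance review rather than hard-coded constants.

```python
from enum import Enum

class Trust(Enum):
    FULL = "full autonomy"
    MONITOR = "enhanced monitoring; human review for high-risk actions"
    RESTRICT = "sensitive-system access revoked; autonomy reduced"
    SUSPEND = "suspended pending review"

def trust_tier(drift_score, minor=0.05, major=0.20, severe=0.50):
    """Map a behavioral-drift score (0 = indistinguishable from the
    approved baseline) onto graduated trust instead of a binary gate.
    Thresholds are illustrative assumptions."""
    if drift_score < minor:
        return Trust.FULL
    if drift_score < major:
        return Trust.MONITOR
    if drift_score < severe:
        return Trust.RESTRICT
    return Trust.SUSPEND
```

The point of the tiers is that legitimate adaptation lands in MONITOR, where a human can decide whether the new behavior should become the new baseline, rather than tripping an immediate revocation.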

This is structurally similar to zero trust but applied to behavioral continuity rather than network location or device posture. Trust is not granted once and assumed thereafter. It is continuously re-earned at runtime.

What this requires in practice

Implementing this model requires three technical capabilities.

First, organizations need behavioral telemetry pipelines that capture more than generic logs. It is not enough to record that an agent made an API call. Systems need to capture which tools were selected under which contextual conditions, how decision paths unfolded, how uncertainty was expressed, and how output patterns changed over time.

Second, they need comparison systems capable of maintaining and querying behavioral baselines. That means storing compact runtime representations of approved agent behavior and comparing live operations against those baselines over sliding windows. The goal is not perfect determinism. The goal is to measure whether current operation remains sufficiently similar to the behavior that was approved.
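
A minimal sketch of such a comparison system, under the simplifying assumptions that behavioral events are hashable labels and that any divergence function can be plugged in; `novelty` is a deliberately crude stand-in for a real behavioral distance.

```python
from collections import deque

def novelty(reference, live):
    """Share of live events never seen in the approved baseline --
    a crude stand-in for a real divergence measure."""
    seen = set(reference)
    return sum(1 for e in live if e not in seen) / len(live)

class BehavioralBaseline:
    """Stores a compact reference window of approved behavior and
    compares a sliding window of live events against it."""
    def __init__(self, approved_events, window=100, distance=novelty):
        self.reference = list(approved_events)
        self.live = deque(maxlen=window)  # oldest events age out
        self.distance = distance

    def observe(self, event):
        self.live.append(event)

    def drift(self):
        if not self.live:
            return 0.0
        return self.distance(self.reference, list(self.live))
```

Because the live window slides, drift that accumulates gradually, as in the Firefox-style scenario above, becomes visible as a trend rather than a single alarm.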

Third, they need policy engines that can consume behavioral claims, not just identity claims.

Enterprises already know how to issue short-lived credentials to workloads and how to evaluate machine identity continuously. The next step is not only to bind legitimacy to workload provenance but also to continuously refresh behavioral validity.

The important shift is conceptual as much as technical. Authorization should no longer mean only “This workload is permitted to operate.” It should mean “This workload is permitted to operate while its current behavior remains within the bounds that justified access.”

The missing runtime control layer

Regulators and standards bodies increasingly assume lifecycle oversight for AI systems. Most organizations cannot yet deliver that for autonomous agents. This is not organizational immaturity. It is an architectural limitation. The control mechanisms most enterprises rely on were built for software whose operational identity remains stable between release events. Autonomous agents do not behave that way.

Behavioral continuity is the missing signal.

The problem is not that agents lack credentials. It is that current credentials attest too little. They establish administrative identity, but say nothing about whether the runtime system still behaves like the one that was approved.

Until enterprise authorization architectures can account for that distinction, they will continue to confuse administrative continuity with operational trust.

12:07

FBI Extracts Deleted Signal Messages from iPhone Notification Database [Schneier on Security]

404 Media reports (alternate site):

The FBI was able to forensically extract copies of incoming Signal messages from a defendant’s iPhone, even after the app was deleted, because copies of the content were saved in the device’s push notification database….

The news shows how forensic extraction—­when someone has physical access to a device and is able to run specialized software on it—­can yield sensitive data derived from secure messaging apps in unexpected places. Signal already has a setting that blocks message content from displaying in push notifications; the case highlights why such a feature might be important for some users to turn on.

“We learned that specifically on iPhones, if one’s settings in the Signal app allow for message notifications and previews to show up on the lock screen, [then] the iPhone will internally store those notifications/message previews in the internal memory of the device,” a supporter of the defendants who was taking notes during the trial told 404 Media.

EDITED TO ADD (4/24): Apple has patched this vulnerability.

11:42

Grrl Power #1454 – The three ages of Maxima [Grrl Power]

See, page title comes from Maxima being shown at three different ages there across the bottom of the page. Granted, the older version is a bit speculative. And sure, there’s more than just the three ages. Maybe 9 year old Maximillia got up to some interesting adventures. Like she was some sort of neighborhood pre-teen Nancy Drew, solving the mystery of the missing cookies, the missing homework, the dog that had a lot of paper in its poop, stuff like that. I’m not saying that’s the case, just that Maxima probably had some “ages” before she got gilded.

I vaguely remember in D&D… I think 3rd edition, possibly others, haste potions were supposed to age your character a year every time you used one. Which is a terrible trade off considering they only lasted 10 rounds. So for 60 seconds, you get one extra attack and can run twice as fast. And in exchange you lose a year of life? Granted, a speed potion could definitely be the deciding factor in a life or death fight, but unless you’re an elf or a dragon (or possibly a vampire, not sure about that one) that’s definitely a tactic of last resort. (Dragons become more powerful with age, so it actually benefits them. Vampires probably have to be feeding regularly to benefit from age, but since humans don’t starve to death after using a haste potion, I assume it has no detrimental effect on a vampire.)

I think they changed the after-effect to losing a round to exhaustion, because otherwise, that’s a little terrifying. A few year-sucking potions could cut decades from a human adventurer’s career, and I think halflings and half-orcs have shorter average lifespans than humans. In “realistic” superhero novels and some of the more grim comics, super speed is one of those powers with such terrible drawbacks that as soon as you realize you’re aging faster, you’d basically stop using it. Granted, judicious use of super speed wouldn’t really add up to all that much. You get into a fight, use super speed for 10 seconds of your local time, and win the day, easy peasy. The problem comes from when the super speed character runs across the country, or reads every book in the library to find the clue. Running east to west coast across the US took one ultramarathon runner 42 days. Reading every book in a library could potentially take months, or possibly centuries, if they have a comprehensive copy of the Tax Code, or any book they asked us to read in high school. Seriously, Bleak House, go fuck yourself. I mean, it’s called Bleak House. I could barely get through the cliff notes.

Anyway, if the super speedster experiences time in real time local to him no matter what speed he’s going, and Batman says I have to run across the country to get the disarming key to a Joker bomb in time, I would quit the team. Okay, I’d probably go and disarm the bomb, but I’d steal a bicycle, and that’s assuming my powers can’t be extended to cover the Batmobile, cause if they could, I’d fucking steal that. But then I’d quit.


Finally, here we go! I took the suggestion that I just use an existing panel for a starting point, thinking it would save time… I guess it technically did, but a 5 character vote incentive just isn’t the way to go.

Patreon, of course, has actual topless version.

 

 


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

10:35

Consumers outnumber producers [Seth's Blog]

New technology often upends the careers of experienced professionals.

When the Mac offered typesetting to the masses, typographers were incensed. They had grown up with lead or photo composition, they understood why it was called a ‘case’ and they knew how to kern. The typographers warned us that we’d soon be inundated by ugly, careless or even unreadable type, and everything would get worse. They were half right.

There was a lot of bad typography, but some great innovations as well. And the typographers who stuck it out ended up with far more opportunities (and more creative outlets) than they originally had.

When digital photography arrived, the skilled craftspeople who understood bokeh and f-stops warned us about the same thing. People took their own pictures anyway. Many were lousy. Some changed the art form. And there are still professional photographers, even if the workaday gigs have mostly faded away.

And many doctors don’t want you to google your symptoms. Because it can lead to bad outcomes, and because it undermines their status and authority… but it has also saved countless lives. There are more patients than doctors, and so we go ahead and do what feels good to us, not to them.

A copywriter might say that it’s never okay to have an AI do your writing, but that same person uses AI to retouch photos or do the first pass on their spreadsheets… They even use a spellchecker instead of a human editor. You’re a producer some of the time, but also a consumer, and the consumer in you wants the best available option, regardless of how it was made.

These technological changes often have negative side effects. They don’t always make things better. But they happen when consumers insist. Mass production, factory farming, frozen food: they replace craft with accessibility and efficiency.

The market doesn’t care that much about the hard-won expertise of those who came before. And the shifts create muck and slop and then, over time, quality and taste and expertise often find their footing again.

The best way to complain is to make good stuff.

04:42

Getting Tatted On A Tuesday [Whatever]

My mom and I both had three tattoos. One of hers was from before my time, and she got two more while I was a kid. I got my first one at eighteen: a matching one with my two cousins who are practically like my sisters. It was all three of our firsts. My second one at twenty was not perfectly matching but very samesies with my lifelong bestie. My third was just for me, and it represents a promise to myself.

My mom and I always knew we wanted matching tattoos eventually; it just took us both until our fourth to get there. But we’re finally here, with the matching tats we’ve wanted for years. We just kept not getting them, and another year would pass. I asked her to look at artists, find some she likes, and I’d do the same and we’d pick our favorite. It never happened, and eventually I said, “Mom, I booked us a consultation.” I was dragging her to get a tattoo because I knew if I didn’t, she’d never slow down on her own long enough to get one.

I follow a lot of tattoo artists on Instagram, but most are states or even whole countries away. However, there’s one in Dayton I’ve been following for about two years. After seeing his floral work time and time again and thinking how amazing it was, I finally just booked a consultation because I figured taking at least a step in that direction was a good idea. So, my mom and I headed to Truth and Triumph Tattoo in Kettering and met Kevin Rotramel.

My mom had sketched a design of a sunflower, and after talking with him about what we wanted and where we wanted it, he said he’d come up with a design that was close to the original my mom drew, but just more cleaned up and with more depth and detail. While we had always dreamed of color, we both knew yellow would look awful on our skin tones, and just went for greyscale, which our artist highly recommended anyway.

Before I show you how our tats turned out, I want to showcase some of Kevin’s work. I know I said his floral work is what made me decide to go to him, but check out this insane octopus:



View this post on Instagram

Or this sick giraffe:



View this post on Instagram

How about this super cool lantern?!



View this post on Instagram

And this castle is incredible:



View this post on Instagram

Okay, I won’t keep you in suspense any longer, but seriously, Kevin’s work is so cool.

My mom went first, and I was starting to get nervous, but also was so excited to finally be doing this!

Finally, it was my turn:

Me sitting in a chair with my back to the tattoo artist, with my back exposed and my head hanging down so he can get to my upper back area. He is actively tattooing me in the shot!

Honestly it barely hurt for the first like half, but in the latter half of the tat I was definitely starting to get sensitive. I always seem to be chill for about an hour, and then right at the hour mark I’m like, “ooh okay I want to be done now.” But I hung in there!

And here they are, our matching sunflowers:

My mom and I with our exposed backs to the camera, looking at each other. Our sunflowers are both in the middle of our upper backs, mine between my other two tattoos (a pineapple and purple flowers), and hers all lonesome on her back by itself.

I am so happy with these! I appreciate Kevin for putting mine up a little bit higher than my mom’s so it wasn’t just straight up in line with my other two. I do love how my mom’s looks as her only back one, though. It’s framed so nicely! They’re the perfect size and aren’t too wild, just something pretty and simple to remind us of each other.

I absolutely love how they came out, and I’m just thrilled to finally have a matching tattoo with my mom. I know it’s corny, but sunflowers have always been a symbol of our love for each other, because we are each other’s sunshine, and we make each other happy when skies are grey. I love my mom and our tattoos, and I only wish we had gotten them sooner.

-AMS

01:14

Vincent Bernat: CSS & vertical rhythm for text, images, and tables [Planet Debian]

Vertical rhythm aligns lines to a consistent spacing cadence down the page. It creates a predictable flow for the eye to follow. Thanks to the rlh CSS unit, vertical rhythm is now easier to implement for text.1 But illustrations and tables can disrupt the layout. The amateur typographer in me wants to follow Bringhurst’s wisdom:

Headings, subheads, block quotations, footnotes, illustrations, captions and other intrusions into the text create syncopations and variations against the base rhythm of regularly leaded lines. These variations can and should add life to the page, but the main text should also return after each variation precisely on beat and in phase.

Robert Bringhurst, The Elements of Typographic Style

Text

Three factors govern vertical rhythm: font size, line height, and margins or padding. Let’s set our baseline to an 18-pixel font (112.5% of the browser’s default 16 pixels) with a 1.5 line height:

html {
  font-size: 112.5%;
  line-height: 1.5;
}
h1, h2, h3, h4 {
  font-size: 100%;
}
html, body,
h1, h2, h3, h4,
p, blockquote,
dl, dt, dd, ol, ul, li {
  margin: 0;
  padding: 0;
}

CSS Values and Units Module Level 4 defines the rlh unit, equal to the computed line height of the root element. All major browsers have supported it since 2023.2 Use it to insert vertical spaces or to fix the line height when altering the font size:3

h1, h2, h3, h4 {
  margin-top: 2rlh;
  margin-bottom: 1rlh;
}
h1 {
  font-size: 2.4rem;
  line-height: 2rlh;
}
h2 {
  font-size: 1.5rem;
  line-height: 1rlh;
}
h3 {
  font-size: 1.2rem;
  line-height: 1rlh;
}
p, blockquote, pre {
  margin-top: 1rlh;
}
aside {
  font-size: 0.875rem;
  line-height: 1rlh;
}
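
As a sanity check, the pixel value of 1rlh implied by these settings can be computed by hand. This is a standalone sketch (not part of the article’s stylesheet), assuming the browser’s default 16px root font size:

```javascript
// Pixel value of 1rlh under the stylesheet above,
// assuming the browser default root font size of 16px.
const defaultFontPx = 16;    // browser default
const rootFontScale = 1.125; // font-size: 112.5%
const lineHeight = 1.5;      // line-height: 1.5

const fontPx = defaultFontPx * rootFontScale; // 18px
const rlhPx = fontPx * lineHeight;            // 27px — the grid step

console.log(fontPx, rlhPx); // 18 27
```

This 27px step is the grid the screenshots below are checked against.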

We can check the result by overlaying a grid4 on the content:

Screenshot of my website with a grid as an overlay and each line of text fitting on the grid
Using CSS rlh unit to set vertical space works well for text. You can display the grid using Ctrl+Shift+G.

If a child element uses a font with taller intrinsic metrics, it may stretch the line box beyond the configured line height.5 A workaround is to reduce the child’s line height to 1: its glyphs may overflow, but they no longer push the line taller.

code, kbd {
  line-height: 1;
}

Responsive images

Responsive images are difficult to align on the grid because we don’t know their height. CSS Rhythmic Sizing Module Level 1 introduces the block-step property to adjust the height of an element to a multiple of a step unit. But most browsers don’t support it yet.

With JavaScript, we can add padding around the image so it does not disturb the vertical rhythm:

const targets = document.querySelectorAll(".lf-media-outer");
const adjust = (el, height) => {
  // Current root line height in pixels (the value of 1rlh).
  const rlh = parseFloat(getComputedStyle(document.documentElement).lineHeight);
  // Extra space needed to round the element up to a whole number of lines.
  const padding = Math.ceil(height / rlh) * rlh - height;
  // Split it evenly between top and bottom.
  el.style.padding = `${padding / 2}px 0`;
};

targets.forEach((el) => adjust(el, el.clientHeight));
Screenshot of my website with a grid as an overlay and an image not breaking the vertical rhythm. Additional padding is visible before and after the image. The height of the image with padding is 216.
The image is snapped to the grid thanks to the additional padding computed with JavaScript. 216 is divisible by 27, our line height in this example.
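
The arithmetic in adjust() is easy to trace with concrete numbers. Here is a sketch using the 27px line height from the screenshots and a hypothetical 195px-tall image:

```javascript
// Padding computation from adjust(), with hypothetical numbers:
// a 27px root line height and a 195px-tall image.
const rlh = 27;
const height = 195;

// Round the image's box up to the next multiple of the line height
// (195 / 27 ≈ 7.2 lines, so round up to 8 lines = 216px)...
const padding = Math.ceil(height / rlh) * rlh - height; // 216 - 195 = 21
// ...and split the extra space between top and bottom.
const perSide = padding / 2; // 10.5px each

console.log(height + padding); // 216, divisible by 27
```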

As the image is responsive, its height can change. We attach a ResizeObserver that re-runs the adjust() function whenever it does:

const ro = new ResizeObserver((entries) => {
  for (const entry of entries) {
    const height = entry.contentBoxSize[0].blockSize;
    adjust(entry.target, height);
  }
});
for (const target of targets) {
  ro.observe(target);
}

Tables

Table cells could set 1rlh as their height, but they would feel constricted. Using 2rlh wastes too much space. Instead, we use incremental leading: we align one line in every five to the grid.

table {
  border-spacing: 2px 0;
  border-collapse: separate;
  th {
    padding: 0.4rlh 1em;
  }
  td {
    padding: 0.2rlh 0.5em;
  }
}

To align the elements after the table, we need to add some padding. We can either reuse the JavaScript code from images or use a few lines of CSS that count the regular rows and compute the missing vertical padding:

table:has(tbody tr:nth-child(5n):last-child)   { padding-bottom: 0.2rlh; }
table:has(tbody tr:nth-child(5n+1):last-child) { padding-bottom: 0.8rlh; }
table:has(tbody tr:nth-child(5n+2):last-child) { padding-bottom: 0.4rlh; }
table:has(tbody tr:nth-child(5n+3):last-child) { padding-bottom: 0; }
table:has(tbody tr:nth-child(5n+4):last-child) { padding-bottom: 0.6rlh; }

A header cell has twice the padding of a regular cell. With two regular rows, the total vertical padding is 2×0.4 + 2×2×0.2 = 1.6rlh. We need to add 0.4rlh to reach 2rlh of extra vertical padding across the table.
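
The five :has() rules can be derived mechanically: the header contributes 0.8rlh of padding, each body row 0.4rlh, and the bottom padding must bring the fractional total back to a whole number of rlh. A hypothetical helper (not from the article) makes the pattern explicit:

```javascript
// Derive the bottom padding (in rlh) for a table with `rows` body rows.
// The header adds 0.8rlh of padding, each body row adds 0.4rlh; we work
// in tenths of rlh to avoid floating-point drift.
function bottomPadding(rows) {
  const fractionalTenths = (8 + 4 * rows) % 10;
  return ((10 - fractionalTenths) % 10) / 10;
}

// rows % 5 → padding, matching the CSS rules above:
// 5 → 0.2, 1 → 0.8, 2 → 0.4, 3 → 0, 4 → 0.6
```

Since the pattern repeats every five rows (5 × 0.4rlh = 2rlh, a whole number of lines), five nth-child cases are enough.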

Screenshot of my website with a grid as an overlay and a table following the vertical rhythm. Additional padding is visible after the table. The height of the table with padding is 405.
One line out of five is aligned to the grid. Additional padding is added after the table to not break the vertical rhythm. 405 is divisible by 27, our line height in this example.

None of this is necessary. But once you start looking, you can’t unsee it. Until browsers implement CSS Rhythmic Sizing, a bit of CSS wizardry and a touch of JavaScript is enough to pull it off. The main text now returns after each intrusion “precisely on beat and in phase.” 🎼


  1. See “Vertical rhythm using CSS lh and rlh units” by Paweł Grzybek. 

  2. For broader compatibility, you can replace 2rlh with calc(var(--line-height) * 2rem) and set the --line-height custom property in the :root pseudo-class. I wrote a simple PostCSS plugin for this purpose. 

  3. It would have been nicer to compute the line height with calc(round(up, calc(2.4rem / 1rlh), 0) * 1rlh). Unfortunately, typed arithmetic is not supported by Firefox yet. Moreover, browsers support round() only since 2024. Instead, I coded a PostCSS plugin for this as well. 

  4. The following CSS code defines a grid tracking the line height:

    body::after {
      content: "";
      position: absolute; /* required so the overlay covers the content */
      inset: 0;
      z-index: 9999;
      background: linear-gradient(180deg, #c8e1ff99 1px, transparent 1px);
      background-size: 20px 1rlh;
      pointer-events: none;
    }
    

  5. See “Deep dive CSS: font metrics, line-height and vertical-align” by Vincent De Oliveira. 

00:14

Link [Scripting News]

Had lunch today with Neal Smoller, our local pharmacy owner. Brilliant young guy who's totally energized by Claude Code.

New maestros in software [Scripting News]

I wonder how many people are working on clones of existing software with an eye toward making a much more evolvable and customizable version with AI at the core of the model.

You can make the same software easily, with Claude's help, and if you think about the things users want to customize, you can give them a toolkit for doing exactly what they want in prompts, as opposed to code, plugins, etc.

So you don't vibe-code it; you start with an app that's designed to be beautiful on the inside, easy for a new maestro of software to understand, but something they can evolve with prompts while they're intently working on something else.

We provide beautiful code for aspiring symphonists to learn from.

I remember when I first got my hands on the Unix source back in 1978. I was blown away by what was possible. I had largely been a Fortran programmer up till then. The pieces don't fit together so well on their own, I learned; you have to move them into place, and for that a lot of trying-things-out has to happen.

Why am I thinking about this? I have friends who are not programmers who are pretty close to where I was then, waiting to see how real software is made. And they can have that experience soon. I love where we are now in tech.

BTW, on its own Claude writes some really shitty code. ;-)

Wednesday, 22 April

22:35

RAIL: Nonfree and unethical [Planet GNU]

Any software license that denies users their freedom is by definition nonfree and unethical, and so-called "Responsible AI" Licenses (RAIL) are no exception. If we want software to help decrease social injustice, we should oppose licenses that restrict how software can be used.

Feeds

Feed  RSS  Last fetched  Next fetched after
@ASmartBear XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
a bag of four grapes XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
Ansible XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
Bad Science XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
Black Doggerel XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
Blog - Official site of Stephen Fry XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
Charlie Brooker | The Guardian XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
Charlie's Diary XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
Chasing the Sunset - Comics Only XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
Coding Horror XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
Comics Archive - Spinnyverse XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
Cory Doctorow's craphound.com XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
Cory Doctorow, Author at Boing Boing XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
Ctrl+Alt+Del Comic XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
Cyberunions XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
David Mitchell | The Guardian XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
Deeplinks XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
Diesel Sweeties webcomic by rstevens XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
Dilbert XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
Dork Tower XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
Economics from the Top Down XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
Edmund Finney's Quest to Find the Meaning of Life XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
EFF Action Center XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
Enspiral Tales - Medium XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
Events XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
Falkvinge on Liberty XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
Flipside XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
Flipside XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
Free software jobs XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
Full Frontal Nerdity by Aaron Williams XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
General Protection Fault: Comic Updates XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
George Monbiot XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
Girl Genius XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
Groklaw XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
Grrl Power XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
Hackney Anarchist Group XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
Hackney Solidarity Network XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
http://blog.llvm.org/feeds/posts/default XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
http://calendar.google.com/calendar/feeds/q7s5o02sj8hcam52hutbcofoo4%40group.calendar.google.com/public/basic XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
http://dynamic.boingboing.net/cgi-bin/mt/mt-cp.cgi?__mode=feed&_type=posts&blog_id=1&id=1 XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
http://eng.anarchoblogs.org/feed/atom/ XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
http://feed43.com/3874015735218037.xml XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
http://flatearthnews.net/flatearthnews.net/blogfeed XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
http://fulltextrssfeed.com/ XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
http://london.indymedia.org/articles.rss XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
http://pipes.yahoo.com/pipes/pipe.run?_id=ad0530218c055aa302f7e0e84d5d6515&amp;_render=rss XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
http://planet.gridpp.ac.uk/atom.xml XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
http://shirky.com/weblog/feed/atom/ XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
http://thecommune.co.uk/feed/ XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
http://theness.com/roguesgallery/feed/ XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
http://www.airshipentertainment.com/buck/buckcomic/buck.rss XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
http://www.airshipentertainment.com/growf/growfcomic/growf.rss XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
http://www.airshipentertainment.com/myth/mythcomic/myth.rss XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
http://www.baen.com/baenebooks XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
http://www.feedsapi.com/makefulltextfeed.php?url=http%3A%2F%2Fwww.somethingpositive.net%2Fsp.xml&what=auto&key=&max=7&links=preserve&exc=&privacy=I+accept XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
http://www.godhatesastronauts.com/feed/ XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
http://www.tinycat.co.uk/feed/ XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
https://anarchism.pageabode.com/blogs/anarcho/feed/ XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
https://broodhollow.krisstraub.comfeed/ XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
https://debian-administration.org/atom.xml XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
https://elitetheatre.org/ XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
https://feeds.feedburner.com/Starslip XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
https://feeds2.feedburner.com/GeekEtiquette?format=xml XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
https://hackbloc.org/rss.xml XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
https://kajafoglio.livejournal.com/data/atom/ XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
https://philfoglio.livejournal.com/data/atom/ XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
https://pixietrixcomix.com/eerie-cutiescomic.rss XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
https://pixietrixcomix.com/menage-a-3/comic.rss XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
https://propertyistheft.wordpress.com/feed/ XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
https://requiem.seraph-inn.com/updates.rss XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
https://studiofoglio.livejournal.com/data/atom/ XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
https://thecommandline.net/feed/ XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
https://torrentfreak.com/subscriptions/ XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
https://web.randi.org/?format=feed&type=rss XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
https://www.dcscience.net/feed/medium.co XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
https://www.DropCatch.com/domain/steampunkmagazine.com XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
https://www.DropCatch.com/domain/ubuntuweblogs.org XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
https://www.DropCatch.com/redirect/?domain=DyingAlone.net XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
https://www.freedompress.org.uk:443/news/feed/ XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
https://www.goblinscomic.com/category/comics/feed/ XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
https://www.loomio.com/blog/feed/ XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
https://www.newstatesman.com/feeds/blogs/laurie-penny.rss XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
https://www.patreon.com/graveyardgreg/posts/comic.rss XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
https://www.rightmove.co.uk/rss/property-for-sale/find.html?locationIdentifier=REGION^876&maxPrice=240000&minBedrooms=2&displayPropertyType=houses&oldDisplayPropertyType=houses&primaryDisplayPropertyType=houses&oldPrimaryDisplayPropertyType=houses&numberOfPropertiesPerPage=24 XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
https://x.com/statuses/user_timeline/22724360.rss XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
Humble Bundle Blog XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
I, Cringely XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
Irregular Webcomic! XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
Joel on Software XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
Judith Proctor's Journal XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
Krebs on Security XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
Lambda the Ultimate - Programming Languages Weblog XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
Looking For Group XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
LWN.net XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
Mimi and Eunice XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
Neil Gaiman's Journal XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
Nina Paley XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
O Abnormal – Scifi/Fantasy Artist XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
Oglaf! -- Comics. Often dirty. XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
Oh Joy Sex Toy XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
Order of the Stick XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
Original Fiction Archives - Reactor XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
OSnews XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
Paul Graham: Unofficial RSS Feed XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
Penny Arcade XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
Penny Red XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
PHD Comics XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
Phil's blog XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
Planet Debian XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
Planet GNU XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
Planet Lisp XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
Pluralistic: Daily links from Cory Doctorow XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
PS238 by Aaron Williams XML 11:42, Wednesday, 29 April 12:30, Wednesday, 29 April
QC RSS XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
Radar XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
RevK®'s ramblings XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
Richard Stallman's Political Notes XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
Scenes From A Multiverse XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
Schneier on Security XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
SCHNEWS.ORG.UK XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
Scripting News XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
Seth's Blog XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
Skin Horse XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
Tales From the Riverbank XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
The Adventures of Dr. McNinja XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
The Bumpycat sat on the mat XML 11:21, Wednesday, 29 April 12:01, Wednesday, 29 April
The Daily WTF XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
The Monochrome Mob XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
The Non-Adventures of Wonderella XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
The Old New Thing XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
The Open Source Grid Engine Blog XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
The Stranger XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
towerhamletsalarm XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
Twokinds XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
UK Indymedia Features XML 11:21, Wednesday, 29 April 12:03, Wednesday, 29 April
Uploads from ne11y XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
Uploads from piasladic XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April
Use Sword on Monster XML 11:35, Wednesday, 29 April 12:22, Wednesday, 29 April
Wayward Sons: Legends - Sci-Fi Full Page Webcomic - Updates Daily XML 11:49, Wednesday, 29 April 12:35, Wednesday, 29 April
what if? XML 11:21, Wednesday, 29 April 12:02, Wednesday, 29 April
Whatever XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
Whitechapel Anarchist Group XML 11:56, Wednesday, 29 April 12:45, Wednesday, 29 April
WIL WHEATON dot NET XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
wish XML 11:14, Wednesday, 29 April 11:59, Wednesday, 29 April
Writing the Bright Fantastic XML 11:14, Wednesday, 29 April 11:58, Wednesday, 29 April
xkcd.com XML 11:56, Wednesday, 29 April 12:39, Wednesday, 29 April