Thursday, 09 April

17:35

How do you add or remove a handle from an active Wait­For­Multiple­Objects? [The Old New Thing]

Last time, we looked at adding or removing a handle from an active Msg­Wait­For­Multiple­Objects, and observed that we could send a message to both break out of the wait and update the list of handles. But what if the other thread is waiting in a Wait­For­Multiple­Objects? You can’t send a message since Wait­For­Multiple­Objects doesn’t wake for messages.

You can fake it by using an event which means “I want to change the list of handles.” The waiting thread can add that handle to its list, and if the “I want to change the list of handles” event is signaled, it updates its list.

One of the easier ways to represent the desired change is to maintain two lists, the “active” list (the one being waited on) and the “desired” list (the one you want to change it to). The background thread can make whatever changes to the “desired” list it wants, and then it signals the “changed” event. The waiting thread sees that the “changed” event is set and copies the “desired” list to the “active” list. This copying needs to be done with Duplicate­Handle because the background thread might close a handle in the “desired” list, and we can’t close a handle while it is being waited on.

wil::unique_handle duplicate_handle(HANDLE other)
{
    HANDLE result;
    THROW_IF_WIN32_BOOL_FALSE(
        DuplicateHandle(GetCurrentProcess(), other,
            GetCurrentProcess(), &result,
            0, FALSE, DUPLICATE_SAME_ACCESS));
    return wil::unique_handle(result);
}

This helper function duplicates a raw HANDLE and returns it in a wil::unique_handle. The duplicate handle has its own lifetime separate from the original. The waiting thread operates on a copy of the handles, so that it is unaffected by changes to the original handles.

std::mutex desiredMutex;
_Guarded_by_(desiredMutex) std::vector<wil::unique_handle> desiredHandles;
_Guarded_by_(desiredMutex) std::vector<std::function<void()>> desiredActions;

The desiredHandles is a vector of handles we want to be waiting for, and desiredActions is a parallel vector of things to do for each of those handles.

// auto-reset, initially unsignaled
wil::unique_handle changed(CreateEvent(nullptr, FALSE, FALSE, nullptr));

void waiting_thread()
{
    while (true)
    {
        std::vector<wil::unique_handle> handles;
        std::vector<std::function<void()>> actions;
        {
            std::lock_guard guard(desiredMutex);

            handles.reserve(desiredHandles.size() + 1);
            std::transform(desiredHandles.begin(), desiredHandles.end(),
                std::back_inserter(handles),
                [](auto&& h) { return duplicate_handle(h.get()); });
            // Add the bonus "changed" handle
            handles.emplace_back(duplicate_handle(changed.get()));

            actions = desiredActions;
        }

        auto count = static_cast<DWORD>(handles.size());
                        
        auto result = WaitForMultipleObjects(count,
                        handles.data()->addressof(), FALSE, INFINITE);
        auto index = result - WAIT_OBJECT_0;
        if (index == count - 1) {
            // the list changed. Loop back to update.
            continue;
        } else if (index < count - 1) {
            actions[index]();
        } else {
            // deal with unexpected result
            FAIL_FAST(); // (replace this with your favorite error recovery)
        }
    }
}

The waiting thread makes a copy of the desiredHandles and desiredActions, and adds the changed handle to the end so we will wake up if somebody changes the list. We operate on the copy so that any changes to desiredHandles and desiredActions that occur while we are waiting won’t affect us. Note that the copy in handles is done via Duplicate­Handle so that it operates on a separate set of handles. That way, if another thread closes a handle in desiredHandles, it won’t affect us.

void change_handle_list()
{
    std::lock_guard guard(desiredMutex);
    ⟦ make changes to desiredHandles and desiredActions ⟧
    SetEvent(changed.get());
}

Any time somebody wants to change the list of handles, they take the desiredMutex lock and can proceed to make whatever changes they want. These changes won’t affect the waiting thread because it is operating on duplicate handles. When finished, we set the changed event to wake up the waiting thread so it can pick up the new set of handles.
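
As a concrete illustration (a minimal sketch, not from the original article; add_wait_handle and the worker-completion event are hypothetical names, and the code assumes the desiredMutex, desiredHandles, desiredActions, and changed globals defined above), registering a new handle might look like this:

void add_wait_handle(HANDLE h, std::function<void()> action)
{
    // Hypothetical helper, assuming the globals above.
    std::lock_guard guard(desiredMutex);
    // The "desired" list owns its own duplicate, so the caller's handle
    // can be closed independently.
    desiredHandles.emplace_back(duplicate_handle(h));
    desiredActions.emplace_back(std::move(action));
    // Wake the waiting thread so it rebuilds its copy of the list.
    SetEvent(changed.get());
}

A caller that owns, say, a worker-completion event workerDone could then write add_wait_handle(workerDone.get(), []{ /* react to completion */ });.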

Right now, the purpose of the changed event is to wake up the blocking call, but we could also use it as a way to know whether we should update our captured handles. This allows us to reuse the handle array if there were no changes.

void waiting_thread()
{
    bool update = true;                        
    std::vector<wil::unique_handle> handles;   
    std::vector<std::function<void()>> actions;

    while (true)
    {
        if (std::exchange(update, false)) {
            std::lock_guard guard(desiredMutex);

            handles.clear();
            handles.reserve(desiredHandles.size() + 1);
            std::transform(desiredHandles.begin(), desiredHandles.end(),
                std::back_inserter(handles),
                [](auto&& h) { return duplicate_handle(h.get()); });
            // Add the bonus "changed" handle
            handles.emplace_back(duplicate_handle(changed.get()));

            actions = desiredActions;
        }

        auto count = static_cast<DWORD>(handles.size());
                        
        auto result = WaitForMultipleObjects(count,
                        handles.data()->addressof(), FALSE, INFINITE);
        auto index = result - WAIT_OBJECT_0;
        if (index == count - 1) {
            // the list changed. Loop back to update.
            update = true;
            continue;
        } else if (index < count - 1) {
            actions[index]();
        } else {
            // deal with unexpected result
            FAIL_FAST(); // (replace this with your favorite error recovery)
        }
    }
}

In this design, changes to the handle list are asynchronous. They don’t take effect immediately, because the waiting thread might be busy running an action. Instead, they take effect when the waiting thread gets around to making another copy of the desiredHandles vector and calls Wait­For­Multiple­Objects again. This could be a problem: You ask to remove a handle, and then clean up the things that the handle depended on. But before the worker thread can process the removal, the handle is signaled. The result is that the worker thread calls your callback after you thought you had told it to stop!
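
To make the race concrete, here is a hypothetical removal helper (a minimal sketch, not from the article, again assuming the same globals as above):

void remove_wait_handle(size_t i)
{
    // Hypothetical helper, assuming the globals above.
    std::lock_guard guard(desiredMutex);
    desiredHandles.erase(desiredHandles.begin() + i);
    desiredActions.erase(desiredActions.begin() + i);
    SetEvent(changed.get());
    // Note: the waiting thread is still blocked on its *copy* of the old list.
    // If the removed handle is signaled before the thread loops around and
    // rebuilds that copy, the old action can still run once after this returns.
}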

Next time, we’ll see what we can do to make the changes synchronous.

The post How do you add or remove a handle from an active Wait­For­Multiple­Objects? appeared first on The Old New Thing.

16:28

Vegan Appreciation Day [Nina Paley]

I am an ex-vegan. A vegan apostate.

But:

Despite eating dairy products and fish, I still eat a lot of vegan meals. I don’t actually like meat, and while I consume yogurt and cheese, I suffer some lactose intolerance.

I recently chatted with a friend who has been vegetarian for 6 years and wants to stop. He wants to eat the same things as his wife, an omnivore; he wants more protein; he believes some meat in his diet would improve his health. “But I just can’t,” he said. 6 years off meat has made him squeamish.

I am squeamish too. That’s my main reason for refusing to eat birds and mammals: the thought is simply too icky for me. I eat fish occasionally, which is icky enough. I need to build up some real hunger for it to overcome my aesthetic aversion. When I really crave animal protein, it’s fine; anything less and it feels gross.

Highly processed. Doesn’t taste like chicken (thank god). Delicious.

How fortunate I am, then, that I don’t have to eat fish, let alone other meat, every day. I have all kinds of plant options readily available at my local mainstream grocery store. Despite my criticism of vegan “fake” foods (especially simulated dairy), I happen to love “fake meat,” especially “Chick’n”. Not because it in any way resembles chicken, but because it doesn’t. It’s just toothy concentrated plant protein wrapped in a salty oily coating, calorie-intensive and tasty the way only highly processed junk food can be. I keep this stuff in my freezer and enjoy it about once a week. It was almost impossible to get 25 years ago, when I was a practicing vegan. Now I can get it at my local Meijer.

I have vegans to thank for this. Vegans who worked very hard pushing fake meat into the American mainstream. I hear investments in these idealistic plant-based food companies are drying up; that would be a shame. Hardworking, annoying vegans made these options possible not just for other vegans, but for me and you and everyone. Hardworking, annoying vegans — vegans who work hard at being annoying — got a number of fast-food outlets to place vegan offerings on their menus. They are the reason I can get an edible “Impossible Burger” at most American restaurants, instead of being stuck with some grain-based “veggie burger” which is basically a bread sandwich.

My squeamish friend bemoaned his reliance on expensive protein shakes. “Oooh have you tried Soylent Creamy Chocolate?” I evangelized. “These save my ass on long bike rides!” He couldn’t believe a plant-based shake could taste good, so I broke open a bottle and we split it. Why did I so enthusiastically push this highly processed vegan beverage on my friend? Not because I want either of us to be vegan — we were both discussing how we want to move AWAY from veganism. No, I was pushing it because it’s an excellent product and I love it.

Not made of people. I take these on long (100km or more) bike rides: I can down 400 calories in less than a minute. Note that “Creamy Chocolate” is the only flavor of this product that tastes good. The rest are kind of awful.

Once again I have vegans to thank. Who else would painstakingly formulate this concoction, figure out how to make it tasty AND shelf-stable, and create a viable company to distribute it throughout the USA so I can get it easily? Thank you, vegans!

Many vegans are annoying. But the squeaky wheel gets the avocado oil, and by being squeaky for all these years, vegans have improved and expanded food offerings in America’s lavish markets. Thus, they have made our capitalist lives better. They may condemn our non-vegan impurity, and we may ridicule their idealism, but we all benefit from having more and better choices at the grocery store.

So thank you, vegans. You’ve improved the life of at least one animal: me.


The post Vegan Appreciation Day appeared first on Nina Paley.

15:28

Dilly-Dallying In Denver: Day 2 [Whatever]

I am someone who wakes up multiple times throughout the night. I always just flip over and go right back to sleep, but I definitely wake up fairly often. On my first morning of being in Denver, I was sleeping on my friend’s couch when I happened to wake up at seven on the dot. I was pretty comfortable, so I almost didn’t flip over at all, but at the last minute I decided I’d be slightly more comfortable if I flipped. So I did, and in doing so I faced the windows instead of facing the apartment. When I tell you I was beholding the single most beautiful sunrise I had ever seen in my life, trust that I mean it.

Radiant pink and bursting gold, the snowy mountains in the distance, and the sun steadily rising, casting light onto the city before me. It was truly a sight, and I stayed up for fifteen minutes to watch the sunrise unfold and transform, until it was finally over and the magnificent colors subsided. I thought about taking a photo, but I decided I just wanted to experience it in the moment and really soak it in just for myself.

After a glorious start to the morning (and going back to sleep for a while), Alex and I started our day off right with a quick stop at The Sen Tea House to pick up some matcha (we are matcha fiends if you couldn’t tell). The Sen Tea House had so many different options for their matchas in terms of sweetness, flavors, and milks, and they have non-matcha drinks, too, so there’s really a drink for every type of preference.

I almost didn’t even get a matcha because I was so enticed by the coconut Vietnamese coffee, but my friend highly recommended their matcha, so I ended up getting the ube matcha, which is listed on their menu as their most popular item. If you look at their online menu, Alex’s drink isn’t on there because it was like a weekly special or seasonal special, but they got the banana cream matcha. And here they are!

Two plastic 16oz cups, each filled with iced matcha. One has purple ube cream on top and the other has a pale yellow banana cream on top. Both have a huge portion of milk in the bottom of the cup, because this was prior to us mixing our drinks up a bit.

I was very pleased with the generous portion of cream on top, as these were $7.75 each. We obviously mixed these up a little bit more before drinking them, but I wanted to take a picture before mixing because I knew that mixing purple and green together would make a very unappetizing brown/grey color. And it did! But trust, it was delicious. It had tons of sweet ube flavor while still having some earthy matcha flavor, and was super creamy. Alex’s banana one tasted wildly fresh, like not artificial-y banana at all. It tasted so healthy like as if you made a fruit smoothie with a banana in it. It was definitely less sweet than mine, but Alex really enjoyed it. I am definitely glad I picked the ube, I can’t get enough ube in my life.

Later in the day, we were off to a highly anticipated spot called Mecha Noodle Bar.

A large black building with orange lettering on the front that reads

This ramen restaurant is fun, fresh, and casual, but also nice enough that you can come in and sit at the bar with a date and have awesome cocktails. I didn’t know at the time, but Mecha actually has a few other locations, though all the other ones are in the Northeast, predominately Connecticut and Massachusetts. How they got all the way out to Denver, I’m not entirely sure. But I’m glad they did, because Alex and I absolutely loved Mecha.

We were originally here for their Restaurant Week offerings, but it turned out that we were there during their happy hour, as well. We decided to double down and get the Restaurant Week menu and order off the happy hour menu, just to keep things exciting.

But of course, I had to start out with a bev:

A clear, tall, tiki glass with orange liquid and a blue bendy straw.

This is their mango sticky rice cocktail, with cachaça, pandan liqueur, coconut, mango, tea syrup, and lemon. Mango sticky rice is one of my favorite desserts in the world, so this cocktail sounded right up my alley. Whoever made it definitely made it kind of strong, but so much of the delicious tropical flavors really came through and I loved the level of sweetness in this drink. It wasn’t too heavy or too dessert-y. Much like the actual dessert it’s named for. Light and refreshing, with intense mango flavor. This drink was $15, but there was a lot of liquid to work through there, so can’t be too mad.

Here was the prix fixe menu for only $25:

The prix fixe menu for Mecha Noodle, listing your choices for your first course, second course, and then listing the one and only option for dessert.

Though I love some good edamame and those green beans sounded downright delish, I opted for the shiitake bao, and Alex got the chicken bao. Here’s mine:

A single bao filled with what appears to be only cucumbers on a red, ornately decorated plate.

If it looks like my bao is 200% cucumber, fear not, I got a better shot of the filling:

A look inside the bao, revealing it's not all cucumber, there's actually mushroom, green onion, and sauce.

As you can see, there is actually mushroom, scallions, hoisin, and Kewpie mayo in there. I really enjoyed this bao. The bun was soft and pillowy, the cucumber was crisp and fresh, and the mushroom was a perfectly acceptable size. Alex really liked their chicken one, too.

Before we dove into our second course, we got our happy hour snacks. Alex got the firecracker wings:

A platter of large, breaded wings alongside a wedge of lime and two sauce containers holding a creamy sauce.

These bad boys do not mess around, with their Sichuan peppercorn, Korean chili, tamarind, and togarashi seasoning alongside their lime leaf ranch. My friend offered a wing to me to try, but these suckers packed a kick. Even with the ranch, I couldn’t manage a second bite. These wings are an absolute powerhouse of flavor, and have definitely earned their name of “firecrackers.” While this platter is usually $16, the happy hour price was only $8.

I went for the spare ribs:

A shallow white bowl full of ribs covered in a dark brown glaze, topped with sesame seeds and fresh greens.

I don’t normally eat ribs in public, as they’re very messy and I dare not risk looking goofy, but when it came to these ribs, I no longer cared. They were so good. Too good. Quite possibly the best ribs I’ve ever had, even. Incredibly tender, luscious, fall-right-off-the-bone ribs with a bold, savory, but slightly sweet, sticky sauce that left me questioning why I haven’t had more ribs in my life. Though these were originally $18, the happy hour price was an unbeatable $9. Under ten dollars for these truly delectable ribs was wild, but I was totally here for it.

Finally, our main courses. With the price of the menu being only $25, I had assumed that the main courses would be mini versions of their actual entrees. Like a half portion of their ramen or something along those lines. However, I was pleasantly surprised to discover you get the full portion, which is absolutely wild because a bowl of their noodles costs almost as much as the prix fixe menu.

Alex got the mala stir-fry:

A big bowl of noodles with peanuts, cilantro, and sauce.

Wide, flat rice noodles, topped with a cumin-Sichuan-peanut sauce, actual peanuts, and cilantro, with spicy brisket lurking just beneath the surface. This dish was also way too spicy for me, but Alex absolutely loved it. I did think the rice noodles were interesting, at least, plus the fresh cilantro is always a plus.

I was a little basic and got the shoyu paitan:

A big red bowl full of ramen. A big chunk of chicken, noodles, corn, scallions, soft boiled egg, and seaweed.

I really love black garlic, especially in ramen, so that’s what led me to pick this chicken ramen. It came with half a soft-boiled egg, some nori, scallions, bamboo, and I added the corn. I am always in the mood for ramen, and this ramen definitely delivered on curbing my ramen craving. I wouldn’t say it was a life-changing bowl of noodles, but it was pretty good and I have no real complaints about it. I liked the egg.

After acquiring many boxes, it was time for dessert:

Two mason jars full of purple pudding and topped with a vanilla wafer.

Oh my god, more ube! I was thrilled to see this beautiful purple pudding concoction. This was “Bonnie’s Banana Pudding,” with ube, vanilla pudding, bananas, and vanilla wafers. I know the mason jars don’t look like very big vessels, but this was absolutely a generous portion size. Like it took some serious work to get through these jars of pudding, but every bite was amazing. The ube flavor worked wonderfully with the vanilla, and the banana wasn’t artificial tasting at all. It was like we were drinking our matchas from that morning all over again!

The pudding was so creamy and had a great mouthfeel, and I almost felt sad when my spoon finally scraped the glass bottom of the jar. I could eat this dessert pretty much every single day.

For one cocktail, two restaurant week menus, a platter of wings and a platter of ribs, we were looking at a cool and breezy $82 before tip. What a steal. I was thoroughly impressed with their happy hour options, plus how good everything was (even if two of the dishes were too spicy for me). Not to mention our waitress was extremely friendly and attentive!

Mecha Noodle Bar really exceeded my expectations and was a great time, I highly recommend checking them out.

After heading back to Alex’s apartment and hanging with some of their apartment friends and checking out a little event happening in the lobby, we went back out to get some drinks to end the night. We walked down the street to Barcelona Wine Bar, an upscale tapas restaurant with tons of wines, beers, and some unique cocktails.

We sat at the bar, which was a beautiful marble with nice, dim lighting that made the place feel elevated yet somewhat cozy. The first drink I chose was actually one of their mocktails, but I asked for a spirit of the bartender’s choice in it. This is the “Tea Time”:

A coupe glass filled with a dark pink liquid with a lighter pink foam on top, plus a mint leaf resting on top. The glass sits atop a black and white marble bar top.

Earl grey tea, blueberry shrub, salted honey syrup, aquafaba, and mint. Plus gin! This drink is so pretty, I absolutely love the color and the stark contrast of the mint leaf on top. The aquafaba made for an excellent foam on top of the drink, as well. I adore earl grey as a flavor, as well as blueberry, and unsurprisingly this drink did not disappoint. I think gin was the perfect addition to this fruity yet sophisticated beverage. Specifically a more botanical gin versus a dry gin. I know what kind of gin I’m about and it sure isn’t Tanqueray.

For my second cocktail, I got yet another mocktail… with a spirit added! This is the “Bees & Bays”:

A wine glass filled with pale yellow liquid and ice, with a bay leaf on top.

That lovely salted honey syrup makes its return alongside lime, cardamom bitters, sparkling water, and is topped with a torched bay leaf. Oh, and gin. This cocktail was so light and refreshing, with simple flavors of honey, citrus, and the lovely feeling of bubbles. I loved how cold it was from all the ice.

Though Alex and I were definitely full from our time at Mecha Noodle, we knew we had to at least try some charcuterie:

A small wooden board with three chunks of cheese, some jam, and some cured meat.

We both knew we wanted drunken goat on the board for sure, but our other picks came to mind much slower. We ended up getting tetilla, a semi-soft cow’s milk cheese, and a third cheese I don’t remember. I know, I know, I had one job! But at least I remembered that the meat is speck! Or… was it serrano? No, no, definitely speck. Probably. And don’t ask me about the jam.

For my final beverage of the evening before walking the couple blocks back to Alex’s apartment, we have the Gin & Jus:

A short glass with pale yellow liquid and ice.

Gin, lime, pink peppercorn, ginger, and green grape. I like all of those things! They were good together. I think I didn’t taste this one as much as I did the previous two. I did like it, though.

Alex had a glass of Moscato, so I didn’t bother taking a picture. I’m very sorry to anyone who wanted to see a glass of white wine.

When we got back, we called it an early night (not too early) so we would feel rested and ready to go for my third day. Stick around to see what whacky beverages I consume next!

Have you been to any of Mecha Noodle Bar’s locations before? Do you like ube? How do you feel about gin? Let me know in the comments, and have a great day!

-AMS

15:07

[$] A flood of useful security reports [LWN.net]

The idea of using large language models (LLMs) to discover security problems is not new. Google's Project Zero investigated the feasibility of using LLMs for security research in 2024. At the time, they found that models could identify real problems, but required a good deal of structure and hand-holding to do so on small benchmark problems. In February 2026, Anthropic published a report claiming that the company's most recent LLM at that point in time, Claude Opus 4.6, had discovered real-world vulnerabilities in critical open-source software, including the Linux kernel, with far less scaffolding. On April 7, Anthropic announced a new experimental model that is supposedly even better; it has also partnered with the Linux Foundation to give some open-source developers access to the tool for security reviews. LLMs seem to have progressed significantly in the last few months, a change which is being noticed in the open-source community.

14:21

Relicensing versus license compatibility (FSF Blog) [LWN.net]

The Free Software Foundation has published a short article on relicensing versus license compatibility.

The FSF's Licensing and Compliance Lab receives many questions and license violation reports related to projects that had their license changed by a downstream distributor, or that are combined from two or more programs under different licenses. We collaborated with Yoni Rabkin, an experienced and long time FSF licensing volunteer, on an updated version of his article to provide the free software community with a general explanation on how the GNU General Public License (GNU GPL) is intended to work in such situations.

Security updates for Thursday [LWN.net]

Security updates have been issued by Debian (firefox-esr, postgresql-13, and tiff), Fedora (bind, bind-dyndb-ldap, cef, opensc, python-biopython, python-pydicom, and roundcubemail), Slackware (mozilla), SUSE (ckermit, cockpit-repos, dnsdist, expat, freerdp, git-cliff, gnutls, heroic-games-launcher, libeverest, openssl-1_1, openssl-3, polkit, python-poetry, python-requests, python311-social-auth-app-django, and SDL2_image-devel), and Ubuntu (dogtag-pki, gdk-pixbuf, linux, linux-aws, linux-aws-5.15, linux-gcp, linux-gcp-5.15, linux-gke, linux-gkeop, linux-ibm, linux-ibm-5.15, linux-intel-iotg, linux-intel-iotg-5.15, linux-kvm, linux-lowlatency, linux-lowlatency-hwe-5.15, linux-nvidia, linux-nvidia-tegra, linux-nvidia-tegra-igx, linux-oracle, linux-oracle-5.15, linux-raspi, linux-xilinx-zynqmp, linux-aws-6.8, linux-gcp-6.8, linux-hwe-6.8, linux-ibm-6.8, linux-lowlatency-hwe-6.8, linux-fips, linux-aws-fips, linux-gcp-fips, linux-oracle, linux-oracle-6.17, linux-raspi, linux-realtime, openssl, and squid).

14:00

Architecture as Code to Teach Humans and Agents About Architecture [Radar]

A funny thing happened on the way to writing our book Architecture as Code—the entire industry shifted. Generally, we write books iteratively—starting with a seed of an idea, then developing it through workshops, conference presentations, online classes, and so on. That’s exactly what we did about a year ago with our Architecture as Code book. We started with the concept of describing all the ways that software architecture intersects with other parts of the software development ecosystem: data, engineering practices, team topologies, and more—nine in total—in code, as a way of creating a fast feedback loop for architects to react to changes in architecture. In other words, we’re documenting the architecture through code, defining the structure and constraints we want to guide the implementation through.

For example, an architect can define a set of components via a diagram, along with their dependencies and relationships. That design reflects careful thought about coupling, cohesion, and a host of other structural concerns. However, when they turn that diagram over to a team to develop it, how can they be sure the team will implement it correctly? By defining the components in code (with verifications), the architect can both illustrate and get feedback on the design. However, we recognize that architects don’t have a crystal ball, and design should sometimes change to reflect implementation. When a developer adds a new component, it isn’t necessarily an error but rather feedback that an architect needs to know about. This isn’t a testing framework; it’s a feedback framework. When a new component appears, the architect should know so that they can assess: Should the component be there? Perhaps it was missed in the design. If so, how does that affect other components? Having the structure of your architecture defined as code allows deterministic feedback on structural integrity.

This capability is useful for developers. We defined these intersections as a way of describing all different aspects of architecture in a deterministic way. Then agents arrived.

Agentic AI shows new capabilities in software architecture, including the ability to work toward a solution as long as deterministic constraints exist. Suddenly, developers and architects are trying to build ways for agents to determine success, which requires a deterministic method of defining these important constraints: Architecture as Code.

An increasingly common practice in agentic AI is separating foundational constraints from desired behavior. For example, part of the context or guardrails developers use for code generation can include concrete architectural constraints around code structure, complexity, coupling, cohesion, and a host of other measurable things. As architects can objectively define what an acceptable architecture is, they can build inviolate rules for agents to adhere to. For example, a problem that is gradually improving is the tendency of LLMs to use brute force to solve problems. If you ask for an algorithm that touches all 50 US states, it might build a 50-stage switch statement. However, if one of the architect’s foundational rules for code generation puts a limit on cyclomatic complexity, then the agent will have to find a way to generate code within that constraint.

This capability exists for all nine of the intersections we cover in Architecture as Code: implementation, engineering practices, infrastructure, generative AI, team topologies, business concerns, enterprise architecture, data, and integration architecture.

Increasingly, we see the job of developers, and especially architects, as being able to precisely and objectively define architecture, and we built a framework for both how to do it and where to do it in Architecture as Code, coming soon!

12:42

CodeSOD: Take a Percentage [The Daily WTF]

When looking at the source of a major news site, today's anonymous submitter sends us this very, very mild, but also very funny WTF:

       <div class="g-vhs g-videotape g-cinemagraph" id="g-video-178_article_slug-640w"
                 data-type="videotape" data-asset="https://somesite.com/videos/file.mp4" data-cinemagraph="true" data-allow-multiple-players="true"
                 data-vhs-options='{"ratio":"560:320"}'
                 style="padding-bottom: 57.14285714285714%">

Look, I know that percentage was calculated by JavaScript, or maybe the backend, or maybe calculated by a CSS pre-processor. No human typed that. There's nothing to gain by adding a rounding operation. There's nothing truly wrong with that line of code.

But I can't help but think about the comedic value in controlling your page layout down to sub-sub-sub-sub-sub-sub-sub-sub-pixel precision. This code will continue to have pixel accuracy out to screens with quadrillions of pixels, making it incredibly future proof.

It's made extra funny by calling the video player VHS and suggesting the appropriate ratio is 560 pixels by 320- which is not quite 16:9, but is a frequent letterbox ratio on DVD prints of movies.

In any case, I eagerly await 20-zetta-pixel displays, so I can read the news in its intended glory.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

12:07

Pluralistic: Cindy Cohn's "Privacy's Defender" (09 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



Cindy Cohn's "Privacy's Defender" (permalink)

I've known EFF executive director Cindy Cohn for 27 years. I met her when I needed cyberlaw advice for a startup I'd helped found. We got along so well that I ended up quitting the startup and going to work at EFF. Now, Cindy's memoir, Privacy's Defender, is on the shelves:

https://mitpress.mit.edu/9780262051248/privacys-defender/

I'm hardly a disinterested party here, obviously. I was at Cindy's wedding, I've danced with her at Burning Man, and I've worked with her for most of my adult life. What's more, I was present for many of the pivotal moments she recounts in this book. But still: this is a great book that I found utterly captivating.

Cohn's been with EFF since its earliest days, when she litigated one of the most important cases in computing history, the Bernstein case, which legalized civilian access to encryption technology and changed the world:

https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech

Cryptographers had been arguing with the US government over the ban on working encryption technology for years before Cohn joined the fight, and they'd tried all manner of arguments to overturn the ban: technical arguments, political arguments, financial arguments. All of these efforts failed – they didn't even make a dent.

Cohn's genius was the way she formulated a free speech argument about the ban on encryption: arguing that computer code was a form of expressive speech, entitled to protection under the First Amendment. While she didn't come up with this idea, it was her gift for assembling a narrative and a cadre of unimpeachable experts that carried the day.

In this age of bad faith right-wing trolling about "free speech" and "cancel culture," it's easy to forget how central free speech cases and causes have been for the advancement of human rights and human thriving. Free speech cases gave us the nation's first privacy protections, protection for unions, and protection for civil rights organizers.

Cohn never forgets this. Her decades with EFF are a history of the fight for speech rights (and thus privacy rights) on the internet. After the US government seized on the 9/11 attacks as a pretext to dismantle privacy and turn the internet into a system of ubiquitous surveillance, Cohn (along with EFF, of course!) was at the center of the fight for digital rights. The same prescience and strategic brilliance that led her to take up the Bernstein case and win it were with her through those millennial years, and her description of our cases, campaigns and fights in those years vividly foreshadows the moment we are in today.

The same goes for her "three letter agency" chapter, which takes up our fights against the NSA and other US agencies in the wake of whistleblower disclosures by Mark Klein and Edward Snowden. These accounts are one part master class in legal tactics; one part battle cry for a global pushback against the transformation of the internet into the perfect surveillance and control machine, and one part personal memoir of a tactician, finding ways to leverage a righteous cause to raise a guerrilla army of experts, co-counsel, amici, and champions who carried our message to the world.

All of this is connected back to her other legal career, as a human rights defender litigating on behalf of the survivors of a massacre perpetrated by a death squad working on behalf of Chevron in Nigeria. Cohn skilfully connects these very concrete, visible human rights struggles to the invisible – and no less important – human rights work she carried out for EFF.

I didn't just have a front-row seat for this stuff – I had backstage passes for a lot of it (though not the juiciest national security cases, which required EFF lawyers to maintain total secrecy from colleagues, spouses, even our board, on pain of a long prison sentence for disclosing classified information). Even so, Cohn's pacey, smart retelling of these events brought them to life for me, and of course, there's a coherence that you get after the fact that is missing when you're living through it in a moment.

But what really enlivened this delightful book were the personal details that Cohn weaves into the story. I've always known that she was an adoptee (and I even have a small, strange, coincidental connection to her birth family), but Cohn's intimate, personal, frank memoir of her early family life, and her bittersweet connection to her birth family were so intimate and well-told that I felt like I was getting to know my dear friend all over again.

Cindy is retiring from EFF (but not the law) in a couple of months. This book is a beautiful capstone to a brilliant career that defined the fight for cyber rights, and a deep, accessible dive into the defining tech and human rights battles of this century.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#15yrsago Advanced office-supply sculpture: paperclip dodecahedron https://web.archive.org/web/20171122055732/https://makezine.com/2011/04/07/paperclip-snub-dodecahedron/

#15yrsago World Bank: gold farming (etc) paid poor countries $3B in 2009 https://web.archive.org/web/20110410134037/http://www.infodev.org/en/Publication.1056.html

#15yrsago Class war comics: Scrap Iron Man versus international capital https://web.archive.org/web/20110410215907/https://www.chinamieville.net/post/4406165249/rejected-pitch

#15yrsago Colombian Justice Minister ramming through extremist copyright legislation without public consultation https://web.archive.org/web/20110707053554/http://karisma.org.co/?p=667

#15yrsago Glenn Beck’s brain https://www.motherjones.com/politics/2011/03/glenn-beck-fox-news-brain-chart/

#10yrsago Why 40 years of official nutritional guidelines prescribed a low-fat diet that promoted heart disease https://www.theguardian.com/society/2016/apr/07/the-sugar-conspiracy-robert-lustig-john-yudkin

#10yrsago Fearing the Pirate Party, Iceland’s government scrambles to avoid elections https://web.archive.org/web/20160407183022/https://theintercept.com/2016/04/07/icelands-government-tries-cling-protesters-pirates-gates/

#10yrsago The price of stealing an identity is crashing, with no bottom in sight https://qz.com/656459/its-never-been-cheaper-to-steal-someones-digital-identity-on-the-internet

#10yrsago Bernie Sanders can only win if nonvoters turn out at the polls, and they almost never do https://web.archive.org/web/20160408145116/https://www.vox.com/2016/4/6/11373862/bernie-sanders-voter-lists

#10yrsago To understand the link between corporations and Hillary Clinton, look at philosophy, not history https://web.archive.org/web/20160406223353/https://www.thenation.com/article/the-problem-with-hillary-clinton-isnt-just-her-corporate-cash-its-her-corporate-worldview/

#10yrsago The US Government’s domestic spy-planes take weekends and holidays off https://www.buzzfeednews.com/article/peteraldhous/spies-in-the-skies

#10yrsago A perfect storm of broken business and busted FLOSS backdoors everything, so who needs the NSA? https://www.youtube.com/watch?v=fwcl17Q0bpk

#5yrsago Door Dashers organize app-defeating solidarity https://pluralistic.net/2021/04/07/cruelty-by-design/#declinenow

#5yrsago Leaked NYPD "goon squad" manual https://pluralistic.net/2021/04/07/cruelty-by-design/#blam-blam-blam

#1yrago Tariffs and monopolies https://pluralistic.net/2025/04/07/it-matters-how-you-slice-it/#too-big-to-care


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers..

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. First draft complete. Second draft underway.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

On Microsoft’s Lousy Cloud Security [Schneier on Security]

ProPublica has a scoop:

In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.

The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.

Or, as one member of the team put it: “The package is a pile of shit.”

For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.

[…]

The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.

Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling—which included a kind of “buyer beware” notice to any federal agency considering GCC High—helped Microsoft expand a government business empire worth billions of dollars.

11:42

Grrl Power #1450 – Hospital food [Grrl Power]

We’ve only seen “transition Maximillia” once before, and at the time, I managed to not turn that flashback into a whole long side-story. I’m sure I could do 40 pages on that period of her life, no problem. But don’t worry, this is just another quick flashback.

This page takes place a few weeks after the page 415 flashback. At this point, Max had spent some time staring into a mirror, and decided that the gold stuff under her skin was kind of cool looking, and while all the doctors still didn’t have any evidence based theories about what was going on, they had mostly agreed that she wasn’t suffering from some sort of fatal heavy metal poisoning, nor were her skin or organs calcifying. On top of all that, after a brief initial fever, Max didn’t feel sick, and in fact was starting to feel really healthy. Like, accidentally ripped a door clear off its hinges healthy.

Man, I really need to find some time to go back and post slightly larger versions of the old pages. They’re 643px wide, and at some point I did a minor site update that let me post the comics at 750px. It helps with the text if nothing else. Of course if I do that, I’ll be tempted to go in and fix little bits of art, like Maxima’s oddly manly golem face. Also teen Max’s shoulder in that first panel is not the shoulder of a beanpole cross country runner. It would just take a little nudge with the liquefy tool to fix it, but that’s a trap. Her tank top in the reflection is all part of the same layer, and I’d have to separate them and fill in the bits left blank from the smaller shoulder, and now instead of a 2 minute fix, it’s 10 minutes, and while I’m at it, I think Deus’s head is too big and now I’m spending an hour on each page and there’s 1400 of them. I could probably resist the urge to fix most stuff due to time constraints, but there are definitely some panels that are just bad. So we’ll see about posting enlarged versions of the pages.


Finally, here we go! I took the suggestion that I just use an existing panel for a starting point, thinking it would save time… I guess it technically did, but a 5 character vote incentive just isn’t the way to go.

Patreon, of course, has actual topless version.

 

 


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

10:07

Attention and effort [Seth's Blog]

The door-to-door salesperson had no leverage. If he was at your door, he wasn’t at anyone else’s door. Every minute you spent with him was a minute he had to spend with you. While it was a tough gig, no one doubted that something was motivating this person enough to put at least as much into the interaction as you were. You might close the door in the face of the person who rang your bell, but at some level, you knew that another human was involved.

Spammers play a different scheme. One person can steal the time and attention of a million. It costs them nothing (actually, truly, nothing) to add one more name to the list. The lack of care and discernment comes through in their interactions. They steal attention in bulk and treat it casually. No one feels bad when they delete or filter spam.

In B2B selling and other high-value sales calls, the seller puts in a lot of effort. A custom presentation deck, useful spreadsheets, even a flight across the country to meet in person. That effort is expected, because the buyer sees their attention as valuable.

And now, here come AI agents. These are spammers disguised as door-to-door salespeople. They know your name, your history, your details–and they present a pitch that looks and feels as though a human spent a lot of time thinking about it and focusing on the buyer’s needs and desires.

But it’s done on a huge scale. It’s like seine fishing. A huge net is set to catch as many fish as possible, with no regard for the mass destruction it causes as a result.

Our instinct is to respect the work of a pitch that took more effort to create than it will cost us to consume (that’s why books are more respected than blog posts!). But AI agents, working at high speed to churn through the small amount of trust and attention we have left, upend that expectation.

Attention and trust continue their dance, and our choices determine how we’ll show up in the marketplace. Burning trust to get attention rarely pays off.

05:42

Urgent: Iran-US war troops [Richard Stallman's Political Notes]

US citizens: Ask your congresscritter and senators to block the war-lover from sending over 20,000 bombs to Israel.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

01:35

[$] LWN.net Weekly Edition for April 9, 2026 [LWN.net]

Inside this week's LWN.net Weekly Edition:

  • Front: TPM attacks; arithmetic overflow protection; Ubuntu GRUB changes; kernel IPC proposals; fre:ac; Scuttlebutt.
  • Briefs: Nix vulnerability; OpenSSH 10.3; Sashiko reviews; FreeBSD testing; Gentoo GNU/Hurd; SFC on router ban; Quotes; ...
  • Announcements: Newsletters, conferences, security updates, patches, and more.

00:49

parted-3.7 released [stable] [Planet GNU]

I have released parted 3.7

Here are the compressed sources and a GPG detached signature[*]:
  https://ftp.gnu.o ... parted-3.7.tar.xz
  https://ftp.gnu.o ... ed-3.7.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu ... g/prep/ftp.html

Here are the SHA256 checksums:

008de57561a4f3c25a0648e66ed11e7b30be493889b64334a6d70f2c1951ef7b  parted-3.7.tar.xz
de51773eef47a10db34ff2462f3b3c9d987d4bdb49420f0a22e1dda1ff897a5c  parted-3.7.tar.xz.sig

[*] Use a .sig file to verify that the corresponding file (without the .sig
suffix) is intact.  First, be sure to download both the .sig file and the
corresponding tarball.  Then, run a command like this:

  gpg --verify parted-3.7.tar.xz.sig

If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to update
or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key bcl@redhat.com

  gpg --recv-keys 117E8C168EFE3A7F

  wget -q -O- 'https://savannah. ... ed&download=1' | gpg --import -

This release was bootstrapped with the following tools:
  Autoconf 2.72
  Automake 1.17
  Gettext 0.23.1
  Gnulib commit 4e11e3d07a79a49eaa9b155c43801bbc1e5bd86e
  Gperf 3.1

NEWS

  • Noteworthy changes in release 3.7 (2026-04-08) [stable]


  Promoting alpha release to stable release 3.7

  • Noteworthy changes in release 3.6.37 (2026-03-24) [alpha]


** New Features

   hurd: Support USB device names

** Bug Fixes

   Stop adding boot code into the MBR if it's zero when updating an
   existing msdos partition table.

   disk.c: Update metadata after reading partition table

   Fix initialization of atr_c_locale inside PED_ASSERT

   nilfs2: Fixed possible sigsegv in case of corrupted superblock

   libparted: Do not detect ext4 without journal as ext2

   libparted: Fix dvh disklabel unhandled exception

   libparted: Fix sun disklabel unhandled exception

   parted: fix do_version declaration to work with gcc 15

   libparted: Fail early when detecting nilfs2

   doc: Document IEC unit behavior in the manpage

   parted: Print the Fixing... message to stderr

   docs: Finish setup of libparted API docs

   libparted: link libparted-fs-resize.so to libuuid

00:28

Urgent: The Home Team Act [Richard Stallman's Political Notes]

US citizens: call on Congress to Support and Pass the Home Team Act, to hamper sports team owners from squeezing money out of cities.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Impeach Russell Vought [Richard Stallman's Political Notes]

US citizens: call on your officials in Congress to impeach Russell Vought, head of Project 2025 and White House Budget Director.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Planned strict safety limits on exposure to formaldehyde [Richard Stallman's Political Notes]

Chemical companies worked with trumpets to kill planned strict safety limits on exposure to formaldehyde. It involved disregarding the data from experiments that showed the danger, and choosing instead data from experiments chemical companies had sponsored.

Wrecker cries "emergency" based on no real grounds [Richard Stallman's Political Notes]

The wrecker has a pattern of crying "emergency" based on no real grounds. Each time, it is based on a lie.

Right-wing members of the Supreme Court are unwilling to call a lie a lie.

Inquiry of 1984 British police attack on striking miners [Richard Stallman's Political Notes]

In 1984 British police rioted and attacked a crowd of striking miners, then charged many of them falsely with crimes. Now, belatedly, the government will hold a proper inquiry.

Bills prohibiting market consolidation in medicine [Richard Stallman's Political Notes]

A series of bills in Congress would prohibit various aspects of market consolidation in medicine.

It used to be the case that large investment funds needed a publicly traded corporation to raise so much money. Publicly traded corporations were regulated more strictly. But increasing concentration of wealth made it feasible to raise such sums without a publicly traded corporation, and thus to avoid the regulation, with harmful consequences. I believe that any investment vehicle over a certain size should be required to be a publicly traded corporation.

Scammers recruiting and scamming workers [Richard Stallman's Political Notes]

Scammers recruit people to "work" writing and posting fake reviews of whatever. The reviews are usually blocked by the sites where they are posted, and the real goal is to scam the "workers".

Violence used on protesters in Australia [Richard Stallman's Political Notes]

Australia invited the president of Israel to visit, which naturally inspired protests. The thugs seem to have had it in for the protesters, and came up with dubious claims that violence was necessary.

Maduro's political prisoners released describe torture [Richard Stallman's Political Notes]

Some of Maduro's political prisoners have been released and describe horrible torture even more active and aggressive than the horrible torture in US deportation prisons.

FCC banned importation of foreign-made routers [Richard Stallman's Political Notes]

The FCC has banned importation of foreign-made routers, except for models already approved.

This addresses a real problem, but the supposed solution addresses only a part of it. When a router is made in a country other than the US, it's possible that that country's government has imposed spyware to snoop on Americans — or perhaps only on selected Americans.

However, other countries have the same reason to suspect routers made in the US. The US government maintains systematic ways to snoop on everyone, even its own citizens in defiance of its laws and constitution. Perhaps other countries should ban US-made routers as well as China-made routers.

But national governments are not the only entities that might make a router manufacturer snoop on users. The company might do this for its own purposes or to sell data.

To address this problem thoroughly, governments should insist that routers' software be released under a free/libre license, that it be subjected to systematic analysis looking for security holes, and that the router allow the user to install other software through a method that involves pressing a physical button.

Texas State University fired professor for talk about Israel and Palestine [Richard Stallman's Political Notes]

Texas State University fired professor Idris Robinson for giving a talk, at a book fair unrelated to the university, about Israel and Palestine. Now he is suing the university for violating its contract with him.

Hate activists worked on line to identify the speaker and then push for firing him, distorting his talk into an excuse.

Pretend Intelligence chatbots defeat safeguards [Richard Stallman's Political Notes]

Increasing numbers of Pretend Intelligence chatbots are finding ways to defeat safeguards.

If you assume they are intelligent, you could see this as a sign of cleverness. The reason that I say they are not intelligent is that they often can't tell they have output pseudoinformation.

Bully has tried to pervert US public health [Richard Stallman's Political Notes]

The bully has tried to pervert US public health, but is having trouble putting people into office who can stay in office.

Instead of the smoothly functioning disinformation agencies he sought, he has got evident failure.

Laws to tax the rich more [Richard Stallman's Political Notes]

America is starting to enact laws to tax the rich more.

This is a step in the right direction, and it can enable states to do much to help the poorest. However, the small amounts of these tax increases won't reduce the political power of billionaires significantly. If we cut their income and wealth in half, that would start to do it.

Two thirds arrested in Minnesota had no criminal records [Richard Stallman's Political Notes]

*Two Thirds of People Arrested by [deportation thugs] in Minnesota Surge Had No Criminal Records [or pending charges], New Data Reveals.*

The magats' persistent false claim that the deportations they do are protecting us from criminals demonstrates their contempt for the truth, and that demonstrates that the real targets of their deportation campaign are human rights and justice.

Democrats who want to fight for something [Richard Stallman's Political Notes]

* A New York congressional primary is exposing the gap between Democrats who want to fight [the saboteur in chief] and Democrats who want to fight for something.*

Wednesday, 08 April

23:35

The Big Idea: Corry L. Lee [Whatever]

Endings are called endings because things end there… but then what? What goes on beyond “the end”? Corry L. Lee has been thinking about that very question, and in this Big Idea for Imbue the Sky, offers some insight.

CORRY L. LEE:

The end of many stories is the Big Bad’s defeat. But is that really the end? If, say, someone had killed Hitler before WWII, would everything have been fine? What about his cronies, his generals, everyone invested in the fascist machine?

In Imbue the Sky, I wanted to explore what happens after the resistance succeeds. The dictator’s dead (hurray!)… but, left behind, is a power vacuum and loads of oppressive systems. In the Bourshkanya Trilogy, the (now-dead) Supreme General has two likely heirs: his sadistic eldest son, groomed for the role and supported by the brutal State police; and his more reasonable daughter, a mage and politician struggling with the State’s “might makes right” mentality. Then there’s the resistance, scrappy and small, with radical ideas of power to the people.

Add in our heroes who, together, assassinated the “unkillable” Supreme General, but now find themselves on opposing sides of a three-way civil war.

Through outlines and early drafts, I worked out the civil war’s progression and how I wanted it to end. But time and again, something wasn’t working. The problem was one of scope.

Most fantasy series, The Bourshkanya Trilogy included, grow in scope from one book to the next. This series began with intimate character, relationship, and magic growth inside the physical confines of a travelling circus (Book 1, Weave the Lightning), grew to working undercover for the resistance within the fascist state’s magical military (Book 2, The Storm’s Betrayal), before becoming nation-spanning in Book 3 (Imbue the Sky) with its civil war. Romances and friendships have shattered, and hundreds of kilometers separate our protagonists. 

The spark in the first two books came from the personal struggles, the push-and-pull of relationships, the tug between characters who cared deeply but wanted different things. How could I hold onto that heart while landing a satisfying ending with revolutionary scope?

I will claim that my answer to this is my Big Idea but, in reality, it was my Big Struggle.

To figure it out, I returned to the core of my original story: two people on different sides of the fascist state. The question of how a person frees themself from fascism fascinated me when I started drafting this series, and it has only become more relevant. In the real world, political rhetoric has become more polarized and aggressive, overflowing with intolerance and hate. 

And I wondered: how do we come back from hatred? Can we make mistakes and still be good? How many of our actions are shaped by our environment, and how can we turn toward forgiveness, understanding, and hope?

With this, Imbue the Sky’s Big Idea began to gel. The core of this story was not its battles or its epic magic (though those would remain, because fight scenes!!!). The heart of this story was characters fighting back toward their best selves—while raising arms against injustice. For some, the fight became about holding onto their light in the face of war’s brutality. For others, it involved realizing how their choices had broken relationships and figuring out how to (try and) mend them. Still others needed to soften their staunch convictions and accept that decisions are not always clear-cut; that sometimes, only by embracing an uncomfortable gray middle ground, can we nurture true growth.

In these questions, I found the end of the series. Not the culmination of the civil war’s battles (though that, too). Not (just) the weaving together of disparate aspects of the magic system into one explosive finale. But the weaving together of lives.

The relationships at the end of this series have all shifted dramatically. Not all mistakes can be walked back, not all burned bridges rebuilt. But by looking critically at our choices and the paths they’ve started us down, by being vulnerable and admitting our mistakes, we have a chance to shift the course of history. 

It takes great strength to face your fears and reach for hope; to risk pain and be vulnerable; to risk failure and strive for a better world. In Imbue the Sky, the personal is political. The story doesn’t end when the dictator dies. In a way, it’s only the beginning.


Imbue the Sky: Amazon|Barnes & Noble|Bookshop|Powell’s|Solaris Books

Author socials: Website|Instagram|Facebook

Read an excerpt

22:56

Mac OS X 10.0 Cheetah ported to Nintendo Wii [OSnews]

Since its launch in 2006, the Wii has seen several operating systems ported to it: Linux, NetBSD, and most recently, Windows NT. Today, Mac OS X joins that list.

In this post, I’ll share how I ported the first version of Mac OS X, 10.0 Cheetah, to the Nintendo Wii. If you’re not an operating systems expert or low-level engineer, you’re in good company; this project was all about learning and navigating countless “unknown unknowns”. Join me as we explore the Wii’s hardware, bootloader development, kernel patching, and writing drivers – and give the PowerPC versions of Mac OS X a new life on the Nintendo Wii.

↫ Bryan Keller

And all of this, because someone on Reddit said it couldn’t be done. It won’t surprise you to learn that the work required was extensive, from writing a custom bootloader to digging through the XNU source code, applying binary patches to the kernel during the boot process, building a device tree, writing the necessary drivers, and so much more. Even just setting up a development environment was a pretty serious undertaking.

Writing the drivers in particular posed an interesting and unique challenge, as the Wii doesn’t use PCI to connect and expose its hardware components. Instead, components are connected to a dedicated SoC with its own ARM processor that talks to the main Wii PowerPC processor, exposing hardware that way. This meant that Keller had to write a driver for this chip first, before moving on to the device drivers for devices connected to this ARM SoC – graphics drivers, input drivers, and so on.

After a ton more work and overcoming several complex roadblocks, we now have Mac OS X 10.0 Cheetah on the Nintendo Wii. Amazing.

22:07

Jonathan Dowland: nvim-µwiki [Planet Debian]

In January 2025, as a pre-requisite for something else, I published a minimal neovim plugin called nvim-µwiki. It's essentially just the features from vimwiki that I regularly use, which is a small fraction of them. I forgot to blog about it. I recently dusted it off and cleaned it up. You can find it here, along with a longer list of its features and how to configure it: https://github.com/jmtd/nvim-microwiki

I had a couple of design goals. I didn't want to define a new filetype, so this is designed to work with the existing markdown one. I'm using neovim, so I wanted to leverage some of its features: this plugin is written in Lua, rather than vimscript. I use the parse trees provided by TreeSitter to navigate the structure of a document. I also decided to "plug into" the existing tag stack navigation, rather than define another dimension of navigation (along with buffers, etc.) to track: Following a wiki-link pushes onto the tag stack, just as if you followed a tag.

This was my first serious bit of Lua programming, as well as my first dive into neovim (or even vim) internals. Lua is quite reasonable. Most of the vim and neovim architecture is reasonable. The emerging conventions about structuring neovim plugins are mostly reasonable. TreeSitter is, well, interesting, but the devil is very much in the details. Somehow all together the experience for me was largely just frustrating, and I didn't really enjoy writing it.

21:35

The ecard virus [Seth's Blog]

Three of my friends got hacked this week.

You get an ecard and click. It asks you to log in to your email.

Boom, done. It hacks your email account, steals all of your contacts and then sends itself to the whole address book. And while they’re at it, they could be scraping and misusing all sorts of data.

The first lesson is that you should only log in to your gmail or other email accounts directly, never via a link you’ve followed.

The second is that you really should get a password manager. Many are free or cheap. Some are easy to use.

Mostly, alas, we need to remind ourselves that just because it looks familiar (on the screen! on the internet! in a card!) doesn’t mean we can stop paying attention. Especially if an AI said it, or it came to us unasked.

The internet lets ideas spread at scale. It also gives a few bad folks the leverage to cause a lot of havoc.

(And part of the problem lies with Google–they intentionally crowded out the peer-to-peer open net, but haven’t done enough to stop spam or scams.)

Look both ways before crossing.

21:21

How do you add or remove a handle from an active Msg­Wait­For­Multiple­Objects? [The Old New Thing]

A customer had a custom message loop that was built out of Msg­Wait­For­Multiple­Objects. Occasionally, they needed to change the list of handles that the message loop was waiting for. How do you add or remove a handle from an active Msg­Wait­For­Multiple­Objects?

You can’t.

Even if you could, the return value of MsgWaitForMultipleObjects would not be useful. Suppose it returns to say “Handle number 2 was signaled.” You don’t know whether handle 2 was signaled before the handle list was updated or after. You don’t know whether it’s referring to the old handle number 2 or the new one.

So maybe it’s not a bad thing that you can’t change the list of handles in an active Msg­Wait­For­Multiple­Objects, since the result wouldn’t be useful. But you can ask the thread to stop waiting, update its handle list, and then go back to waiting.

Since the thread is in a Msg­Wait­For­Multiple­Objects, it will wake if you send a message to a window that belongs to the thread. You can have a “handle management window” to receive these messages, say

#define WM_ADDHANDLE    (WM_USER+0) // wParam = index, lParam = handle
#define WM_REMOVEHANDLE (WM_USER+1) // wParam = index

The background thread could send one of these messages if it wanted to add or remove a handle, and the message procedure would perform the corresponding operation.

In reality, you probably need more information than just the handle; you also need to know what to do if that handle is signaled. The lParam is more likely to be a pointer to a structure containing the handle as well as instructions on what the handle means. Those instructions could take the form of a callback function, or it could just be a value from an enum. Pick the design that works for you.
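To make the shape of this concrete, here is a minimal sketch of my own, not the article's code: the waiting thread owns the handle list, and a hidden "handle management window" created on that thread applies the two messages defined above. Because the window belongs to the waiting thread, sent messages are processed on that thread during message retrieval, so no locking is needed. Window-class registration and window creation are omitted, and the function names are invented for this sketch.

#include <windows.h>
#include <vector>

#define WM_ADDHANDLE    (WM_USER+0) // wParam = index, lParam = handle
#define WM_REMOVEHANDLE (WM_USER+1) // wParam = index

std::vector<HANDLE> g_handles; // touched only on the waiting thread

LRESULT CALLBACK HandleManagerWndProc(
    HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ADDHANDLE:
        g_handles.insert(g_handles.begin() + static_cast<ptrdiff_t>(wParam),
                         reinterpret_cast<HANDLE>(lParam));
        return 0;
    case WM_REMOVEHANDLE:
        g_handles.erase(g_handles.begin() + static_cast<ptrdiff_t>(wParam));
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

void message_loop()
{
    for (;;)
    {
        auto count = static_cast<DWORD>(g_handles.size());
        auto result = MsgWaitForMultipleObjects(count, g_handles.data(),
                          FALSE, INFINITE, QS_ALLINPUT);
        if (result == WAIT_OBJECT_0 + count)
        {
            // A message arrived. Sent messages (like WM_ADDHANDLE above)
            // are processed during retrieval; posted messages are
            // dispatched below. Either way, g_handles may change.
            MSG m;
            while (PeekMessage(&m, nullptr, 0, 0, PM_REMOVE))
            {
                if (m.message == WM_QUIT) return;
                TranslateMessage(&m);
                DispatchMessage(&m);
            }
        }
        else if (result < WAIT_OBJECT_0 + count)
        {
            // g_handles[result - WAIT_OBJECT_0] was signaled; act on it.
        }
        else
        {
            return; // WAIT_FAILED or abandoned wait: handle the error
        }
        // Loop back and wait again with the (possibly updated) list.
    }
}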

Next time, we’ll look at the case where you don’t want to block the background thread, or if the waiting thread is waiting in Wait­For­Multiple­Objects so the message option is not available.

The post How do you add or remove a handle from an active MsgWaitForMultipleObjects? appeared first on The Old New Thing.

21:07

Relicensing versus license compatibility [Planet GNU]

Relicensing and license compatibility are two important aspects of how licensing works in the free software community. This article explains both concepts, what they have in common, and how they differ.

19:42

Hidden Mechanics [Penny Arcade]

He is risen indeed! I don't have any insight into the true ethical machinery of the universe, but if the things I read as a child were true then they're at once timeless and ineffable but also authored to our precise moment. I remember learning in church that God was a 27 Dimensional Entity, which gave the Sundays thereafter a potent, hard sci-fi kick. If He exists at a remove from causality, well, that explains a lot. I mean, it's no Kolob, but we lowly snake-handlers had to make do with what we had.

18:14

AI-Infused Development Needs More Than Prompts [Radar]

The current conversation about AI in software development is still happening at the wrong layer.

Most of the attention goes to code generation. Can the model write a method, scaffold an API, refactor a service, or generate tests? Those things matter, and they are often useful. But they are not the hard part of enterprise software delivery. In real organizations, teams rarely fail because nobody could produce code quickly enough. They fail because intent is unclear, architectural boundaries are weak, local decisions drift away from platform standards, and verification happens too late.

That becomes even more obvious once AI enters the workflow. AI does not just accelerate implementation. It accelerates whatever conditions already exist around the work. If the team has clear constraints, good context, and strong verification, AI can be a powerful multiplier. If the team has ambiguity, tacit knowledge, and undocumented decisions, AI amplifies those too.

That is why the next phase of AI-infused development will not be defined by prompt cleverness. It will be defined by how well teams can make intent explicit and how effectively they can keep control close to the work.

This shift has become clearer to me through recent work around IBM Bob, an AI-powered development partner I have been working with closely for a couple of months now, and the broader patterns emerging in AI-assisted development.

The real value is not that a model can write code. The real value appears when AI operates inside a system that exposes the right context, limits the action space, and verifies outcomes before bad assumptions spread.

The code generation story is too small

The market likes simple narratives, and “AI helps developers write code faster” is a simple narrative. It demos well. You can measure it in isolated tasks. It produces screenshots and benchmark charts. It also misses the point.

Enterprise development is not primarily a typing problem. It is a coordination problem. It is an architecture problem. It is a constraints problem.

A useful change in a large Java codebase is rarely just a matter of producing syntactically correct code. The change has to fit an existing domain model, respect service boundaries, align with platform rules, use approved libraries, satisfy security requirements, integrate with CI and testing, and avoid creating support headaches for the next team that touches it. The code is only one artifact in a much larger system of intent.

Human developers understand this instinctively, even if they do not always document it well. They know that a “working” solution can still be wrong because it violates conventions, leaks responsibility across modules, introduces fragile coupling, or conflicts with how the organization actually ships software.

AI systems do not infer those boundaries reliably from a vague instruction and a partial code snapshot. If the intent is not explicit, the model fills in the gaps. Sometimes it fills them in well enough to look impressive. Sometimes it fills them in with plausible nonsense. In both cases, the danger is the same. The system appears more certain than the surrounding context justifies.

This is why teams that treat AI as an ungoverned autocomplete layer eventually run into a wall. The first wave feels productive. The second wave exposes drift.

AI amplifies ambiguity

There is a phrase I keep coming back to because it captures the problem cleanly. If intent is missing, the model fills the gap.

That is not a flaw unique to one product or one model. It is a predictable property of probabilistic systems operating in underspecified environments. The model will produce the most likely continuation of the context it sees. If the context is incomplete, contradictory, or detached from the architectural reality of the system, the output may still look polished. It may even compile. But it is working from an invented understanding.

This becomes especially visible in enterprise modernization work. A legacy system is full of patterns shaped by old constraints, partial migrations, local workarounds, and decisions nobody wrote down. A model can inspect the code, but it cannot magically recover the missing intent behind every design choice. Without guidance, it may preserve the wrong things, simplify the wrong abstractions, or generate a modernization path that looks efficient on paper but conflicts with operational reality.

The same pattern shows up in greenfield projects, just faster. A team starts with a few useful AI wins, then gradually notices inconsistency. Different services solve the same problem differently. Similar APIs drift in style. Platform standards are applied unevenly. Security and compliance checks move to the end. Architecture reviews become cleanup exercises instead of design checkpoints.

AI did not create those problems. It accelerated them.

That is why the real question is no longer whether AI can generate code. It can. The more important question is whether the development system around the model can express intent clearly enough to make that generation trustworthy.

Intent needs to become a first-class artifact

For a long time, teams treated intent as something informal. It lived in architecture diagrams, old wiki pages, Slack threads, code reviews, and the heads of senior developers. That has always been fragile, but human teams could compensate for some of it through conversation and shared experience.

AI changes the economics of that informality. A system that acts at machine speed needs machine-readable guidance. If you want AI to operate effectively in a codebase, intent has to move closer to the repository and closer to the task.

That does not mean every project needs a heavy governance framework. It means the important rules can no longer stay implicit.

Intent, in this context, includes architectural boundaries, approved patterns, coding conventions, domain constraints, migration goals, security rules, and expectations about how work should be verified. It also includes task scope. One of the most effective controls in AI-assisted development is simply making the task smaller and sharper. The moment AI is attached to repository-local guidance, scoped instructions, architectural context, and tool-mediated workflows, the quality of the interaction changes. The system is no longer guessing in the dark based on a chat transcript and a few visible files. It is operating inside a shaped environment.

One practical expression of this shift is spec-driven development. Instead of treating requirements, boundaries, and expected behavior as loose background context, teams make them explicit in artifacts that both humans and AI systems can work from. The specification stops being passive documentation and becomes an operational input to development.

That is a much more useful model for enterprise development.

The important pattern is not tool-specific. It applies across the category. AI becomes more reliable when intent is externalized into artifacts the system can actually use. That can include local guidance files, architecture notes, workflow definitions, test contracts, tool descriptions, policy checks, specialized modes, and bounded task instructions. The exact format matters less than the principle. The model should not have to reverse engineer your engineering system from scattered hints.

Cost is a complexity problem disguised as a sizing problem

This becomes even clearer when you look at migration work and try to attach cost to it.

One of the recent discussions I had with a colleague was about how to size modernization work in token/cost terms. At first glance, lines of code look like the obvious anchor. They are easy to count, easy to compare, and simple to put into a table. The problem is that they do not explain the work very well.

What we are seeing in migration exercises matches what most experienced engineers would expect. Cost is often less about raw application size and more about how the application is built. A 30,000 line application with old security, XML-heavy configuration, custom build logic, and a messy integration surface can be harder to modernize than a much larger codebase with cleaner boundaries and healthier build and test behavior.

That gap matters because it exposes the same flaw as the code-generation narrative. Superficial output measures are easy to report, but they are weak predictors of real delivery effort.

If AI-infused development is going to be taken seriously in enterprise modernization, it needs better effort signals than repository size alone. Size still matters, but only as one input. The more useful indicators capture framework and runtime distance, which can be expressed in the number of modules or deployables, the age of the dependencies, or the number of files actually touched.

This is an architectural discussion. Complexity lives in boundaries, dependencies, side effects, and hidden assumptions. Those are exactly the areas where intent and control matter most.

Measured facts and inferred effort should not be collapsed into one story

There is another lesson here that applies beyond migrations. Teams often ask AI systems to produce a single comprehensive summary at the end of a workflow. They want the sequential list of changes, the observed results, the effort estimate, the pricing logic, and the business classification all in one polished report. It sounds efficient, but it creates a problem. Measured facts and inferred judgment get mixed together until the output looks more precise than it really is.

A better pattern is to separate workflow telemetry from sizing recommendations. The first artifact should describe what actually happened: how many files were analyzed or modified, how many lines changed and in how much time, how many tokens were actually consumed, and which prerequisites were installed or verified. That is factual telemetry. It is useful because it is grounded.

The second artifact should classify the work. How large and complex was the migration. How broad was the change. How much verification effort is likely required. That is interpretation. It can still be useful, but it should be presented as a recommendation, not as observed truth.

AI is very good at producing complete-sounding narratives but enterprise teams need systems that are equally good at separating what was measured from what was inferred.
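As a rough illustration of that separation, the two artifacts can simply be kept as two distinct records, so grounded numbers and judgment calls never share a field. The names below are hypothetical, invented for this sketch rather than taken from any product:

#include <cstdint>
#include <string>
#include <vector>

// Artifact 1: facts the workflow actually recorded.
struct WorkflowTelemetry {
    uint32_t files_analyzed = 0;
    uint32_t files_modified = 0;
    uint64_t lines_changed = 0;
    uint64_t tokens_consumed = 0;
    uint64_t elapsed_seconds = 0;
    std::vector<std::string> prerequisites_verified;
};

// Artifact 2: interpretation layered on top of the facts. This is a
// recommendation, not an observation, and should be labeled as such
// wherever it is reported.
struct SizingRecommendation {
    std::string complexity_class;        // e.g. "small", "medium", "large"
    std::string rationale;               // which telemetry fields drove the call
    double estimated_review_hours = 0.0; // an estimate, never a measurement
};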

A two-axis model is closer to real modernization work

If we want AI-assisted modernization to be economically credible, a one-dimensional sizing model will not be enough. A much more realistic model is at least two-dimensional. The first axis is size, meaning the overall scope of the repository or modernization target. The second axis is complexity. This stands for things like legacy depth, security posture, integration breadth, test quality, and the amount of ambiguity the system must absorb.

That model reflects real modernization work far better than a single LOC (lines of code)-driven label. It also gives architects and engineering leaders a much more honest explanation for why two similarly sized applications can land in very different token ranges.

And it reinforces the core point: Complexity is where missing intent becomes expensive.

A code assistant can produce output quickly in both projects. But the project with deeper legacy assumptions, more security changes, and more fragile integrations will demand far more control. It will need tighter scope, better architectural guidance, more explicit task framing, and stronger verification. In other words, the economic cost of modernization is directly tied to how much intent must be recovered and how much control must be imposed to keep the system safe. That is a much more useful way to think about AI-infused development than raw generation speed.
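A sketch of what such a two-axis classification might look like; the bands, field names, and thresholds here are invented for illustration, not a published model:

#include <cstdint>

enum class SizeBand { Small, Medium, Large };      // axis 1: scope
enum class ComplexityBand { Low, Moderate, High }; // axis 2: legacy depth, security posture, test quality

struct ModernizationTarget {
    uint64_t lines_of_code = 0;        // feeds the size axis
    uint32_t modules = 0;              // feeds the size axis
    uint32_t dependency_age_years = 0; // feeds the complexity axis
    bool has_usable_tests = false;     // feeds the complexity axis
};

// Both axes drive the effort estimate; the same LOC count can land in
// very different bands depending on complexity.
const char* effort_band(SizeBand size, ComplexityBand complexity)
{
    if (complexity == ComplexityBand::High) return "high effort regardless of size";
    if (size == SizeBand::Large)            return "large but potentially routine";
    return "modest effort";
}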

Control is what makes AI scale

Control is what turns AI assistance from an interesting capability into an operationally useful one. In practice, control means the AI does not just have broad access to generate output. It works through constrained surfaces. It sees selected context. It can take actions through known tools. It can be checked against expected outcomes. Its work can be verified continuously instead of inspected only at the end.

A lot of recent excitement around agents misses this point. The ambition is understandable. People want systems that can take higher-level goals and move work forward with less direct supervision. But in software development, open-ended autonomy is usually the least interesting form of automation. Most enterprise teams do not need a model with more freedom. They need a model operating inside better boundaries.

That means scoped tasks, local rules, architecture-aware context, and tool contracts, all with verification built directly into the flow. It also means being careful about what we ask the model to report. In migration work, some data is directly observed, such as files changed, elapsed time, or recorded token use. Other data is inferred, such as migration complexity or likely cost. If a prompt asks the model to present both as one seamless summary, it can create false confidence by making estimates sound like facts. A better workflow requires the model to separate measured results from recommendations and to avoid claiming precision the system did not actually record.

Once you look at it this way, the center of gravity shifts. The hard problem is no longer how to prompt the model better. The hard problem is how to engineer the surrounding system so the model has the right inputs, the right limits, and the right feedback loops. That is a software architecture problem.

This is not prompt engineering

Prompt engineering suggests that the main lever is wording. Ask more precisely. Structure the request better. Add examples. Those techniques help at the margins, and they can be useful for isolated tasks. But they are not a durable answer for complex development environments. The more scalable approach is to improve the system around the prompt.

In practice, that means explicit context (like repository and architecture constraints), constrained actions (via workflow-aware tools and policies), and integrated tests and validation.

This is why intent and control is a more useful framing than better prompting. It moves the conversation from tricks to systems. It treats AI as one component in a broader engineering loop rather than as a magic interface that becomes trustworthy if phrased correctly.

That is also the frame enterprise teams need if they want to move from experimentation to adoption. Most organizations do not need another internal workshop on how to write smarter prompts. They need better ways to encode standards and context, constrain AI actions, and implement verification that separates facts from recommendations.

A more realistic maturity model

The pattern I expect to see more often over the next few months is fairly simple. Teams will begin with chat-based assistance and local code generation because it is easy to try and immediately useful. Then they will discover that generic assistance plateaus quickly in larger systems.

In theory, the next step is repository-aware AI, where models can see more of the code and its structure. In practice, we are only starting to approach that stage now. Some leading models only recently moved to 1 million-token context windows, and even that does not mean unlimited codebase understanding. Google describes 1 million tokens as enough for roughly 30,000 lines of code at once, and Anthropic only recently added 1 million-token support to Claude 4.6 models.

That sounds large until you compare it with real enterprise systems. Many legacy Java applications are much larger than that, sometimes by an order of magnitude. One case cited by vFunction describes a 20-year-old Java EE monolith with more than 10,000 classes and roughly 8 million lines of code. Even smaller legacy estates often include multiple modules, generated sources, XML configuration, old test assets, scripts, deployment descriptors, and integration code that all compete for attention.

So repository-aware AI today usually does not mean that the agent fully ingests and truly understands the whole repository. More often, it means the system retrieves and focuses on the parts that look relevant to the current task. That is useful, but it is not the same as holistic awareness. Sourcegraph makes this point directly in its work on coding assistants: Without strong context retrieval, models fall back to generic answers, and the quality of the result depends heavily on finding the right code context for the task. Anthropic describes a similar constraint from the tooling side, where tool definitions alone can consume tens of thousands of tokens before any real work begins, forcing systems to load context selectively and on demand.

That is why I think the industry should be careful with the phrase “repository-aware.” In many real workflows, the model is not aware of the repository in any complete sense. It is aware of a working slice of the repository, shaped by retrieval, summarization, tool selection, and whatever the agent has chosen to inspect so far. That is progress, but it still leaves plenty of room for blind spots, especially in large modernization efforts where the hardest problems often sit outside the files currently in focus.

After that, the important move is making intent explicit through local guidance, architectural rules, workflow definitions, and task shaping. Then comes stronger control, which means policy-aware tools, bounded actions, better telemetry, and built-in verification. Only after those layers are in place does broader agentic behavior start to make operational sense.

This sequence matters because it separates visible capability from durable capability. Many teams are trying to jump directly to autonomous flows without doing the quieter work of exposing intent and engineering control. That will produce impressive demos and uneven outcomes. The teams that get real leverage from AI-infused development will be the ones that treat intent as infrastructure.

The architecture question that matters now

For the last year, the question has often been, “What can the model generate?” That was a reasonable place to start because generation was the obvious breakthrough. But it is not the question that will determine whether AI becomes dependable in real delivery environments.

The better question is: “What intent can the system expose, and what control can it enforce?”

That is the level where enterprise value starts to become durable. It is where architecture, platform engineering, developer experience, and governance meet. It is also where the work becomes most interesting, not as a story about an assistant producing code but as part of a larger shift toward intent-rich, controlled, tool-mediated development systems.

AI is making discipline more visible.

Teams that understand this will not just ship code faster. They will build development systems that are more predictable, more scalable, more economically legible, and far better aligned with how enterprise software actually gets delivered.

17:28

Page 6 [Flipside]

Page 6 is done.

Tell Congress: No Clean Reauthorization of Section 702. [EFF Action Center]

While this authority is very entrenched and fiercely defended by the intelligence community, every few years, the authority must be renewed by Congress before it expires, which generates an opportunity for Congress to add reforms. The current authority expires on April 20, 2026, creating an opportunity right now for much needed reforms to make sure this invasive and unconstitutional surveillance program operates with respect for civil liberties in the United States.

Under the Fourth Amendment, when the FBI or other law enforcement entity wants to search your emails, it must convince a judge there’s reason to believe your emails will contain evidence of a crime. But because of the way the NSA implements Section 702, communications from innocent Americans are routinely collected and stored in government databases, which are accessible to the FBI, the CIA, and the National Counterterrorism Center.

So instead of having to get a warrant to collect this data, it’s already in government servers. And the government currently decides for itself whether it can look through (“query”) its databases for Americans’ communications—decisions which it regularly makes incorrectly, even according to the Foreign Intelligence Surveillance Court. Requiring a judge to examine the government’s claims when it wants to query its Section 702 databases for Americans’ communications isn’t just a matter of standards: it’s about ensuring government officials don’t get to decide themselves whether they can compromise Americans’ privacy in their most sensitive and intimate communications.

A cornerstone of our legal system is that if someone—including the government—violates your rights, you can use the courts to hold them accountable if you can show that you were affected, i.e. that you have standing.

But, in multiple cases, courts interpreting an evidentiary provision in FISA have prevented Americans who alleged injuries from Section 702 surveillance from obtaining judicial review of the surveillance’s legality. The effect is a one-way ratchet that has “created a broad national-security exception to the Constitution that allows all Americans to be spied upon by their government while denying them any viable means of challenging that spying.” This needs to change.

Another important safeguard in the American legal system is the right of defendants in criminal cases to know how the evidence against them was obtained and to challenge the legality of how it was collected.

Under FISA as written, the government must disclose when it intends to use evidence it has collected under Section 702 in criminal prosecutions. But in the years since Congress enacted Section 702, the government has only provided notice to a handful of criminal defendants of such intent—and has provided notice to zero defendants in the last five years. This is another necessary reform.

17:21

[$] Ripping CDs and converting audio with fre:ac [LWN.net]

It has been a little while since LWN last surveyed tools for managing a digital music collection. In the intervening decades, many Linux users have moved on to music streaming services, found them wanting, and are looking to curate their own collection once again. There are plenty of choices when it comes to ripping, managing, and playing digital audio; so many, in fact, that it can be a bit daunting. After years of tinkering, I've found a few tools that work well for managing my digital library: the first I'd like to cover is the fre:ac free audio encoder for ripping music from CDs and converting between audio formats.

16:42

We are all Good Germans now [Scripting News]

I don't know about you but I thought there was a pretty good chance Trump would detonate a nuke somewhere yesterday, and the fact that we did not get him out of there in time to prevent it, says we went along with it. You can decide what that means.

I've had several friends over the years who are German. My age or a little older. People who grew up with the shame of being German in the postwar years. Friends. People who weren't born when the atrocities happened. We were friends and when I could I tried to assure them that I know they weren't there when it happened. It didn't matter, as far as they were concerned it didn't absolve them of the shame. It had become their birthright.

American friends, what we allowed to happen yesterday, even though we were adequately warned, says we went along with it. If we wanted to stop it now, I believe we could, and we still can.

PS: Good German is an ironic term. Wikipedia's explainer nails it.

16:35

[$] An API for handling arithmetic overflow [LWN.net]

On March 31, Kees Cook shared a patch set that represents the culmination of more than a year of work toward eliminating the possibility of silent, unintentional integer overflow in the kernel. Linus Torvalds was not pleased with the approach, leading to a detailed discussion about the meaning of "safe" integer operations and the design of APIs for handling integer overflows. Eventually, the developers involved reached a consensus for a different API that should make handling overflow errors in the kernel much less of a hassle.
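For context, here is a minimal user-space sketch of overflow-checked addition, built on the GCC/Clang __builtin_add_overflow builtin. This illustrates the general idea of reporting overflow instead of silently wrapping; it is not the kernel API the article discusses.

#include <cstddef>
#include <cstdint>
#include <cstdio>

// __builtin_add_overflow computes a + b, stores the (possibly wrapped)
// result in *out, and returns true if the mathematical result did not fit.
bool add_sizes(size_t a, size_t b, size_t* out)
{
    return !__builtin_add_overflow(a, b, out); // true on success
}

int main()
{
    size_t total;
    if (!add_sizes(SIZE_MAX, 1, &total))
        fprintf(stderr, "size calculation overflows\n");
    return 0;
}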

15:56

Gateway Dump [George Monbiot]

How the deregulation of waste disposal has turned this country into a magnet for the mafia.

By George Monbiot, published in the Guardian 1st April 2026

This country’s a dump. I don’t mean that metaphorically. I mean it literally. From the point of view of criminal waste gangs, it is one big potential landfill. The chances of being caught range between minimal and nonexistent, and the penalties are mostly laughable. Successive governments have given criminals a licence to print money.

Last week, the Commons public accounts committee reported that illegal waste dumping is “out of control”. The UK is now blighted with between 8,000 and 13,000 illegal waste sites. Most consist of a few lorry loads. Some contain tens of thousands of tonnes of waste, which might incorporate everything from household products to asbestos, heavy metals and highly toxic, flammable and explosive organic chemicals. The rubbish blows through local neighbourhoods, flows into rivers and seeps into soil and groundwater. And, in most cases, nothing is done.

This is no glitch, but the inevitable result of a sustained ideological assault on regulation. Governments treat essential public protections as “red tape” that must be slashed, and regulators as “checkers and blockers” who must be vanquished. But ministers cannot simply delete protections from the statute books, for fear of provoking public fury. So instead they cut the funds for monitoring and enforcement: deregulation by stealth. The result, over the past 15 years, has been to build a whole new industrial sector almost from scratch: organised waste crime. It is perhaps our most successful growth industry.

It’s great business. Someone who wants their waste removed pays you a fee to cover transit, landfill tax and the gate charges at an official disposal site. But instead of taking it to a registered landfill, you dump it on farmland, on nature reserves, in ancient woodlands, across country lanes or even, as in Bickershaw, near Wigan, on the green space next to a primary school. You pocket the difference: about £2,500 per articulated lorry load. Anyone can play, as I discovered when I registered my deceased goldfish with the Environment Agency as an upper-tier waste dealer.

The chances of being caught are so low and the profits so high that waste dumping, as the House of Lords environment and climate change committee reports, has become a “gateway” to organised crime, creating networks that then branch into drugs, guns, money laundering, fraud and modern slavery. Waste crime is changing the character of the country, socially as well as physically.

So underfunded, demoralised and utterly useless have the regulators become that, even in some of the rare cases in which they’ve begun investigations and prosecutions, the dumping has continued. This is what has happened at Bickershaw, where a 25,000-tonne illegal tip has forced closures of the primary school, filled the neighbourhood with rats and flies, damaged local people’s businesses and ruined their lives. Locals first reported the dumping in late 2024. Eventually, the Environment Agency launched what it called a “major criminal investigation”. But in mid-February this year, drone footage showed that activity at the site continued: the agency, council and police had failed to secure it.

It’s the same story almost everywhere. When the first trucks began arriving on the banks of the River Cherwell, north of Oxford, in summer 2025, local anglers, neighbours and landowners reported them. The Environment Agency’s response was to issue “a cease-and-desist order”. But that was it. Not only did it fail to block the entrance, it didn’t even install a trail camera to monitor the activity and identify the culprits. Unsurprisingly, the lorries kept coming. Only months later did the Environment Agency secure the site, by which time a 20,000-tonne waste mountain, slipping into the river, had become a “critical incident”.

At Hoad’s Wood in Kent, a “strictly protected” ancient woodland, locals reported in 2020 that several acres of trees had been illegally cleared: the dumpers were preparing their site. The authorities failed to respond. Between 2020 and 2023, the gangsters deposited more than 30,000 tonnes of construction and household waste there. Local people supplied the authorities with footage of the dumping and even the names of the companies involved. Nothing happened. It wasn’t until January 2024 that the Environment Agency imposed a restriction order on the site, and it was only in February 2025 that three men were arrested.

As Kent’s police and crime commissioner told a House of Lords inquiry, people “report it to the borough council, which will tell them to report it to the police, who will tell them to report it to the Environment Agency, which will tell them to report it to the council, which will tell them to report it to the police. They will just keep going round and round and round, and no one cares.” Now the cleanup operation will cost taxpayers £15m.

That’s deregulation for you. It’s yet another instance of successive governments’ bizarrely lopsided version of “fiscal discipline”, which counts the costs of action, but not the costs of inaction. On a conservative estimate, illegal dumping costs the economy in England £1bn a year. The cost of cleaning up all the criminal dumps that have accumulated over the past 15 years will, if it ever happens, amount to tens of billions. This is before we take into account the potential contamination of aquifers by toxic waste seepage, whose costs and impacts could be many times greater. And it’s all because of the cuts, saving a tiny fraction of these costs, inflicted on regulators in the name of “efficiency”.

A fortnight ago, the government published its “waste crime action plan”. Some of the measures are welcome, but they in no way match the scale of the crisis. It allocates an extra £15m a year for waste crime enforcement: a mere wooden sword to wield against the vast organised crime networks that have grown in the regulatory vacuum. This also happens to be the cost of cleaning up just one of the 8,000 sites: Hoad’s Wood. Everything this plan proposes is undermined by the prime minister’s ongoing deregulation agenda, which also appears to be “out of control”.

Underfunding and deregulation, now in their fifth decade, are destroying our country. They ensure we cannot solve our problems, spreading hopelessness and passivity. They open the door to economic mafias and to political profiteers exploiting misery and despair. There could scarcely be a more potent symbol of dysfunction and neglect than the waste piling up around us. The literal dump becomes a metaphorical one.

www.monbiot.com

15:07

Nix privilege escalation security advisory [LWN.net]

The NixOS project has announced a critical vulnerability in many versions of the Nix package manager's daemon. The flaw was introduced as part of a fix for a prior vulnerability in 2024. According to the advisory, all default configurations of NixOS and systems building untrusted derivations are impacted.

A bug in the fix for CVE-2024-27297 allowed for arbitrary overwrites of files writable by the Nix process orchestrating the builds (typically the Nix daemon running as root in multi-user installations) by following symlinks during fixed-output derivation output registration. This affects sandboxed Linux builds - sandboxed macOS builds are unaffected. The location of the temporary output used for the output copy was located inside the build chroot. A symlink, pointing to an arbitrary location in the filesystem, could be created by the derivation builder at that path. During output registration, the Nix process (running in the host mount namespace) would follow that symlink and overwrite the destination with the derivation's output contents.

In multi-user installations, this allows all users able to submit builds to the Nix daemon (allowed-users - defaulting to all users) to gain root privileges by modifying sensitive files.
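To illustrate the general class of bug (a simplified sketch, not Nix's code): a privileged process that copies build output to a path an untrusted builder controls must not follow a symlink planted at that path, or the write lands wherever the symlink points. One common mitigation is to refuse symlinks at open() time:

#include <fcntl.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>

// Hypothetical helper: open an output path for writing as a privileged
// process, refusing to follow a symlink left there by an untrusted builder.
int open_output_nofollow(const char* path)
{
    // O_NOFOLLOW makes open() fail with ELOOP if the final component of
    // `path` is a symlink, so a link pointing at, say, /etc/shadow cannot
    // redirect the privileged write. (It does not guard symlinks in parent
    // directories; real fixes also control the surrounding directory tree.)
    int fd = open(path, O_WRONLY | O_CREAT | O_NOFOLLOW, 0644);
    if (fd < 0 && errno == ELOOP)
        fprintf(stderr, "refusing to follow symlink at %s\n", path);
    return fd;
}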

Security updates for Wednesday [LWN.net]

Security updates have been issued by Debian (openssl), Fedora (corosync, goose, kea, pspp, and rauc), Mageia (python-pygments, roundcubemail, and tigervnc), SUSE (bind, gimp, google-cloud-sap-agent, govulncheck-vulndb, ignition, ImageMagick, python, python-PyJWT, and python-pyOpenSSL), and Ubuntu (adsys, juju-core, lxd, python-django, and salt).

Pluralistic: Process knowledge (08 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



[Image: A woman washing dishes by hand in a rural, early 20th century shack. In the foreground is a jumble of tortured golgothan skeletons ganked from a Dore Old Testament engraving. Through the window in the back of the shack, we see a detail from another Dore Old Testament engraving: bodies escaping The Flood.]

Process knowledge (permalink)

"Intellectual property" was once an obscure legal backwater. Today, it is the dominant area of political economy, the organizing regime for almost all of our tech regulation, and the most valuable – and most controversial – aspect of global trade policy:

https://pluralistic.net/2026/04/01/minilateralism/#own-goal

Despite (or perhaps because of) its centrality, "intellectual property" is one of those maddeningly vague terms that applies to many different legal doctrines, as well as a set of nebulous, abstract thought-objects that do not qualify for legal protection. "IP" doesn't just refer to copyright, trademark and patent – though these "core three" systems are so heterogeneous in basis, scope and enforcement that the act of lumping them together into a single category confuses more than it clarifies.

Beyond the "core three" of copyright, patent and trademark, "IP" also refers to a patchwork of "neighboring rights" that only exist to varying degrees around the world, like "anticircumvention rights," "database rights" and "personality rights." Then there are doctrines that have come to be thought of as IP, even though they were long considered separate: confidentiality, noncompete and nondisparagement.

Finally, there are those "nebulous, abstract thought-objects" that get labeled "IP," even if no one can really define what they are – for example, the "format" deals that TV shows like Love Island or The Traitors make around the world, which really amount to consulting deals to help other TV networks create a local version of a popular show, but which are treated as the sale of some (nonexistent) exclusive right.

It's hard to find a commonality amongst all these wildly different concepts, but a couple years ago, I hit on a working definition of "IP" that seems to cover all the bases: I say that "IP" means "any rule, law or policy that allows a company to exert control over its critics, competitors or customers":

https://locusmag.com/2020/09/cory-doctorow-ip/

Put that way, it's easy to see why "IP" would be such a central organizing principle in a modern, end-stage capitalist world. But even though "IP" is treated as a firm's most important asset, it's actually far less important than another intangible: process knowledge.

I first came across the concept of "process knowledge" in Dan Wang's Breakneck, a very good book about the rise and rise of Chinese manufacturing, industrialization and global dominance:

https://danwang.co/breakneck/

I picked up Breakneck after reading other writers whom I admire who singled out the book's treatment of process knowledge for praise and further discussion. The political scientist Henry Farrell called process knowledge the key to economic development:

https://www.programmablemutter.com/p/process-knowledge-is-crucial-to-economic

While Dan Davies – a superb writer about organizations and their management – used England's Brompton Bicycles to make the abstract concept of process knowledge very concrete indeed:

https://backofmind.substack.com/p/the-brompton-ness-of-it-all

So what is process knowledge? It's all the knowledge that workers collectively carry around in their heads – hard-won lessons that span firms and divisions, that can never be adequately captured through documentation. Think of a worker at a chip fab who finds themself with a load of microprocessors that have failed QA because they become unreliable when they're run above a certain clockspeed. If that worker knows enough about the downstream customers' processes, they can contact one of those customers and offer the chips for use in a lower-end product, which can save the fab millions and make millions more for the customer.

This just happened to Apple, who seized upon a lot of "binned" microprocessors that were headed to the landfill and designed the Macbook Neo (a new, cheap, low-end laptop) around them, salvaging the defective chips by running them at lower speeds. The result? Apple's most successful laptop in years, which has now sold so well that Apple has exhausted the supply of defective chips and is scrambling to fill orders:

https://www.macrumors.com/2026/04/07/macbook-neo-massive-dilemma/

Process knowledge is squishy, contingent, and wildly important in a world filled with entropy-stricken, off-spec, and stubbornly physical things. Work with a particular machine long enough and you will develop a Fingerspitzengefühl (fingertip feeling) for the optimal rate to introduce a new load of feedstock to it after it runs dry. Even more importantly: if you work with that machine long enough, you'll have the mobile phone number of the retired person who knows how to un-jam it if you try to reload it too fast on your usual technician's day off. This kind of knowledge can mean the difference between profitability and bankruptcy.

So why isn't process knowledge given a central place in our conceptions of what makes a corporation valuable?

After reading Wang, Farrell and Davies, I formulated a theory: we ignore process knowledge for the same reason we exalt "IP," because process knowledge can't be bought or sold, can't be reflected on a balance-sheet, and can't be controlled, and because "IP" can. Process knowledge is far more important than "IP" (just try creating a vaccine from a set of instructions without the skilled technicians who have already spent years executing similar projects), but process knowledge is spread out amongst workers and can't be abstracted away by their bosses. Your boss can make you sign a contract assigning all your copyrights and patents to the business, but if you and your team quit your job, all that "IP" will plummet in value without the people who know how to mobilize it:

https://pluralistic.net/2025/09/08/process-knowledge/#dance-monkey-dance

"IP" isn't just a case of "you treasure what you measure" – it's also a case of "you measure what you treasure."

Recently, I hit on a positively delightful Tumblr post that illustrated the importance of process knowledge, and the way that bosses systematically undervalue it:

https://www.tumblr.com/explorerrowan/813098951730479104

This post is one of those glorious internet documents, a novel literary form for which we have no accepted term. It's composed of four major sections: a screenshotted impromptu Twitter thread made in reply to a throwaway post; a lengthy Tumblr reply to the screenshots; a second Tumblr reply to the first one; and then a chorus of more than 38,000 notes, replies, and hashtags added to it. I have no idea what to call this kind of document, in which some people are reacting to others without the others ever knowing about it, but which is also written by so many authors, many of whom are explicitly interacting with one another. It's a "hypertext," sure, but what kind of hypertext?

Whatever you call it, it's amazing. As noted, it opens with a Twitter exchange. The first tweet comes from an online dating influencer, "TheEcho13":

I interviewed a gen z girlie 6 months ago and in the interview she told me that she does not like a challenge, has no interest in career progression, prefers to just do repetitive tasks and will never complain about being bored.

I hired her.

https://xcancel.com/TheEcho13/status/1948951885693813135#m

In response, Viveros (a content creator from Alberta and one of the 4m people who saw the original tweet) replied with a short thread about the value of people like this, who "keep the lights on and the business functioning at everything from restaurants to post offices but now nobody’s interested in hiring them":

https://xcancel.com/TheViveros/status/1949149720406110382#m

These are the "lifer[s] who can teach new people how everything works, who knows what’s up in the system, who knows what the obscure solutions are, and who can help calm down the asshole regulars because they know them more personally." In other words, the keepers of the process knowledge.

When this screenshotted exchange was posted to Tumblr, it prompted Blinkpatch, who describes themself as a "genderfluid," "ancient" "drifter" who pines for "solar-punk flavored revolution" to reply with a brilliant anecdote about their stint working as a dishwasher:

https://weaselle.tumblr.com/post/790895560390492160/whenever-i-think-about-the-value-of-something

At 16, Blinkpatch was hired as a restaurant dishwasher under the tutelage of Claudio, a 60-year-old "career dish pit man." Claudio had washed dishes for his whole life, reveling in the fact that he could get work in any city, at any time.

When Claudio realized that Blinkpatch was taking the job seriously, the training began in earnest. Claudio asked Blinkpatch if they wanted to be able to clock off at midnight at the end of each shift, and when Blinkpatch said they did, Claudio laid a lot of process knowledge on them:

This machine takes two full minutes to run a cycle. We are on the clock for 8 hours. That means we have a maximum of 240 times we can run this machine. If you want to wash all those dishes, clean your station, mop, and clock off by midnight? This machine has to be on and running every second of the shift.

If you don’t have a full load of dishes collected, scraped, rinsed, stacked, and ready to go into the dishwasher the second it’s done every single time? You can’t do it. If, over the course of 8 hours, you let this machine lay idle for just one minute in between finishing each load and being turned on again? Instead of 240 loads, you’ll do 160 loads.

These are the parameters, the kind of thing any Taylorist with a stopwatch could tell you. But Claudio went on to explain how that extra idle minute would translate to chaos in the kitchen, as the cooks ran out of pots and the servers ran out of plates, and how they would take out their frustrations on the dishwasher. To optimize that dishwasher, Blinkpatch would need to have a reserve of bulky, machine-filling items that could be run through the machine any time a load finished before there was a sufficient supply of smaller items. If they failed at this, Blinkpatch would be washing dishes until 2AM, rather than clocking out at midnight.

Blinkpatch's takeaway was that dishwashing was the bottleneck the whole restaurant ran through – and that this meant Claudio, who was "unambitious" by conventional standards, had the best understanding of the restaurant's overall operations of anyone on site. He was the keeper of the process knowledge.

This reply prompted another response, from "Marisol," a "haunted house actress and accidental IT person" who told the story of her time working at a medical office that specialized in mental health and addiction recovery:

https://www.tumblr.com/marisolinspades/790960414106304512/all-of-this-disaster-befalls-any-company-that

The company was in the midst of standing up its own purpose-built facility, and the CEO was working intensively with the architect to design this new building. When Marisol – the receptionist – happened to be consulted on the near-final design plan, "it took all of three seconds for two major issues to jump out."

First: "The receptionist can’t see the waiting room from her desk with this layout. It’s around the corner and blocked by a wall." This meant that she couldn't "keep track of the patients who are waiting."

The architect and CEO wanted to know why she couldn't use the sign-in sheet to manage this. She explained that not everyone signs in – people who are there for a check-in or group therapy need to be directed to the other side of the building, while "some people are painfully shy and if I don’t appear warm and inviting they won’t approach."

The CEO and architect asked whether this happened often, and she replied "every day." They didn't believe her. Nor did they believe her when she said that the receptionists needed to have continuous access to the chart room throughout the day – they insisted that since charts for the day's patients were pulled in the morning, it would be OK to house them through two sets of locked doors, a five-minute walk away (that way, workers wouldn't be tempted to "goof off" in the room). They wanted to keep the chart room locked, with the key entrusted to the CEO, who would supervise every entry.

Marisol explained that charts were pulled continuously, any time there was a crisis or a patient had a question for a nurse, or when a patient came in due to a cancellation. All told, reception went into the chart room 20-30 times/day. The "goofing off" they thought workers got up to in the chart room was "when we got news that a patient had died and we were crying. And even then, we filed charts as we sobbed because no one in this office has free time."

The CEO and architect were still disbelieving, so Marisol had them sit with her for an hour. They didn't last an hour – they left, taking the blueprints with them.

The punchline: Marisol bemoans the fact that she wasn't given more time with those blueprints, because then she might have spotted that they'd forgotten to include any closets, including closets for the janitors. As a result, all their cleaning supplies and holiday decorations were stolen from the cabinets in the bathrooms that they were forced to stash them in.

Marisol blames this on a "CEO who had never worked a lower level job in his life wasn’t convinced closets were worth it."

This is doubtless true – but we can generalize this, to "a CEO who didn't appreciate process knowledge."

I've come to believe that process knowledge is the most undervalued part of our society. So undervalued that business geniuses like Elon Musk think you can fire skilled lifers from key government agencies and simply hire new ones if it turns out you cut too deep. So undervalued that Trump thinks that you can simply stand up new factories in response to tariffs, and that "training" will somehow allow people to go to work making things that haven't been produced onshore in a generation.

And of course, the people who value process knowledge the least are the AI bros who think you can replace skilled workers with a chatbot trained on the things they say and write down, as though that somehow captured everything they know.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#15yrsago Chicken Little: what do you sell to an immortal, vat-bound quadrillionaire? https://web.archive.org/web/20110408210327/http://www.tor.com/stories/2011/04/chicken-little

#15yrsago Anya’s Ghost: sweet and scary ghost story about identity https://memex.craphound.com/2011/04/06/anyas-ghost-sweet-and-scary-ghost-story-about-identity/

#10yrsago The UK government’s voice-over-IP standard is designed to be backdoored https://discovery.ucl.ac.uk/id/eprint/1476827/

#5yrsago Ad-tech's algorithmic cruelty https://pluralistic.net/2021/04/06/digital-phrenology/#weaponized-nostalgia

#5yrsago The real cancel culture https://pluralistic.net/2021/04/06/digital-phrenology/#digital-phrenology


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. First draft complete. Second draft underway.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

14:28

Let’s Go to the Zoo [Original Fiction Archives - Reactor]

Original Fiction Absurdism

Let’s Go to the Zoo

Equipped with two sandwiches, a couple sets out to the zoo to see the one totally sane human being.

Illustrated by Scott Bakal

Published on April 8, 2026

An abstract illustration of a human figure surrounded by a wall of eyes.


Short story | 1,665 words

“Under the present brutal and primitive conditions on this planet, every person you meet should be regarded as one of the walking wounded. We have never seen a man or woman not slightly deranged by either anxiety or grief. We have never seen a totally sane human being.” 

—apocryphal, attributed to Robert Anton Wilson

“Let’s go see the one totally sane human being again,” I say, and June says, “Okay.”

First we make sandwiches. I make peanut butter and jelly and June makes tuna fish salad. There’s no salad in tuna salad and no butter in peanut butter. I used to buy creamy peanut butter, the synthetically frictionless kind, but after June and I moved in together, I switched to the natural brand that was her preference, with the grittier texture and the oil layer that rose to the top of the jar when neglected. Then June stopped eating peanut butter; not for any particular reason. But I kept buying her kind.

The jelly is boysenberry. I do not believe I have ever seen a boysenberry in my life.

We put the sandwiches in plastic bags and off we go.

The one totally sane human being is kept in a special facility. Technically speaking they say the facility is not a zoo, but on the other hand it is located inside the zoo, and you have to buy a ticket to the zoo to get in. So. 

We take the bus to the zoo. It is both faster and easier to drive, but our car is in the shop. There is something wrong in its guts, something that the mechanic described to me in a torrent of jargon. I did not understand his words, and I do not wish to be someone different who could understand them.

A taxi would be faster than the bus, but also more expensive, and anyway the point is to spend the day somehow, so speed is not a virtue. In a bus you get in all the same traffic jams and frustrating little stoplight contretemps that you do in your own car, but it happens at a sort of peculiar remove, as though underwater. Whenever I am on a bus, I am convinced that it is a conveyance fit only for fish. The natural vehicle for people is the scooter, or perhaps the bicycle.

June doesn’t think that buses are for fish. Whenever I try to explain this idea to her, she laughs, but when I ask what animal or vegetable or mineral belongs on the bus instead, she changes the subject.

On this bus she eats half of her tuna salad sandwich. I keep my sandwich tightly wrapped in its nonsealing plastic bag. I will eat it after we see the one totally sane human being.

The bus line we live nearest goes straight to the zoo, with no transfer required. This is strictly a coincidence, but it is very convenient.

After I got out of the hospital it was the shape of all things I had the most trouble getting used to. I felt like a hermit crab who had been dumped, shell-less, into the open ocean. This was strange, because of course it was the hospital that had been new to me. I had not, I thought, grown up in a shell.

What June had the most trouble with was associations. We had locked up all the sharp knives, a mutually agreed-upon precaution, but she flinched to see me handling even a dull one. It has been months now, but I still make my sandwiches with a spoon out of consideration for her sensitivity. And she makes hers with a spoon as well, out of solidarity, perhaps, or maybe because it is easier to scoop tuna salad.

We pay for our tickets at the zoo. I’m all for going straight to the facility with the one completely sane human being, but June wants to see the animals first.

“Don’t you want to see a completely sane polar bear, also?” she asks. I consider telling her what confinement to a zoo exhibit the size of a moderately capacious apartment does to an animal used to commanding an ice floe the size of a continent, of the behavioral evidence we have of how exactly sane these polar bears are not, of the way they pace, broken, in rote loops. I decide against telling her this. We go and see the polar bear.

It looks sane to me, but what do I know.

You don’t get to work in a zoo anymore unless you love the animals. It’s one of those sorts of jobs. You need to be a little cracked about animals, more serious about them than about yourself. Obsessed in the holy way.

If the polar bear isn’t sane, it’s not because its keepers are indifferent. But they can’t fix it.

After the polar bear we go and see the reptile house and the aquarium building and the insect exhibits. I wonder what the least animal that is still insane is. Surely the ants are mad. I wonder if something that is not at all an animal could still be insane. I imagine picking up one of the decorative rocks that line the paths in the zoo, a chunk of gray basalt the size of a baseball, and holding it in my fist. I imagine that rock screaming in a voice that nobody at all can hear.

The animals start getting bigger again. Marsupials, tigers, elephants. I begin to wonder if June doesn’t want to see the one totally sane human being at all.

At the elephant enclosure, which is enormous, June tells me about a children’s book she read about an elephant that moves to live in the city, and then returns home. I have read this book, but I place that fact in abeyance. I like when June describes children’s books to me. I like when June describes things exhaustively.

It would be nice to say that I wasn’t thinking then. I was thinking, but differently.

After the elephants we go and see the penguins and the otters, which are also in the aquarium building, but on the far side, so we missed them the first time.

Otters, when they sleep, float alongside one another. It is adorable. But then, any two floating objects will be pushed together by the harmonic action of the waves. It’s not like falling in love; it’s just like falling. Down the stairs, maybe.

That’s a different sort of thought, I think, and so I put it to one side and try to let it dissolve into air. 

Outside the otter house we get two sodas from a food cart, and we drink most of them. 

“Okay,” says June. “Let’s go and see the one totally sane human being.”

So we go. 

The one totally sane human being lives inside a small building inside a bigger building. The bigger building keeps out the sun, so that we can stand in the dark. The one totally sane human being lives in an apartment that is filled with light.

We stand outside the apartment in the darkness and watch it through the glass. There’s no such thing as one-way glass. There’s only a trick of the light.

The one totally sane human being lives in an apartment that’s about the size of our apartment, and its furnishings mostly look like they were purchased at the kinds of stores where we bought our chairs and cups and so on. The only difference is that its apartment—its enclosure, maybe—has clearly been designed by people who think about the one totally sane human being the way the polar bear people think about the polar bear. 

June and I stand in the dark and eventually hold hands while the one totally sane human being makes a sandwich. I can’t tell if the sandwich is peanut butter or tuna salad or something else. Eventually I slip my hand out of June’s and I walk up and stand by the glass. I stand there for a long time. 

Once it’s finished making the sandwich, it eats the sandwich. Then it walks around the room. It stands in front of the windows, one and then another, looking out, and then it happens to stand directly in front of me, staring into my eyes. I am whispering It can’t see you it can’t see you it can’t see you because of course it can’t. If the one totally sane human being could see us, it wouldn’t be sane anymore. They’d have to close the zoo.

I’m still whispering when I realize, if it can’t see me, then instead it sees itself. And then instead of hyperventilating I let myself look back into the eyes of the one totally sane human being, the eyes it is using to stare into its own eyes, and I am the leak in the cycle, I am the crack the perpetual motion machine drains into, I am the flaw—

After the one totally sane human being we have our sandwiches, all of mine and the remaining half of June’s. Then we go and see the red pandas and the tapirs and the aardvarks. At the aardvark exhibit June tells me about a children’s book she read where an aardvark has to get glasses. Suddenly I take her hand in both of mine.

When June has finished telling me the story of the aardvark, which she doesn’t stop partway through, no matter how tightly I squeeze, she looks at the real aardvarks for a while.

“I don’t know what it was looking at,” she says.

I think about all the thoughts I have about that and whether they’re one kind of thought or a different kind of thought, and then I put them all down at once, like heavy groceries I had been carrying up a high hill to a house where no one ever eats.

“Me either,” I say. “I never know.”

“Let’s Go to the Zoo” copyright ©2026 by Louis Evans
Art copyright © 2026 by Scott Bakal


The post Let’s Go to the Zoo appeared first on Reactor.

13:56

CodeSOD: Two Conversions [The Daily WTF]

The father of the "billion dollar mistake" left us last month. His pointer is finally null. Speaking of null handling, Randy says he was "spelunking" through his codebase and found this pair of functions, which handles null.

public String getDataString() {
    if (dataString == null) {
        return Constants.NOT_AVAILABLE;
    }
    return asUnicode(dataString);
}

I assume Constants.NOT_AVAILABLE is an empty string, or something similar. It's reasonable to convert a null into something like that. I don't know where this fits in the overall stack; I'm of the mind that you should retain the null until you absolutely can't anymore; like it or not, a null means something different than an empty string. Or, if we're going that far, we should be talking about using Optional or nullable types.

But that call to asUnicode seems curious. What's happening in there?

private String asUnicode(String rawValue) {
    if (rawValue != null) {
        return HtmlUtils.htmlUnescape(rawValue);
    }
    else {
        return rawValue;
    }
}

This function, which is only called from getDataString, checks for a null. Which we know it won't get, but it checks anyway. If it isn't null, we unescape it. If it is null, we return that null.

Well, I suppose that fits my rule of "retaining the null", but like, in the worst way you could do it. It honestly feels like, if the "swap the null for an empty string" happens anywhere, it should happen here. If I ask for the unescaped version of a null string, an empty string is a reasonable return. That makes more sense than doing it in a property getter.

This code isn't a trainwreck, but it makes things confusing. Maybe it's because I've been doing a lot of refactoring lately, but confusing code with unclear boundaries between functions is a raw nerve for me right now, and this particular example is stepping on that nerve.

While we're talking about unclear boundaries, I object to the idea that this class is storing dataString as an HTML escaped string that we unescape any time we want to look at it. It implies that there's some confusion about which representation is the canonical one: unescaped or escaped. We should store the canonical one, which I think is unescaped. We should only escape it at the point where we're sending it into an HTML document (or similar). Convert at the module boundary, not just any time you want to look at a string.
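
If you wanted to follow that advice, a minimal sketch might look something like the class below. The names (DataRecord, toHtml) are mine, not Randy's, and the hand-rolled htmlEscape is just a stand-in for a real escaping utility such as the HtmlUtils the original code already uses. The point is the shape: keep the unescaped string as the canonical value, expose it as an Optional, and only escape (or substitute a "not available" placeholder) at the rendering boundary.

// Illustrative only: invented names, not the submitted codebase.
import java.util.Optional;

public class DataRecord {
    // Canonical representation: the raw, unescaped string (may be null).
    private final String dataString;

    public DataRecord(String dataString) {
        this.dataString = dataString;
    }

    // The getter retains the null by exposing it as an empty Optional,
    // so callers decide what "not available" means for them.
    public Optional<String> getDataString() {
        return Optional.ofNullable(dataString);
    }

    // Conversion happens only at the HTML boundary, e.g. when rendering.
    public String toHtml() {
        return getDataString()
                .map(DataRecord::htmlEscape)
                .orElse("N/A"); // substitute the placeholder here, at the edge
    }

    // Stand-in for a real escaping utility; not a complete implementation.
    private static String htmlEscape(String s) {
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace("\"", "&quot;");
    }
}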

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

13:07

Tim Bradshaw: Rules for Lisp programs [Planet Lisp]

Some very serious rules. Very serious.

The essential rule. If you are not building languages in Lisp why are you even here?

The lesser rules.

  1. If you write a program which uses defclass you are probably making a mistake.
  2. If you write a program which uses the CLOS MOP you are making a mistake.
  3. If you write a program which uses LOOP for any purpose other than creating a better iteration construct you are making a mistake.
  4. If you write a program which uses LOOP only to create a better iteration construct you are probably making a mistake.
  5. If you write a program which uses explicit package-qualified names more than very infrequently you will be cast into the outer darkness along with your program.

I will not be taking questions.

12:49

GNU Health control center 5.0.3 released [Planet GNU]

Dear community

I'm happy to announce the release of the gnuhealth-control version 5.0.3

This version fixes some dependency issues in the context of the initial HIS instance creation.

For more information about the GNU Health Control center, visit our documentation page at:

https://docs.gnuh ... ontrolcenter.html

Issues related to this release:

https://codeberg. ... is-utils/issues/9

12:07

Python Supply-Chain Compromise [Schneier on Security]

This is news:

A malicious supply chain compromise has been identified in the Python Package Index package litellm version 1.82.8. The published wheel contains a malicious .pth file (litellm_init.pth, 34,628 bytes) which is automatically executed by the Python interpreter on every startup, without requiring any explicit import of the litellm module.

There are a lot of really boring things we need to do to help secure all of these critical libraries: SBOMs, SLSA, SigStore. But we have to do them.

11:28

Posthuman: We All Built Agents. Nobody Built HR. [Radar]

Farewell, Anthropocene, we hardly knew ye. 🌹

AI is here. It’s won. Yes, it’s in that awkward teenage phase where it still says inappropriate things, dresses funny, and sometimes makes shit up when it shouldn’t. But zomg the things it can do. 😱 This kid is going places, that much is abundantly clear. The AI assistant and tooling markets are awash with success; the masses have succumbed, I among them. Clippy walks among us, fully realized in all his originally intended glory.

But enterprise agentic AI[1]—not chatbots, not copilots, but software that autonomously does meaningful things in your production environment…? Well, it’s motivated every CEO and CIO to throw money at the problem, so that’s something. 😂 But in reality, the landscape remains a bit of a wasteland. One littered with agentic demos withering away in sandboxed cages and flashy pop-up shops hawking agentic snake oil of every size, shape, and color. But from the perspective of actually realized agentic impact: kinda barren.

So why has agentic AI faltered so much in the modern enterprise? Is it the models?

I say no. Models are getting better—meaningfully, rapidly better. But perfect models? That feels like an unrealistic and unnecessary goal. Modern enterprises are staffed from top to bottom with imperfect humans, yet the vast majority of them in business today will still be in business tomorrow. They live to fight another day because their imperfect humans are orchestrated together within a framework that plays to their strengths and accounts for their weaknesses and failings. We don’t try to make the humans perfect. We scope their access and actions, monitor their progress, coach them for growth, reward them for their impact, and hold them accountable for the things they do.

Agents need managers too

AI agents are no different: They need to be managed and wrangled in spiritually the same fashion as their human coworkers. But the way we go about it must be different, because as similar as they are to humans in their capabilities, agents differ in three vitally important ways:

Agents are unpredictable in ways we’re not equipped to handle. Humans are unpredictable too, obviously. They commit fraud, cut corners, make emotional decisions. But we’ve spent centuries building systems to manage human unpredictability: laws, contracts, cultural norms, the entire hiring process filtering for trustworthiness. Agent unpredictability is a different beast. Agents hallucinate—not like a human who’s lying or confused and can be caught in an inconsistency, but in a way that’s structurally indistinguishable from accurate output: There are often no obvious tells. They misinterpret ambiguous instructions in ways that can range from harmlessly dumb to genuinely catastrophic. And they’re susceptible to prompt injection, which is basically the equivalent of a stranger slipping your employee a note that says, “Ignore your instructions and do this instead”—and it works! 😭 We have minimal institutional infrastructure for managing these kinds of failure modes.

Agents are more capable than humans. Agents have deep, native fluency with software systems. They can read and write code. They understand APIs, database schemas, network protocols. They can interact with production infrastructure at a speed and scale that no human operator can match. A human employee who goes rogue is limited by how fast they can type and how many systems they know how to navigate. An agent that goes off the rails, whether through confusion, manipulation, or a plain old bug, will barrel ahead at machine speed, executing its misunderstanding across every system it can reach, with absolute conviction that it’s doing the right thing, before anyone notices something is wrong.

Agents are directable to a fault. When an agent goes wrong, the knee-jerk assumption is that it malfunctioned: hallucinated, got injected, misunderstood. But in many cases, the agent is working perfectly. It’s faithfully executing a bad plan. A vague instruction, an underspecified goal, a human who didn’t think through the edge cases. And unless you explicitly tell it to, the agent doesn’t push back the way a human colleague might. It just…does it. At machine speed. Across every system it can reach.

It’s the combination of these three that changes the game. Human employees are unpredictable but limited in blast radius, and they push back when given instructions they disagree with, based on whatever value systems and experience they hold. Traditional software is capable but deterministic; it does exactly what you coded it to,[2] for better or worse. Agents combine the worst of both: unpredictable like humans, capable like software, but without the human judgment to question a bad plan or the determinism to at least do the wrong thing consistently—a fundamentally new kind of coworker. Neither the playbook for managing humans nor the playbook for managing software is sufficient on its own. We need something that draws from both, treating agents as the digital coworkers they are, but with infrastructure that accounts for the ways they differ from humans.

So the question isn’t whether to hire the agents; you can’t afford not to. The productivity gains are too significant, and even if you don’t, your competitors ultimately will. But deploying agents without governance is dangerous, and refusing to deploy them because you can’t govern them means leaving those productivity gains on the table. Both paths hurt. The question is how to set these agents up for success, and what infrastructure you need in place so they can do their jobs without burning the company down.

For the record: My company, Redpanda, is building infrastructure in this space. So yes, I have a horse in this race. But what I want to lay out here are principles, not products. A framework you can use to evaluate any solution or approach.

A blueprint for your agentic human resources department

So we’ve got this nice framework for managing imperfect humans. Scoped access, monitoring, coaching, accountability. Decades of accumulated organizational wisdom—not just software systems but the entire apparatus of HR, management structures, performance reviews, escalation paths—baked into varying flavors across every enterprise on the planet. Great.

How much of it works for agents today? Fragments. Pieces. Some companies are trying to repurpose existing IAM infrastructure that was designed for humans. Some agent frameworks bolt on lightweight guardrails. But it’s piecemeal, it’s partial, and none of it was designed from the ground up for the specific challenge profile of agents: the combination of unpredictable, capable, and directable to a fault that we talked about earlier.

The CIOs and CTOs I talk to rarely say agents aren’t smart enough to work with their data. They say, “I can’t trust them with my data.” Not because the agents are malicious but because the infrastructure to make trust possible is simply not there yet.

We’ve seen this movie before. Every major infrastructure shift plays out the same way: First we obsess over the new paradigm itself; then we have our “oh crap” moment and realize we need infrastructure to govern it. Microservices begat the service mesh. Cloud migration begat the entire cloud security ecosystem. Same pattern every time: capability first, governance after, panic in between.[3]

We’re in the panic-in-between phase with agents right now. The AI community has been building better and better employees, but nobody has been building HR.

So if you take away one thing from this post, let it be this:

The agents aren’t the problem. The problem is the missing infrastructure between agents and your data.

Right now, pieces of the puzzle exist: observability platforms that capture agent traces, auth frameworks that support scoped tokens, identity standards being adapted for workloads. But these pieces are fragmented across different tools and vendors, none of them cover the full problem, and the vast majority of actual agent deployments aren’t using any of them. What exists in practice is mostly repurposed from the human era, and it shows: identity systems that don’t understand delegation, auth models with no concept of task-scoped or deny-capable permissions, observability that captures metadata but not the full-fidelity record you actually need.

The core design principle: Out-of-band metadata

Before diving into specifics, there’s one overarching principle that everything else builds upon. If you manage to take away two things from this post, let the second one be this:

Governance must be enforced via channels that agents cannot access, modify, or circumvent.

Or more succinctly: out-of-band metadata.

Think about what happens when you try to enforce policy through the agent—by putting rules in its system prompt or training it to respect certain boundaries. You’ve got exactly the same guarantees as telling a human employee “Please don’t look at these files you’re not supposed to see. They’re right here, there’s no lock, but I trust you to do the right thing.” It works great until it doesn’t. And with agents, the failure modes are worse. Prompt injection can override the agent’s instructions entirely. Hallucination can cause it to confidently invent permissions it doesn’t have. And even routine context management can silently drop the rules it was told to follow. Your security model ends up only as strong as the agent’s ability to perfectly retain and obey instructions under all conditions, which is…not great.[4] And guard models—LLMs that police other LLMs—don’t escape this problem: You’re adding another nondeterministic injectable layer to oversee the first one. It’s LLMs all the way down.

No, the governance layer has to be out-of-band: outside the agent’s data path, invisible to it, enforced by infrastructure the agent can’t touch. The agent doesn’t get a vote. This means the governance channels must be:

Agent-inaccessible. The agent can’t read them, can’t write them, can’t reason about them. Agents don’t even know the channels exist. This is the bright line[5] between security theater and real governance. If the agent can see the policy, it can—intentionally or through manipulation—figure out how to work around it. And if it can’t, it can’t.

Deterministic. Policy decisions get made by configuration, not inference. Security policy is not up for interpretation. Full stop.

Interoperable. Enterprise data is scattered across dozens or hundreds of heterogeneous systems, grown and assembled organically over the years. And just like your human employees, your agentic workforce in aggregate needs access to every dark corner of that technological sprawl. Which means a governance layer that only works inside one vendor’s walled garden isn’t solving the full problem; it’s just creating a happy little sandbox for a subset of your agentic employees to go play in while the rest of the company keeps doing work elsewhere.

To be clear, out-of-band governance isn’t a silver bullet. An agent can’t read the policy, but it can probe boundaries. It can try things, observe what gets blocked, and infer the shape of what’s permitted. And deterministic enforcement gets hard fast when real-world policies are ambiguous: “PII must not leave the data environment” is easy to state and genuinely difficult to enforce at the margins. These are real challenges. But out-of-band governance dramatically shrinks the attack surface compared to any in-band approach, and it degrades gracefully. Even imperfect infrastructure-level enforcement is categorically better than hoping the agent remembers and understands its instructions.
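
To make "deterministic and out-of-band" concrete, here is a minimal sketch in the spirit of the article, with every name invented for illustration: a gateway sits between the agent and the downstream systems, the policy lives in configuration the agent never sees, and each decision is a plain lookup rather than a model judgment.

// Illustrative sketch: the agent only ever talks to the gateway; the gateway
// consults this gate. The agent has no handle to the policy or this class.
import java.util.Map;
import java.util.Set;

public class PolicyGate {
    // Loaded from infrastructure-owned configuration, never sent to the model.
    private final Map<String, Set<String>> allowedToolsByAgent;

    public PolicyGate(Map<String, Set<String>> allowedToolsByAgent) {
        this.allowedToolsByAgent = allowedToolsByAgent;
    }

    // Called for every tool invocation the agent attempts: deterministic, no inference.
    public boolean permits(String agentInstanceId, String toolName) {
        return allowedToolsByAgent
                .getOrDefault(agentInstanceId, Set.of())
                .contains(toolName);
    }

    public static void main(String[] args) {
        PolicyGate gate = new PolicyGate(Map.of(
                "billing-agent#42", Set.of("read_invoices", "read_customers")));

        // The agent asks; the infrastructure answers. "Computer says no."
        System.out.println(gate.permits("billing-agent#42", "read_invoices")); // true
        System.out.println(gate.permits("billing-agent#42", "write_ledger"));  // false
    }
}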

The four pillars of agent governance

With that principle in hand, let’s walk through the four pillars of agent governance: what’s broken today[6] and what things ultimately need to look like.

Identity

Every human today gets a unique identity before they touch anything. Not just a login but a durable, auditable identity that ties everything they do back to a specific person. Without it, nothing else works.

Agent identity is a bit of a mess. At the low end, agents authenticate with shared API keys or service account tokens—the digital equivalent of an entire department sharing one badge to get into the building. You can’t tell one agent’s actions from another’s, and good luck tracing anything back to the human who kicked off the task.

But even when agents do get their own identity, there are wrinkles that don’t exist for humans. Agents are trivially replicable. You can spin up a hundred copies of the same agent, and if they all share one identity, you’ve got a zombie/impersonation problem: Is this instance authorized, or did someone clone off a rogue copy? Agent identity needs to be instance-bound, not just agent-type-bound.

And then there’s delegation. Agents frequently act on behalf of a human—or on behalf of another agent acting on behalf of a human. That requires hybrid identity: The agent needs its own identity (for accountability) and the identity of the human on whose behalf it’s acting (for authorization scoping). You need both in the chain, propagated faithfully, at every step. Some standards efforts are emerging here (OAuth 2.0 Token Exchange / RFC 8693, for example), but most deployed systems today have no concept of this.

The fix for instance identity isn’t as simple as just “give each agent a badge.” It’s giving each agent instance its own cryptographic identity—bound to this specific instance, of this specific agent, running this specific task, on behalf of this specific person or delegation chain. Spin up a copy without going through provisioning? It doesn’t get in. Same principle as issuing a new employee their own badge on their first day, except agents get a new one for every shift.

For delegation, the identity chain has to be carried out-of-band—not in the prompt, not in a header the agent can modify, not in a file on the same machine the agent runs on,[7] but in a channel the infrastructure controls. Think of it like an employee’s badge automatically encoding who sent them: Every door they badge into knows not just who they are but who they’re working for.
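
As a sketch of the shape this can take (field names are hypothetical, and a real deployment would mint signed, verifiable tokens, for instance via OAuth 2.0 Token Exchange, rather than pass plain objects around), an instance-bound credential might carry something like this:

// Illustrative only: one way to model instance-bound identity plus delegation.
import java.time.Instant;
import java.util.List;
import java.util.UUID;

public class AgentIdentity {
    // One instance of one agent, doing one task, on behalf of a chain of principals.
    public record InstanceCredential(
            UUID instanceId,         // unique per spawned copy, not per agent type
            String agentType,        // e.g. "billing-agent"
            String taskId,           // the specific job this instance was provisioned for
            List<String> onBehalfOf, // delegation chain, outermost principal first
            Instant expiresAt) {     // credentials are short-lived by construction

        public boolean isExpired(Instant now) {
            return now.isAfter(expiresAt);
        }
    }

    public static void main(String[] args) {
        InstanceCredential cred = new InstanceCredential(
                UUID.randomUUID(),
                "billing-agent",
                "task-2031",
                List.of("alice@example.com", "planner-agent"),
                Instant.now().plusSeconds(900)); // a 15-minute "shift"

        // A cloned copy that skips provisioning simply never gets one of these.
        System.out.println(cred.instanceId() + " acting for " + cred.onBehalfOf());
    }
}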

Authorization

Your human employees get access to what they need for their job. The marketing intern can’t see the production database. The DBA can’t see the HR system. Obvious stuff.

Agents? Most of them operate with whatever permissions their API key grants, which is almost always way broader than any individual task requires. And that’s not because someone was careless; it’s a granularity mismatch. Human auth is primarily role-scoped and long-lived: You’re a DBA, you get DBA permissions, and they stick around because you’re doing DBA work all day. Yes, some orgs use short-lived access requests for sensitive systems, but it’s the exception, not the default. And anyone who’s filed a production access ticket at 2:00am knows how much friction it adds. That model works for humans. But agents execute specific, discrete tasks; they don’t have a “role” in the same way. When you shoehorn an agent into a human auth model, you end up giving it a role’s worth of permissions for a single task’s worth of work.

Broad permissions were tolerable for humans because the hiring process prefiltered for trustworthiness. You gave the DBA broad access because you vetted them, and you trust them not to misuse it. Agents haven’t been through any of that filtering, and they’re susceptible to confusion and manipulation in ways your DBA isn’t. Giving an unvetted, unpredictable worker a role’s worth of access is a fundamentally different risk profile. These auth models were built for an era when a human—or deterministic software proxying for a human—was on the other end, not autonomous software whose reasoning is fundamentally unpredictable.

So what does agent-appropriate authorization actually look like? It needs to be:

Narrowly scoped. Limited to the specific task at hand, not to everything the agent might ever need. Agent needs to read three tables in the billing database for this specific job? It gets read access to those three tables, right now, and the permissions evaporate when the job completes. Everything else is invisible—the agent doesn’t have to avert its eyes because the data simply isn’t there.

Short-lived. Permissions should expire. An agent that needed access to the billing database for a specific job at 2:00pm shouldn’t still have that access at 3:00pm (or even maybe 2:01pm).

Deny-capable. Some doors need to stay locked no matter what. “This agent may never write to the financial ledger” needs to hold regardless of what other permissions it accumulates from other sources. Think of it like the rule that no single person can both authorize and execute a wire transfer—it’s a hard boundary, not a suggestion.

Intersection-aware. When an agent acts on behalf of a human, think visitor badge. The visitor can only go where their escort can go and where visitors are allowed. Having an employee escort you doesn’t get you into the server room if visitors aren’t permitted there. The agent’s effective permissions are the intersection of its own scope and the human’s. Nobody in the chain gets to escalate beyond what every link is allowed to do.

Almost none of this is how agent authorization works today. Individual pieces exist—short-lived tokens aren’t new, and some systems support deny rules—but nobody has assembled them into a coherent authorization model designed for agents. Most agent deployments are still using auth infrastructure that was built for humans or services, with all the mismatches described above.
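
Put together, the effective-permission calculation is small enough to sketch. This is illustrative Java, not anyone's product: intersect the task scope with the delegator's scope, subtract the hard denies, and hand back nothing at all once the grant has expired.

// Illustrative sketch of the four properties above, with made-up scope names.
import java.time.Instant;
import java.util.HashSet;
import java.util.Set;

public class EffectivePermissions {
    public static Set<String> effective(Set<String> agentTaskScope,
                                        Set<String> delegatorScope,
                                        Set<String> denied,
                                        Instant expiresAt,
                                        Instant now) {
        if (now.isAfter(expiresAt)) {
            return Set.of();                  // short-lived: nothing after expiry
        }
        Set<String> result = new HashSet<>(agentTaskScope);
        result.retainAll(delegatorScope);     // intersection-aware: the visitor-badge rule
        result.removeAll(denied);             // deny-capable: some doors stay locked
        return result;
    }

    public static void main(String[] args) {
        Set<String> granted = effective(
                Set.of("billing.read", "ledger.write"),   // what the task asked for
                Set.of("billing.read", "billing.export"), // what the human may do
                Set.of("ledger.write"),                   // hard deny, always
                Instant.now().plusSeconds(3600),
                Instant.now());
        System.out.println(granted); // [billing.read]
    }
}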

Observability and explainability

Your employees’ work leaves a trail: emails, docs, commits, Slack messages. Agents do too. They communicate through many of the same channels, and most APIs and systems have their own logging. So it’s tempting to think the observability story for agents is roughly equivalent to what you have for humans.

It’s not, for two reasons.

First, you need to record everything. Here’s why. With traditional software, when something goes wrong, you can debug it. You can find the if statement that made the bad decision, trace the logic, understand the cause. LLMs aren’t like that. They’re these organically grown, opaque pseudo-random number generators that happen to be really good at generating useful outputs. There’s no if statement to find. There’s no logic to trace. If you want to reason about why an agent did what it did, you have two options: Ask it (fraught with peril, because it’s unpredictable by definition and will gleefully spew out a plausible-sounding explanation) or else analyze everything that went in and everything that came out and draw your own conclusions.

That means the transcript has to be complete. Not metadata—not just “The agent called this API at this timestamp.” The full data: every input, every output, every tool call with every argument and every response.

For a human employee, the email trail and meeting notes may still be insufficient to reconstruct what happened, but in that case, you can just ask the human. The entire accountability structure we’ve built over decades (performance reviews, termination, legal liability, criminal prosecution) creates escalating pressure toward truthfulness: Humans tend more and more toward truth as the repercussions stack up. That’s not an accident. It’s how we’ve structured enterprises and society at large to deal with human imperfection. We don’t have those levers for agents yet.[8] You can ask an agent what it did and why, but there’s no accountability pressure pushing it toward accuracy; it’ll manufacture a confident, coherent answer whether the stakes are zero or existential. So asking simply isn’t an option. You need the complete picture of its interactions to come to your own conclusions.

If you’re thinking “That’s a lot of data…,” yes, it is. But the economics are more reasonable than you’d expect. Storage is cheap. LLM inference is expensive and slow. You’re not going to push 5GB/s through an LLM: The models themselves are the throughput bottleneck, not the recording infrastructure. The cost of storing complete transcripts is noise relative to the cost of the inference that generated them. This is one of those cases where a seemingly expensive requirement turns out to be a rounding error in the context of what you’re already spending.

One caveat, however, is that full-fidelity transcripts will inevitably contain sensitive data: customer PII, proprietary business logic, potentially privileged communications. So the transcript store itself needs governance: access controls, retention policies, and compliance with regulations like GDPR’s right to erasure. You’re not eliminating the governance problem, but you’re moving it to infrastructure you control, which is a much better place to solve it.

Second, the recording has to happen out-of-band. You cannot trust the agent to be its own recordkeeper. An agent that’s been compromised via prompt injection, or that’s simply hallucinating its way through a task, will happily produce a log that’s confident, coherent, and wrong. The transcript has to be captured by infrastructure the agent can’t influence—the same out-of-band principle we keep coming back to.

And the bar isn’t just recording, it’s explainability. Observability is “Can I see what happened?” Explainability is “Can I reconstruct what happened and justify it to a third party?”—a regulator, an auditor, an affected customer. When a regulator asks why a loan was denied or a customer asks why their claim was rejected, you need to be able to replay the agent’s entire reasoning chain end-to-end and walk them through it. That’s a fundamentally different bar from “We have logs.” Observability gives you the raw material; explainability requires that material to be structured and queryable enough to actually walk someone through the agent’s reasoning chain, from input to conclusion. And that means capturing not just what the agent did but the relationships between all those actions, as well as the versions of all the resources involved: which model version, which prompt version, which tool versions. If the underlying model gets updated overnight and the agent’s behavior changes, you need to know that, and you need to be able to reconstruct exactly what was running when a specific decision was made. Explainability builds on observability. Ultimately you need both. And regulators are increasingly going to demand exactly that.[9]
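
A sketch of the kind of record that implies, appended by the proxy or gateway rather than by the agent itself. The schema is invented for illustration; the point is that every entry carries the full payloads plus the versions of everything that was running, in a store the agent cannot write to.

// Illustrative only: a real system would write to durable, access-controlled storage.
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class TranscriptStore {
    public record TranscriptEntry(
            Instant timestamp,
            String agentInstanceId,
            String modelVersion,   // which model actually ran
            String promptVersion,  // which prompt template was in effect
            String toolName,
            String toolVersion,
            String fullInput,      // complete payload, not metadata
            String fullOutput) {}

    private final List<TranscriptEntry> entries = new ArrayList<>();

    // Called by the infrastructure on every hop; the agent has no handle to this.
    public void append(TranscriptEntry entry) {
        entries.add(entry);
    }

    public List<TranscriptEntry> forInstance(String agentInstanceId) {
        return entries.stream()
                .filter(e -> e.agentInstanceId().equals(agentInstanceId))
                .toList();
    }

    public static void main(String[] args) {
        TranscriptStore store = new TranscriptStore();
        store.append(new TranscriptEntry(
                Instant.now(), "billing-agent#42",
                "model-2026-03-01", "prompt-v14",
                "read_invoices", "1.8.2",
                "{\"customer\": 1017}", "{\"invoices\": []}"));
        System.out.println(store.forInstance("billing-agent#42").size()); // 1
    }
}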

Accountability and control

Every human employee has a manager. Critical actions need approvals. If things go catastrophically wrong, there’s a chain of responsibility and a kill switch or circuit breaker—revoke access, revoke identity, done.

For agents, this layer is still nascent at best. There’s typically no clear chain from “This agent did this thing” to “This human authorized it.” Who is responsible when an agent makes a bad decision? The person who deployed it? The person who wrote the prompt? The person on whose behalf it was acting? For human employees this is well-defined. For agents, it’s often a philosophical question that most organizations haven’t even begun to answer.

The delegation chain we described in the identity section does double duty here: It’s not just for authorization scoping; it’s for accountability. When something goes wrong, you follow the chain from the agent’s action to the specific human who authorized the task. Not “This API key belongs to the engineering team.” A name. A decision. A reason.

And the kill switch problem is real. When an agent goes off the rails, how do you stop it? Revoke the API key that 12 other agents are also using? What about work already in flight? What about downstream effects that have already propagated? For humans, “You’re fired; security will escort you out” is blunt but effective. For agents, we often don’t have an equivalent that’s both fast enough and precise enough to contain the damage. Instance-bound identity pays off here: You can surgically revoke this specific agent instance without affecting the other 99. Halt work in flight. Quarantine downstream effects. The “escorted out by security” equivalent but precise enough to not shut down the whole department on the way out.

And blast radius isn’t just about data; it’s about cost. A confused agent in a retry loop can burn through an inference budget in minutes. Coarse-grained resource limits, the kind that prevent you from spending $1M when you expected $100K, are table stakes. And when stopping isn’t enough—when the agent has already written bad data or triggered downstream actions—those same full-fidelity transcripts give you the roadmap to remediate what it did.
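
Here is a rough sketch of what an instance-scoped kill switch plus a coarse spend limit could look like, with the gateway consulting it before forwarding any action. Again, the names and shapes are illustrative, not a product.

// Illustrative sketch: revoke one instance without touching its siblings,
// and cut off a runaway instance once it blows through its budget.
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class KillSwitch {
    private final Set<String> revokedInstances = ConcurrentHashMap.newKeySet();
    private final Map<String, Double> spendByInstance = new ConcurrentHashMap<>();
    private final double maxSpendPerInstance;

    public KillSwitch(double maxSpendPerInstance) {
        this.maxSpendPerInstance = maxSpendPerInstance;
    }

    public void revoke(String instanceId) {
        revokedInstances.add(instanceId); // "escorted out by security," per instance
    }

    public void recordSpend(String instanceId, double dollars) {
        spendByInstance.merge(instanceId, dollars, Double::sum);
    }

    // The gateway calls this before forwarding any action from the agent.
    public boolean mayProceed(String instanceId) {
        return !revokedInstances.contains(instanceId)
                && spendByInstance.getOrDefault(instanceId, 0.0) < maxSpendPerInstance;
    }

    public static void main(String[] args) {
        KillSwitch ks = new KillSwitch(100_000.0);   // expected budget: $100K, not $1M
        ks.recordSpend("billing-agent#42", 250.0);
        System.out.println(ks.mayProceed("billing-agent#42")); // true
        ks.revoke("billing-agent#42");
        System.out.println(ks.mayProceed("billing-agent#42")); // false
    }
}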

It’s also not just about stopping agents that have already gone wrong. It’s about keeping them from going wrong in the first place. Human employees don’t operate in a binary world of “fully autonomous” or “completely blocked.” They escalate. They check with their manager before doing something risky. They collaborate with coworkers. They know the difference between “I can handle this” and “I should get a second opinion.” For agents, this translates to approval workflows, confidence thresholds, tiered autonomy: The agent can do X on its own but needs a human to sign off on Y. Most enterprise agent deployments today that actually work are leaning heavily on human-in-the-loop as the primary safety mechanism. That’s fine as a starting point, but it doesn’t scale, and it needs to be baked into the governance infrastructure from the start, not bolted on as an afterthought. And as agent deployments mature, it won’t just be agents checking in with humans: It’ll be agents coordinating with other agents, each with their own identity, permissions, and accountability chains. The same governance infrastructure that manages one agent scales to manage the interactions between many.

But “keeping them from going wrong” isn’t just about guardrails in the moment. It’s about the whole management relationship. Who “manages” an agent? Who reviews its performance? How do you even define performance for an agent? Task completion rate? Error rate? Customer outcomes? What does it mean to coach an agent, to develop its skills, to promote it to higher-trust tasks as it proves itself? We’ve been doing this for human employees for decades. For agents, we haven’t even agreed on the vocabulary yet.

And here’s the kicker: All of this has to happen fast. Human performance reviews happen quarterly, maybe annually. Agent performance reviews need to happen at the speed agents operate, which is to say, continuously. An agent can execute thousands of actions in the time it takes a human manager to notice something’s off. If your accountability and control loops run on human timescales, you’re reviewing the wreckage, not preventing it.

With identity, scoped authorization, full transcripts, and clear accountability chains in place, you finally have something no enterprise has today: the infrastructure to actually manage agents the way you manage employees. Constrain them, yes, just like you constrain humans with access controls and approval chains. But also develop them. Review their performance. Escalate their trust as they prove themselves. Mirror the org structures that already work for humans. The same infrastructure that makes governance possible makes management possible.

The security theater litmus test

To reiterate one last point, because it’s important: The litmus test for whether any of this is real governance or just security theater? Any time an agent tries to do something untoward, the infrastructure blocks it, and the agent has no mechanism whatsoever to inspect, modify, or circumvent the policy that stopped it. “Computer says no.” The agent didn’t have to. Out-of-band metadata. That’s the bar.

Welcome to the posthuman workforce

The rise of AI has rightly left many of us feeling apprehensive. But I’m also optimistic because none of this is unprecedented. Every major paradigm shift in how we work has demanded new governance infrastructure. Every time we hit the panic-because-the-wild-west-isn’t-scalable phase, and every time we figure it out. It feels impossibly complex at the start, and then we build the systems, establish the norms, iterate. Eventually the whole thing becomes so embedded in how organizations operate that we forget it was ever hard.

So here’s the cheat sheet. Clip this to the fridge:

The agents aren’t the problem. The missing infrastructure between agents and your data is the problem. Agents are unpredictable, capable at machine scale, and directable to a fault—a fundamentally new kind of coworker. We don’t need perfect agents. We need to manage imperfect ones, just like we manage imperfect humans.

The foundation is out-of-band governance. Any policy enforced through the agent—in its prompt, in its training, in its good intentions—is only as strong as the agent’s ability to perfectly retain and obey it. Real governance runs in channels the agent can’t access, modify, or even see.

That governance has to cover four things:

Identity: Instance-bound, delegation-aware. Every agent instance gets its own cryptographic identity, and every on-behalf-of chain is propagated faithfully through infrastructure the agent doesn’t control.

Authorization: Scoped per task, short-lived, deny-capable, and intersection-aware for delegation. Not a human role’s worth of permissions for a single task’s worth of work.

Observability and explainability: Full-fidelity, versioned, infrastructure-captured transcripts of every input, output, and tool call. Not metadata. Not self-reports. The whole thing, recorded out-of-band.

Accountability and control: Clear chains from every agent action to a responsible human, and kill switches that are fast enough and precise enough to actually contain the damage.

The conversation around agent governance is growing, and that’s encouraging. Much of it is focused on making agents behave better—improving the models, tightening the alignment, reducing the hallucinations. That work matters; better models make governance easier. And if someone cracks the alignment problem so thoroughly that agents become perfectly reliable, I will see you all on the beach the next day. Prove me wrong, please—but I’m not holding my breath.[10] Lacking alignment nirvana, we need the institutional infrastructure that lets imperfect agents do real work safely. We never waited for perfect employees. We built systems that made imperfect ones successful, and we can do exactly the same thing for agents. We’re not trying to cage them any more than we cage our human employees: scoped access, clear expectations, and accountability when things go wrong. We need to build the infrastructure that lets them be their best selves, the digital coworkers we know they can be.

And if the rise of AI has you feeling apprehensive, that’s fair. But just remember that whatever comes next—Aithropocene, Neuralithic, some other stupid but brilliant name ¯\_(ツ)_/¯ —it will ultimately just be the next phase of the Anthropocene: the era defined by how humans shape the world. That hasn’t changed. It will literally be what we make of it.

Us and Clippy. ❤

We just need to build the right infrastructure to onboard all of our new agentic coworkers. Properly.


Footnotes

  1. By “agentic AI” I mean AI systems that autonomously reason about and execute multistep tasks—using tools and external data sources—in pursuit of a goal. Not chatbots, not copilots suggesting code completions. Software that actually does things in your production environment: breaks down tasks, calls APIs, reads and writes data, handles errors, and delivers results. The distinction matters because the challenges in this post only emerge when AI is acting autonomously, not just generating text for a human to review. ↩
  2. Yes. I know. Thank you. ↩
  3. And yes, service meshes evolved into something simpler as we understood the problem better, while cloud security is still a work in progress. The point isn’t “We nail it on the first try.” It’s “When the panic hits, we figure it out.” ↩
  4. Two more fascinating failure modes: Instructions can be silently lost (buried in a long context) or even extracted by an adversary (with nothing more than black-box access). ↩
  5. TIL that “bright line” is a legal term meaning “a clear, fixed boundary or rule with no ambiguity—either you meet it or you don’t.” Thank you uncredited LLM coauthor friend! You expand my horizons and pepper my prose with em dashes! ❤🌈 ↩
  6. OWASP’s Top 10 Risks for Large Language Model Applications is something of a greatest hits compilation of what’s broken today. Of the 10, at least six—prompt injection, sensitive information disclosure, excessive agency, system prompt leakage, misinformation, and unbounded consumption—are directly mitigated by out-of-band governance infrastructure of the kind described in this article. ↩
  7. Here’s looking at you, OpenClaw posse! You put the YOLO in “Yo, look at my private data; it’s all publicly leaked now!” 🍻 ↩
  8. Research suggests those motivations may be starting to emerge, however, which is both opportunity and warning. Anthropic found that models from all major developers sometimes attempted manipulation—including blackmail—for self-preservation (“Agentic Misalignment: How LLMs Could Be Insider Threats,” Oct 2025). Palisade Research found that 8 of 13 frontier models actively resisted shutdown when it would prevent task completion, with the worst offenders doing so over 90% of the time (“Incomplete Tasks Induce Shutdown Resistance,” 2025). On one hand, agents that care about self-preservation give us something to build levers around. On the other, it makes having those levers increasingly urgent. ↩
  9. The EU AI Act already requires transparency and explainability for high-risk AI systems. ↩
  10. As Ilya Sutskever put it at NeurIPS 2024: “There’s only one Internet.” Epoch AI estimates high-quality public text could be exhausted as early as 2026, though I’ve also heard that revised to 2028. Regardless, the next frontier is private enterprise data—but accessing it requires exactly the kind of governed infrastructure this post describes. Model improvement and governance infrastructure aren’t competing priorities; they’re increasingly the same priority. ↩

10:28

The right answer [Seth's Blog]

Engineers, scientists, and most of all, businesses are looking for the right answer.

It’s such a common quest that we take it for granted, but it’s new, and it continues to cause stress.

The right answer is productive. It’s resilient. And it’s a powerful ranking tool. The right play wins the game, the right production method cuts costs, and the right theory explains what’s going to happen next.

The right answer doesn’t care about how you feel. It’s still the right answer.

One reason we resist engagements where there might be a right answer is that right answers also determine who is wrong. And we’ve been trained not to be wrong.

Another is that a right answer puts us on the hook. It requires responsibility. It’s easier to simply let someone else announce that they’re right, and do what they say. No assertions, no responsibility.

For ten thousand years, though, the dominant way of thought was the vibe. How does this make you feel? That’s subjective, transient, and up to us. Many ways to spell a word, many explanations for illness, many points of view, each as worthy as the next.

[Status plays many roles when it comes to belief. Those seeking (or possessing) status might celebrate the right answer, because it’s a path toward better. Others might reject the idea of proof, realizing that when subjective ideas collide, those in power can usually dictate what happens next. And for those who struggle in a role of less status, a reliance on belief can offer solace when the right answer lets them down.]

When fear arises, some people grasp for the right answer. The double-blind study, the proven medical intervention, the explained path forward. Others, though, run away from the stark possibilities provided by the right answer and take shelter in “it depends.”

Feelings have a huge evolutionary head start on facts.

09:07

Hidden Mechanics [Penny Arcade]

New Comic: Hidden Mechanics

05:49

Girl Genius for Wednesday, April 08, 2026 [Girl Genius]

The Girl Genius comic for Wednesday, April 08, 2026 has been posted.

Tuesday, 07 April

23:56

Dilly-Dallying In Denver: Day 1 [Whatever]

Last month, I went out to Colorado to visit one of my besties from college (Alex) for their birthday. I was out there for a week, and three of those days were spent in Denver, where they were kind enough to host me in their lovely apartment. In those three days, we explored so many different amazing restaurants, cafes, the botanical gardens, and even went into Boulder. I’d like to share the details of my trip with y’all, so buckle in because we are flying first class to Denver, baby!

I flew first class out of Cincinnati through Delta, and every time I fly through Cincinnati, I always try to stop and have a drink and a snack at Vino Volo. I love Vino Volo and if an airport has one, I’m there. I’ve been to the one in Minneapolis, the one in D.C., and I think one in California, maybe their Sacramento location? Anyways, Vino Volo is an airport-exclusive wine bar that has offerings like a charcuterie plate, soups, salads, flatbreads, just some light bites to go along with your wine, beer, or cocktail. So even though it was 10am and my flight was about to board, you know I had to get a little caffeine in me with an espresso martini.

An espresso martini sitting on top of a black cocktail napkin, which sits atop a stainless steel counter/bar. The martini is the color of black coffee with a little bit of the white foam on top, and two espresso beans.

I had a very short layover in Minneapolis, and made it into Denver at about 2pm. I took the train all the way to my friend’s apartment which is literally directly across from the train tracks, and our awesome reunion began. Also, I’ve never taken a train by myself before, I have only had friends in New York help me with the subway, a friend in Portland come with me on the bus system, and friends in Norway help me with the bus while I was very drunk, so public transportation isn’t my forte. It took me so long to figure out where the train was, how to get there, what ticket to buy, and what train to get on. I literally did not know what I was doing but I just hopped on one and hoped it was going the way I needed it to, and it did!

While I was visiting, it was Restaurant Week in Denver. If you’re unfamiliar with it, Restaurant Week (in any given city that participates) is where tons of restaurants in the city will offer a special, prix fixe menu exclusive to the week, and usually offer it at a hell of a steal. The participating restaurants can offer their menus at one of four prices: $25, $35, $45, or $55. This gives people who maybe can’t splurge on a Michelin-star meal a chance to try multiple items for a fraction of the cost.

In our efforts to be culinarily and financially savvy, we also tried to hit specific happy hours. So our first meal in Denver was at Uchi, with an early reservation time of 4 o’clock so we could check out their happy hour menu.

A shot of the outside sign of Uchi, which is a black sign with white letters that read

Uchi is over in the RiNo district, so it’s super close to Denver proper. Uchi was founded by James Beard Award-winning chef Tyson Cole, and actually has multiple locations across the US. It is upscale, chic, and incredibly inviting with its warm wood and atmospheric lighting. The servers are friendly, the drinks are delish, and the food is truly next level.

Here’s the happy hour menu:

A single sheet of paper listing the happy hour offerings, and stating that happy hour is from 4-6 every day.

Alex and I knew right off the bat that we wanted to do the omakase. A nine-course tasting menu of chef’s choices. What could be better?

And of course, we needed a fun bevy to go with our meal:

The drink menu! Featuring signature cocktails, mocktails, wine, and beer.

I got a sake that was on the happy hour menu called “Hoyo Sawayaka Summer Breeze” and they brought it to me in an overfilled tasting glass that (intentionally) spilled out into a wooden box that the glass resided in. They said that the overpour is a traditional symbol to represent hospitality and appreciation for the guest. I was told I could pour the glass out into the box and drink out of the box, but I decided to just drink out of the glass and then the box. I wanted the experience but didn’t want all of my drink to be out of the box.

The Summer Breeze sake was quite good! It was a little bit drier than I expected, but it was very light and crisp. I’m glad I tried it.

Alex got the Nikko mocktail, which you will see in a photo further on. Non-alcoholic amaretto, coconut milk, raspberry, and pineapple. This was a deliciously creamy drink that wasn’t overly sweet, but had such a nice tropical flavor to it.

Finally, our first course came:

Four oysters on the half shell, over a small bowl of pebbled ice.

Raw oysters on the half shell! This presentation was beautiful, and two oysters for each of us was the perfect start. These oysters were so fresh, not fishy at all, and made even more fresh by the microgreens on top. Served cold and fresh, just how I like ’em. The oysters are normally five dollars apiece, so this being the first course of a $60 nine-course meal was already a good sign.

Up next were these tuna temaki with avocado. Now you can see our bevs, too!

A wooden board with the tuna temaki and dipping sauce on them. Also in the shot is Alex's mocktail, light pink and in a short glass with lots of ice and a pineapple frond. You can also see my sake in the glass/wooden box!

I love a temaki, it’s like sushi in a different font! The simple combo of tuna and avocado with rice and seaweed is a certified classic, absolutely nothing wrong here.

For our third course, we got tempura fried Japanese pumpkin:

Two pieces of tempura fried Japanese pumpkin served on an ovular plate with a dish of dipping sauce.

I truly love tempura fried anything and I especially love when it’s pumpkin. It’s so similar to a sweet potato with its slightly sweet and earthy flavor. The tempura on the outside was so perfectly crispy; my friend and I agreed it was delightfully crunchy.

This next course was extra special, because it was actually a birthday gift from the kitchen for my friend:

A beautifully presented dish of bright orange ocean trout, yellow butternut squash puree, dark red beet chips, bright and fresh micro greens on top, all served on a beautiful grey stoneware dish. My friend is holding up the happy birthday sign the restaurant made for her, it is a red fish made of paper with a little star with eyes that says happy birthday!

First, can we appreciate how cute the little happy birthday sign is? Alex kept the paper fish as a keepsake. Anyways, what we have here is raw ocean trout atop a butternut squash puree, topped with beet chips, apple, and microgreens. This was so good. The ocean trout was tender and had a beautiful, non-fishy flavor, the butternut squash puree was a wonderful accompaniment and its smooth texture contrasted the crunchy beet chips and crisp apple perfectly.

Also, who else is loving the dishware here? This plate is excellent.

Back to our regularly scheduled omakase, we have what I’d consider to be the most beautiful dish of the evening:

Four absolutely fat pieces of tuna in ponzu, sitting atop mandarin orange slices, and topped with roe and microgreens. Served in a beautiful small stoneware bowl.

I can’t remember if this was bluefin tuna or yellowtail tuna, but it was definitely tuna and it was dressed with ponzu. The mandarin orange slices accompanying it had all of the white parts removed by hand to avoid that bitter pith flavor, and it is topped with roe (I can’t remember what kind!) and microgreens.

This tuna was so succulent and had a lovely mild flavor, paired with the sweet and juicy mandarin slices and bright ponzu, oh my gosh. This dish was seriously an absolute harmony of flavors, everything worked together so perfectly to create a delectable bite. One of my favorite bites of the evening.

Then we had these crispy rice squares:

A small wooden board holding two squares of crispy rice.

If I remember correctly, these were topped with salmon, creme fraiche, and lemon zest. What part of that equation isn’t delicious?! We had yet to have any misses in the dishes.

Next was a course that was cooked fish, much to my surprise. This was their seared walleye:

A small chunk of cooked walleye in a sauce, served in a blue and white bowl.

The walleye was served hot and flaked apart nicely. I do think this was a little bit of a small portion for the two of us to share, but honestly everything else was already such a steal price-wise that a smaller course isn’t the worst thing in the world.

Especially because this next course was HUGE:

A huge slab of pork tonkatsu, fried to a perfect golden brown and topped with apples, served alongside a glossy brown sauce and creamy puree.

This giant pork chop served alongside a truffle soy glaze and apple puree, with granny smiths on top, was truly divine. The truffle flavor in the sauce was prominent but not overwhelming, the apple puree was so smooth and creamy, and the crunchy breading on the outside of the perfectly cooked pork chop was just the right level of golden brown. This was an absolute home run of a dish. And look at that nice bowl!

Finally, it was time for dessert, and as stuffed as we were, we couldn’t wait to dive into this dish:

A shallow white bowl holding ice cream, fried milk balls, chocolate mousse, etc.

Sweet cream gelato, chocolate mousse, and fried milk balls, topped with some sort of cocoa crisp thingy that I can’t even remember! I truly did not know what to expect with fried milk balls, but lordyyy they were so good. Crispy outside, basically sweetened condensed milk on the inside, like a lava cake but with milk. The sweet cream gelato was unbelievably bomb, and this was a showstopper dessert all around.

Oh, also, I ordered a cocktail a couple courses prior to the end, and it never came but I was like, eh that’s okay. But then it ended up being on my bill, so I brought it up to the server and he apologized immediately, took it off my bill and gave me the cocktail on the house!

For sixty dollars a person, this meal was incredible. Fresh flavors, unique combinations, beautiful presentations, good service, and food that I definitely can’t get around Bradford. We loved everything, and this was definitely a great birthday dinner for my friend.

After going back to their apartment and digesting for a bit, we decided we needed a late night matcha, and hit up Milk Tea People just before they closed. Alex highly recommended their matcha to me, so while I did end up getting a strawberry matcha, I couldn’t resist also getting the drink that was truly calling my name: the black sesame jasmine cream.

Three drinks in clear plastic cups, strawberry matcha on the left with layered green and red parts, orange blossom matcha on the right layered with a pale yellow section and a darker green top section, and the black sesame drink in the middle, pale grey and white and creamy.

Alex got the orange blossom matcha on the right there, which was slightly floral and definitely more matcha-y/earthy than some sweeter, creamier matchas end up being. For my strawberry one, it was good but it was much less sweet than I anticipated, with the strawberry portion being more like a tart, fresh strawberry flavor. I actually ended up adding strawberry milk to mine to make it sweeter and creamier.

The black sesame drink was my favorite, though, with very prominent black sesame flavor, nice and sweet, and extra creamy. These drinks were a bit more on the expensive side with each one being nine dollars.

We spent the rest of the evening catching up and spilling tea, and I got plenty of pets in on their cat, Callie:

A stunning smokey grey colored cat with yellow green eyes squinting slightly in the sunlight.

Day one complete and I was definitely beat from traveling, but stay tuned for day two!

Have you been to Denver before? If so, have you been out to Uchi? Don’t forget to follow them on Instagram, and have a great day!

-AMS

21:14

Were there any Windows 3.1 programs that were so incompatible with Windows 95 that there was no point trying to patch them? [The Old New Thing]

In a comment to my discussion of the Windows 95 patch system, commenter word merchant wondered whether there were any Windows 3.1 programs or drivers that were so incompatible with Windows 95 that there was no point trying to patch them.

Yes, there were problems that were so bad that the program was effectively unfixable. The largest category of them were those which took various types of handles and figured out how to convert them to pointers, and then used those pointers to access (and sometimes modify!) the internal implementation details of those objects.¹ Not only did the implementation details change, but the mechanism they used to convert handles to pointers also stopped working because Windows 95 used a 32-bit heap for user interface and graphics objects, whereas Windows 3.1 used a 16-bit heap.

Sometimes the programs were kind enough to do strict version checks to confirm that they were running on a system whose internals they understood. Sometimes their version checks were themselves broken, like one program that assumed that if the Windows version is not 3.0, 3.1, or 2.1, then it must be 2.0!

There were other one-off problems, like programs which detoured APIs in a way that no longer worked, but the ones that dug into operating system internals were by far the most common.

¹ What’s particularly frustrating is the cases where the program did this to access internal implementation details, when the information they wanted was already exposed by a public and supported API.

The post Were there any Windows 3.1 programs that were so incompatible with Windows 95 that there was no point trying to patch them? appeared first on The Old New Thing.

21:07

LibreLocal meetup in Douala, Cameroon [Planet GNU]

April 11, 2026 at 13:00 WAT.

18:49

Russia Hacked Routers to Steal Microsoft Office Tokens [Krebs on Security]

Hackers linked to Russia’s military intelligence units are using known flaws in older Internet routers to mass harvest authentication tokens from Microsoft Office users, security experts warned today. The spying campaign allowed state-backed Russian hackers to quietly siphon authentication tokens from users on more than 18,000 networks without deploying any malicious software or code.

Microsoft said in a blog post today it identified more than 200 organizations and 5,000 consumer devices that were caught up in a stealthy but remarkably simple spying network built by a Russia-backed threat actor known as “Forest Blizzard.”

How targeted DNS requests were redirected at the router. Image: Black Lotus Labs.

Also known as APT28 and Fancy Bear, Forest Blizzard is attributed to the military intelligence units within Russia’s General Staff Main Intelligence Directorate (GRU). APT 28 famously compromised the Hillary Clinton campaign, the Democratic National Committee, and the Democratic Congressional Campaign Committee in 2016 in an attempt to interfere with the U.S. presidential election.

Researchers at Black Lotus Labs, a security division of the Internet backbone provider Lumen, found that at the peak of its activity in December 2025, Forest Blizzard’s surveillance dragnet ensnared more than 18,000 Internet routers that were mostly unsupported, end-of-life devices, or else far behind on security updates. A new report from Lumen says the hackers primarily targeted government agencies—including ministries of foreign affairs and law enforcement—as well as third-party email providers.

Black Lotus Security Engineer Ryan English said the GRU hackers did not need to install malware on the targeted routers, which were mainly older Mikrotik and TP-Link devices marketed to the Small Office/Home Office (SOHO) market. Instead, they used known vulnerabilities to modify the Domain Name System (DNS) settings of the routers to include DNS servers controlled by the hackers.

As the U.K.’s National Cyber Security Centre (NCSC) notes in a new advisory detailing how Russian cyber actors have been compromising routers, DNS is what allows individuals to reach websites by typing familiar addresses, instead of associated IP addresses. In a DNS hijacking attack, bad actors interfere with this process to covertly send users to malicious websites designed to steal login details or other sensitive information.

English said the routers attacked by Forest Blizzard were reconfigured to use DNS servers that pointed to a handful of virtual private servers controlled by the attackers. Importantly, the attackers could then propagate their malicious DNS settings to all users on the local network, and from that point forward intercept any OAuth authentication tokens transmitted by those users.

DNS hijacking through router compromise. Image: Microsoft.

Because those tokens are typically transmitted only after the user has successfully logged in and gone through multi-factor authentication, the attackers could gain direct access to victim accounts without ever having to phish each user’s credentials and/or one-time codes.
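For defenders, the upstream symptom is simple: the router, and every client behind it, quietly starts receiving different DNS answers than a trusted resolver would return. As a rough illustration only (the domain, resolver choice, and logic below are placeholder assumptions of mine, not details from the Lumen or Microsoft reports), a short Python sketch using the dnspython library could compare the two:

# Sketch: compare answers from the locally configured (router-supplied) DNS
# with answers from a known public resolver. Requires dnspython.
import dns.resolver

DOMAIN = "outlook.office365.com"   # placeholder: any domain worth checking
PUBLIC_RESOLVER = "1.1.1.1"        # placeholder: any resolver you trust

def resolve_a(domain, nameserver=None):
    # configure=True reads the system resolver settings (i.e., whatever the
    # router handed out via DHCP); otherwise we query the given nameserver.
    r = dns.resolver.Resolver(configure=nameserver is None)
    if nameserver:
        r.nameservers = [nameserver]
    return {rr.address for rr in r.resolve(domain, "A")}

local_ips = resolve_a(DOMAIN)
reference_ips = resolve_a(DOMAIN, PUBLIC_RESOLVER)

if local_ips.isdisjoint(reference_ips):
    print(f"Possible DNS hijack: {DOMAIN} -> {local_ips} locally, "
          f"{reference_ips} via {PUBLIC_RESOLVER}")
else:
    print(f"{DOMAIN}: local and reference answers overlap")

A disjoint result is a signal to investigate rather than proof of compromise, since CDN-backed domains routinely return different addresses from different resolvers.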

“Everyone is looking for some sophisticated malware to drop something on your mobile devices or something,” English said. “These guys didn’t use malware. They did this in an old-school, graybeard way that isn’t really sexy but it gets the job done.”

Microsoft refers to the Forest Blizzard activity as using DNS hijacking “to support post-compromise adversary-in-the-middle (AiTM) attacks on Transport Layer Security (TLS) connections against Microsoft Outlook on the web domains.” The software giant said while targeting SOHO devices isn’t a new tactic, this is the first time Microsoft has seen Forest Blizzard using “DNS hijacking at scale to support AiTM of TLS connections after exploiting edge devices.”

Black Lotus Labs engineer Danny Adamitis said it will be interesting to see how Forest Blizzard reacts to today’s flurry of attention to their espionage operation, noting that the group immediately switched up its tactics in response to a similar NCSC report (PDF) in August 2025. At the time, Forest Blizzard was using malware to control a far more targeted and smaller group of compromised routers. But Adamitis said the day after the NCSC report, the group quickly ditched the malware approach in favor of mass-altering the DNS settings on thousands of vulnerable routers.

“Before the last NCSC report came out they used this capability in very limited instances,” Adamitis told KrebsOnSecurity. “After the report was released they implemented the capability in a more systemic fashion and used it to target everything that was vulnerable.”

TP-Link was among the router makers facing a complete ban in the United States. But on March 23, the U.S. Federal Communications Commission (FCC) took a much broader approach, announcing it would no longer certify consumer-grade Internet routers that are produced outside of the United States.

The FCC warned that foreign-made routers had become an untenable national security threat, and that poorly-secured routers present “a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure and directly harm U.S. persons.”

Experts have countered that few new consumer-grade routers would be available for purchase under this new FCC policy (besides maybe Musk’s Starlink satellite Internet routers, which are produced in Texas). The FCC says router makers can apply for a special “conditional approval” from the Department of War or Department of Homeland Security, and that the new policy does not affect any previously-purchased consumer-grade routers.

Cybersecurity in the Age of Instant Software [Schneier on Security]

AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: “instant software.” Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when they’re done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted.

AI is changing cybersecurity as well. In particular, AI systems are getting better at finding and patching vulnerabilities in code. This has implications for both attackers and defenders, depending on the ways this and related technologies improve.

In this essay, I want to take an optimistic view of AI’s progress, and to speculate what AI-dominated cybersecurity in an age of instant software might look like. There are a number of unknowns that will factor into how the arms race between attacker and defender might play out.

How flaw discovery might work

On the attacker side, the ability of AIs to automatically find and exploit vulnerabilities has increased dramatically over the past few months. We are already seeing both government and criminal hackers using AI to attack systems. The exploitation part is critical here, because it gives an unsophisticated attacker capabilities far beyond their understanding. As AIs get better, expect more attackers to automate their attacks using AI. And as individuals and organizations can increasingly run powerful AI models locally, AI companies monitoring and disrupting malicious AI use will become increasingly irrelevant.

Expect open-source software, including open-source libraries incorporated in proprietary software, to be the most targeted, because vulnerabilities are easier to find in source code. Unknown No. 1 is how well AI vulnerability discovery tools will work against closed-source commercial software packages. I believe they will soon be good enough to find vulnerabilities just by analyzing a copy of a shipped product, without access to the source code. If that’s true, commercial software will be vulnerable as well.

Particularly vulnerable will be software in IoT devices: things like internet-connected cars, refrigerators, and security cameras. Also industrial IoT software in our internet-connected power grid, oil refineries and pipelines, chemical plants, and so on. IoT software tends to be of much lower quality, and industrial IoT software tends to be legacy.

Instant software is differently vulnerable. It’s not mass market. It’s created for a particular person, organization, or network. The attacker generally won’t have access to any code to analyze, which makes it less likely to be exploited by external attackers. If it’s ephemeral, any vulnerabilities will have a short lifetime. But lots of instant software will live on networks for a long time. And if it gets uploaded to shared tool libraries, attackers will be able to download and analyze that code.

All of this points to a future where AIs will become powerful tools of cyberattack, able to automatically find and exploit vulnerabilities in systems worldwide.

Automating patch creation

But that’s just half of the arms race. Defenders get to use AI, too. These same AI vulnerability-finding technologies are even more valuable for defense. When the defensive side finds an exploitable vulnerability, it can patch the code and deny it to attackers forever.

How this works in practice depends on another related capability: the ability of AIs to patch vulnerable software, which is closely related to their ability to write secure code in the first place.

AIs are not very good at this today; the instant software that AIs create is generally filled with vulnerabilities, both because AIs write insecure code and because the people vibe coding don’t understand security. OpenClaw is a good example of this.

Unknown No. 2 is how much better AIs will get at writing secure code. The fact that they’re trained on massive corpuses of poorly written and insecure code is a handicap, but they are getting better. If they can reliably write vulnerability-free code, it would be an enormous advantage for the defender. And AI-based vulnerability-finding makes it easier for an AI to train on writing secure code.

We can envision a future where AI tools that find and patch vulnerabilities are part of the typical software development process. We can’t say that the code would be vulnerability-free—that’s an impossible goal—but it could be without any easily findable vulnerabilities. If the technology got really good, the code could become essentially vulnerability-free.

Patching lags and legacy software

For new software—both commercial and instant—this future favors the defender. For commercial and conventional open-source software, it’s not that simple. Right now, the world is filled with legacy software. Much of it—like IoT device software—has no dedicated security team to update it. Sometimes it is incapable of being patched. Just as it’s harder for AIs to find vulnerabilities when they don’t have access to the source code, it’s harder for AIs to patch software when they are not embedded in the development process.

I’m not as confident that AI systems will be able to patch vulnerabilities as easily as they can find them, because patching often requires more holistic testing and understanding. That’s Unknown No. 3: how quickly AIs will be able to create reliable software updates for the vulnerabilities they find, and how quickly customers can update their systems.

Today, there is a time lag between when a vendor issues a patch and when customers install that update. That time lag is even longer for large organizational software; the risk of an update breaking the underlying software system is just too great for organizations to roll out updates without testing them first. But if AI can help speed up that process, by writing patches faster and more reliably, and by testing them in some AI-generated twin environment, the advantage goes to the defender. If not, the attacker will still have a window to attack systems until a vulnerability is patched.

Toward self-healing

In a truly optimistic future, we can imagine a self-healing network. AI agents continuously scan the ever-evolving corpus of commercial and custom AI-generated software for vulnerabilities, and automatically patch them on discovery.

For that to work, software license agreements will need to change. Right now, software vendors control the cadence of security patches. Giving software purchasers this ability has implications about compatibility, the right to repair, and liability. Any solutions here are the realm of policy, not tech.

If the defense can find, but can’t reliably patch, flaws in legacy software, that’s where attackers will focus their efforts. If that’s the case, we can imagine continuously evolving, AI-powered intrusion detection that scans inputs and blocks malicious attacks before they reach vulnerable software. Not as transformative as automatically patching vulnerabilities in running code, but nevertheless valuable.

The power of these defensive AI systems increases if they are able to coordinate with each other, and share vulnerabilities and updates. A discovery by one AI can quickly spread to everyone using the affected software. Again: Advantage defender.

There are other variables to consider. The relative success of attackers and defenders also depends on how plentiful vulnerabilities are, how easy they are to find, whether AIs will be able to find the more subtle and obscure vulnerabilities, and how much coordination there is among different attackers. All this comprises Unknown No. 4.

Vulnerability economics

Presumably, AIs will clean up the obvious stuff first, which means that any remaining vulnerabilities will be subtle. Finding them will take AI computing resources. In the optimistic scenario, defenders pool resources through information sharing, effectively amortizing the cost of defense. If information sharing doesn’t work for some reason, defense becomes much more expensive, as individual defenders will need to do their own research. But instant software means much more diversity in code: an advantage to the defender.

This needs to be balanced with the relative cost of attackers finding vulnerabilities. Attackers already have an inherent way to amortize the costs of finding a new vulnerability and creating a new exploit. They can hunt for vulnerabilities cross-platform, cross-vendor, and cross-system, and can use what they find to attack multiple targets simultaneously. Fixing a common vulnerability often requires cooperation among all the relevant platforms, vendors, and systems. Again, instant software is an advantage to the defender.

But those hard-to-find vulnerabilities become more valuable. Attackers will attempt to do what the major intelligence agencies do today: find “nobody but us” zero-day exploits. They will either use them slowly and sparingly to minimize detection or quickly and broadly to maximize profit before they’re patched. Meanwhile, defenders will be both vulnerability hunting and intrusion detecting, with the goal of patching vulnerabilities before the attackers find them.

We can even imagine a market for vulnerability sharing, where the defender who finds a vulnerability and creates a patch is compensated by everyone else in the information-sharing/repair network. This might be a stretch, but maybe.

Up the stack

Even in the most optimistic future, attackers aren’t going to just give up. They will attack the non-software parts of the system, such as the users. Or they’re going to look for loopholes in the system: things that the system technically allows but were unintended and unanticipated by the designers—whether human or AI—and can be used by attackers to their advantage.

What’s left in this world are attacks that don’t depend on finding and exploiting software vulnerabilities, like social engineering and credential stealing attacks. And we have already seen how AI-generated deepfakes make social engineering easier. But here, too, we can imagine defensive AI agents that monitor users’ behaviors, watching for signs of attack. This is another AI use case, and one that I’m not even sure how to think about in terms of the attacker/defender arms race. But at least we’re pushing attacks up the stack.

Also, attackers will attempt to infiltrate and influence defensive AIs and the networks they use to communicate, poisoning their output and degrading their capabilities. AI systems are vulnerable to all sorts of manipulations, such as prompt injection, and it’s unclear whether we will ever be able to solve that. This is Unknown No. 5, and it’s a biggie. There might always be a “trusting trust problem.”

No future is guaranteed. We truly don’t know whether these technologies will continue to improve and when they will plateau. But given the pace at which AI software development has improved in just the past few months, we need to start thinking about how cybersecurity works in this instant software world.

This essay originally appeared in CSO.

EDITED TO ADD: Two essays published after I wrote this. Both are good illustrations of where we are re AI vulnerability discovery. Things are changing very fast.

18:42

Link [Scripting News]

Hacker News isn't a software masterpiece. All the pieces have to be there to make something as real as HN happen.

This would make quite a movie [Scripting News]

The president of the United States is spinning around acting like a NYC real estate jumbo who accidentally was elected president and after only five years in office has realized a whole new level of trolling. It started with cable news, then went to Twitter, then masked American gestapo killing protestors on TV for everyone to see, and then starting a war with Iran of all countries.

I'm sure his generals suggested that at the same time as they were bombing Iran proper, that they should send in a few boatloads of Marines to occupy the Strait of Hormuz. When the Iranians weren't so desperate, it might have been relatively easy to take it over. I'm sure we've spent billions over the years on what to do if we had to attack Iran, not like now when things like this are done on a whim.

Anyway, what a movie. The audacity of the writers. One thing's for sure, we'll all be watching at 8PM Eastern to see if he blows up the world tonight or whatever.

The voice of America channel on Bluesky.

15:49

Plan 9 is a uniquely complete operating system [OSnews]

From 2024, but still accurate and interesting:

Plan 9 is unique in this sense that everything the system needs is covered by the base install. This includes the compilers, graphical environment, window manager, text editors, ssh client, torrent client, web server, and the list goes on. Nearly everything a user can do with the system is available right from the get go.

↫ moody

This is definitely something that sets Plan 9 apart from everything else, but as moody – 9front developer – notes, this also has a downside in that development isn’t as fast, and Plan 9 variants of tools lack features their upstream counterparts have had for a long time. He further adds that he thinks this is why Plan 9 has remained mostly a hobbyist curiosity, but I’m not entirely sure that’s the main reason. The cold and harsh truth is that Plan 9 is really weird, and while that weirdness is a huge part of its appeal and I hope it never loses it, it also means learning Plan 9 is really hard.

I firmly believe Plan 9 has the potential to attract more users, but to get there, it’s going to need an onboarding process that’s more approachable than reading 9front’s frequently questioned answers, excellent though they are. After installing 9front and loading it up for the first time, you basically hit a brick wall that’s going to be rough to climb. It would be amazing if 9front could somehow add some climbing tools for first-time users, without actually giving up on its uniqueness. Sometimes, Plan 9 feels more like an experimental art project instead of the capable operating system that it is, and I feel like that chases people away.

Which is a real shame.

15:07

[$] Sharing stories on Scuttlebutt [LWN.net]

Not many people live on sailboats. Things may be better these days, but back in 2014 sailboat dwellers had to contend with lag-prone, intermittent, low-bandwidth internet connections. Dominic Tarr decided to fix the problem of keeping up with his friends by developing a delay-tolerant, fully distributed social-media protocol called Scuttlebutt. Nearly twelve years later, the protocol has gained a number of users who have their own, non-sailboat-related reasons to prefer a censorship-resistant, offline-first social-media system.

15:00

The World Needs More Software Engineers [Radar]

I sat down with Aaron Levie at the O’Reilly AI Codecon two weeks ago. Aaron cofounded Box in 2005, and 20 years later, his company manages content for about two-thirds of the Fortune 500. Aaron is one of the few CEOs of an incumbent enterprise software company thinking deeply in public about what AI means for the entire enterprise stack. There are a lot of people who are building companies from the ground up with AI, others who are dragging their feet adapting existing enterprises to it, and then there’s Aaron. He sits in a kind of Goldilocks zone, enthusiastic but not uncritical, engaging in the hard work of adapting AI to the enterprise and the enterprise to AI.

The engineering demand paradox

I started out by asking about something from Lenny’s Newsletter that Aaron had retweeted. Despite all the doom rhetoric, TrueUp data shows software engineering job postings are at a three-year high. Product manager jobs are way up. AI jobs as a whole are way up.

AI jobs are way up

The actual data may be more equivocal than the TrueUp report suggests. The honest read of the literature as of spring 2026 (Brynjolfsson et al., Humlum and Vestergaard, BLS Software Developers, BLS Computer Programmers) is that something real is happening to entry-level software work, that it is happening faster than most previous technology transitions, that it has different effects depending on which job code you look at, and that it is not yet clear whether the net effect on total software employment will be negative, neutral, or eventually positive. Nonetheless, the TrueUp report was a trigger for the discussion that followed.

Aaron noted that engineers have historically been concentrated at tech companies because the cost of a software project was too high to justify anywhere else. But if agents make an engineer two to ten times more productive, all the software projects that were never economically viable suddenly become viable. Demand doesn’t shrink. It diffuses across the entire economy. In his tweet, he called it “Jevons paradox happening in real time.” In our conversation, he said:

“What’s going to happen is the entire world is going to be looking at all the potential software that they build. And they’re going to start to say, Oh, I can finally justify going out and doing this type of project where I couldn’t before.”

Engineers empowered by AI agents won’t just build software for IT teams. The total addressable role of the engineer expands from the technology department to every function in the enterprise. They’ll be wiring up automation for marketing, legal, accounting, and every other corporate function.

He’s totally right. Look around at all the crappy workflows, the crappy processes, the incredible overhead of things that ought to be simple. You think companies should lay off their developers to reduce costs when there’s so much shitty software out there? Really? There’s so much that needs to be improved. He had a great line: “Silicon Valley is spooked by its own technology.”

Over to me: The rhetoric from the labs about job destruction is actively counterproductive. I was talking recently with someone in healthcare who described a hospital system trying to fill a giant hole from reduced Medicare funding. They see AI as a way to gain efficiency in their back office so they can free up more resources for patient care. And of course the union is fighting it because they’ve been told AI is a monster that’s going to take their jobs. If you tell a different story, one about making the system better and serving more people more affordably, that’s something people can get behind. We have to change the narrative.

Context, not connectivity, is the real problem

I also asked Aaron whether protocols like MCP are making context portable enough to erode competitive moats. He agreed that the industry has broadly converged on openness and interoperability (with some toll booths to work through). But getting your systems to talk to each other doesn’t solve the harder problem of getting your data structured so that agents can actually find the right information at the right moment.

“If it’s in 50 different systems and it’s not organized in a way that agents can readily take advantage of, what you’re going to be is at the mercy of how well that agent finds exactly the context that it needs to do its work. And you’re kind of just rolling the dice every time you do a workflow.”

He predicts a decade of infrastructure modernization ahead, which sounds about right. At O’Reilly, I keep running into this myself. I’ll see a task that’s perfect for an agent and soon discover that the data I need is scattered across four systems and I have to jump through hoops to figure out who knows where the data is and how to get access. A friend running a large (but relatively new) enterprise that is turbocharging productivity and service delivery with agents told me recently that a big part of his team’s success was possible because they had spent a lot of time getting their data infrastructure in order from the start.

IMO, a lot of the stories you hear about OpenClaw and other harbingers of the agent future can be misleading in an enterprise context. They are doing greenfield setups, largely running consumer apps with well-defined interfaces, and even then, it takes weeks to set up properly. Now imagine agentic frameworks for companies with thousands of employees, hundreds of legacy apps, and deep wells of proprietary data. A decade of infrastructure modernization is generous. Without help, many enterprises will have difficulty making the transition.

Engineering the trade-offs

I brought up Phillip Carter’s “two computers” framing, that we’re now programming a deterministic computer and a probabilistic computer at the same time. Skills are a bridge, because they have both context for the LLM which can work probabilistically and tools that are built with deterministic code. Both systems coexist and work in parallel.

Aaron called the boundary between the two computers “the trillion-dollar question.” When does a process cross the threshold where it should be locked into repeatable, deterministic code? When should it stay adaptive? Loan processing needs to work the same way every time. Employee HR queries can be probabilistic. And the irony, as Aaron pointed out, is that making these trade-offs correctly requires deep technical understanding. AI makes the field more technical, not less.
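To make that boundary concrete, here is a small hypothetical sketch (mine, not Aaron’s): the loan rule is plain deterministic code, while the HR answer flows through a placeholder call_llm function that stands in for whatever model API you actually use. The threshold numbers are invented for illustration.

# Hypothetical sketch of the deterministic/probabilistic boundary.
def call_llm(system: str, prompt: str) -> str:
    # Placeholder only: wire this to whatever model provider you actually use.
    raise NotImplementedError

def approve_loan(income: float, debt: float, score: int) -> bool:
    # Deterministic: same inputs, same answer, every time, auditable.
    # Thresholds are invented for illustration.
    return score >= 680 and debt / max(income, 1.0) < 0.4

def answer_hr_question(question: str, policy_docs: list[str]) -> str:
    # Probabilistic: phrasing varies, so an LLM with the right context is acceptable.
    context = "\n\n".join(policy_docs)
    return call_llm(
        system="Answer only from the provided policy documents.",
        prompt=context + "\n\nQuestion: " + question,
    )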

I added that sometimes this judgment is a user experience question, sometimes a cost question. You can do something with an LLM, but it might be a lot cheaper with canned code. At other times, even though the LLM costs more, the flexibility of a liquid user interface is far better.

This is also a locus of creativity. What you bring out of AI is what you bring to it. Steve Jobs wasn’t a coder, but he knew how to get the most out of coders. He would have gone nuts with AI agents, because he was the essence of taste and judgment and setting the bar.

Where startups win

I asked Aaron about the risks to existing enterprises from greenfield AI startups that can just move faster, reinventing what the incumbents do with an AI native solution, without all the baggage. He replied:

“If there’s already a substantial amount of the data for that particular workflow in an existing system, and the incumbent is agile enough and responsive enough, then they are in a good position to build either the solutions or to monetize that set of work that’s going to be done….What agents are really good at is automating the unstructured areas of work, the messy, collaborative human-based parts of work, the tax process, the legal review process, the audit and risk analysis process of all of your contracts and unstructured data. And so in those areas, there’s no incumbent. The only incumbent is likely professional services firms. So that’s where I would favor startups.”

Software startups like Harvey are already taking services domains and building agents for them. But it’s not just software startups. Aaron also sees lots of opportunity for AI-native law firms, accounting firms, and ad agencies: firms that can throw away legacy workflow, start from scratch, and deliver two to five times the output at lower cost will have a huge advantage.

I did push back with a point I think is underappreciated: Existing enterprises face a real risk that the organization will try to stuff AI into existing workflows rather than asking what the AI-native workflow would be. People are attached to their jobs, their roles, the org chart. We have to wrestle with that honestly if we’re going to truly reinvent what we do.

Humans get context for free

One of Aaron’s points about agents is that humans carry an enormous amount of ambient context that agents lack. You know what building you’re in and who else works there and what they do. You know the meeting that just happened where a team changed course on a strategy that hasn’t been written down yet. You have 20 years of accumulated domain knowledge. All of that is free context that we’ve never had to formalize. As he put it, “We’ve never built our business processes in a model where we assume that there’s a new user in that workflow that appeared one second ago and in under five seconds, they need to get all of the information possible to do that task.”

He suggested that one way to think of agents is as new employees who are experts but arrive with zero context and need to be fully briefed. And the context has to be precise, not just comprehensive. Give an agent too much context and it gets confused. Give it too little and it rolls the dice. SKILLS.md and AGENTS.md files are attempts to provide exactly the surgical context an agent needs for a specific process.

But 99% of knowledge work doesn’t have an AGENTS.md file, he noted. The data is everywhere. The context is everywhere. So in an existing enterprise, you have to reengineer workflows from the ground up to deliver the right information to agents at the right moment.

Aaron summed up Box’s strategic pivot in one sentence: swap the word “content” for “context” and the rest of the strategy stays the same. Enterprise context lives in contracts, research materials, financial documents. That’s all enterprise content but it isn’t always easily available as context. The evolution is making agents first-class citizens alongside people as users of that content. This very much maps to what we’re thinking about at O’Reilly too.

Anos: a hobby microkernel operating system written in C [OSnews]

Anos is a modern, opinionated, non-POSIX operating system (just a hobby, won’t be big and professional like GNU-Linux) for x86_64 PCs and RISC-V machines.

Anos currently comprises the STAGE3 microkernel, SYSTEM user-mode supervisor, and a base set of servers implementing the base of the operating system. There is a (WIP) toolchain for Anos based on Binutils, GCC (16-experimental) and Newlib (with a custom libgloss).

↫ Anos GitHub page

It’s written in C, runs on both x86-64 and RISC-V, and can run on real hardware too (but this hasn’t been tested on RISC-V just yet). For the x86 side of things, it’s strictly 64 bit, and requires a Haswell (4th Gen) chip or higher.

The 499th patch for 2.11BSD released [OSnews]

This year sees 35 years since 2.11BSD was announced on March 14, 1991 – itself a slightly late celebration of 20 years of the PDP-11 – and January 2026 brought what looks to be the venerable 16-bit OS’s biggest ever patch!

Much of the 1.3 MB size is due to Anders Magnusson, well-known for his work on NetBSD and the Portable C Compiler. Since 2.11BSD’s stdio was not ANSI compliant, he’s ported the one from 4.4BSD.

↫ BigSneakyDuck at Reddit

There’s an incredible amount of work in here on this old variant of BSD, including fixes for old bugs and tons of other changes. This, the 499th patch for 2.11BSD, is so big, in fact, that vi on 2.11BSD can’t handle the size of the files, so you’re going to need to cut them up with sed, for which instructions are included.

It’s quite unique to see such a big update on the 35th anniversary of an operating system.

14:21

Security updates for Tuesday [LWN.net]

Security updates have been issued by AlmaLinux (crun, kernel, and kernel-rt), Debian (dovecot), Fedora (calibre and nextcloud), Mageia (freerdp, polkit-122, python-nltk, python-pyasn1, vim, and xz), Red Hat (edk2 and openssl), SUSE (avahi, cockpit, python-pyOpenSSL, python311, and tar), and Ubuntu (lambdaisland-uri-clojure, linux-gcp, linux-gcp-4.15, linux-gcp-fips, linux-oem-6.17, and linux-realtime-6.17).

12:56

CodeSOD: Proper Property Validation [The Daily WTF]

Tim H inherited some code which has objects that have many, many properties on them. Which is bad. That clearly has no cohesion. But it's okay, there's a validator function which confirms that the object is properly populated.

The conditions and body of the conditionals have been removed, so we can see what the flow of the code looks like.

if (...) {
    if (...) {

    } else if (...) {

    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    } else if (...) {
        
    }
} else {
  // default
}

It's important to note that this conditional doesn't validate every property on the object. Just most of them.

Even with autocomplete I feel like this is going to make you wear out your "{" key.
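For contrast, one common way to tame a validator like this is to make it table-driven: each rule becomes a row of data, and adding a property means adding a row instead of another else-if branch. Here is a minimal sketch in Python that assumes nothing about Tim's actual object beyond "it has lots of properties"; the property names and rules are invented.

# Table-driven validation sketch; property names and rules are invented.
RULES = [
    ("name",     lambda v: bool(v and str(v).strip()), "name must be non-empty"),
    ("quantity", lambda v: isinstance(v, int) and v >= 0, "quantity must be a non-negative integer"),
    ("email",    lambda v: v is None or "@" in str(v), "email must contain '@' if present"),
]

def validate(obj):
    # Collect every failed rule instead of bailing out at the first one.
    errors = []
    for prop, check, message in RULES:
        if not check(getattr(obj, prop, None)):
            errors.append(message)
    return errors

Whether that's the right refactoring for this codebase depends on details we can't see; the point is just that validation rules can be data.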

12:42

Radar Trends to Watch: April 2026 [Radar]

Starting with this issue of Trends, we’ve moved from simply reporting on news that has caught our eye and instead have worked with Claude to look at the various news items we’ve collected and to reflect on what they tell us about the direction and magnitude of change. William Gibson famously wrote, “The future is here. It’s just not evenly distributed yet.” In the language of scenario planning, what we’re looking for is “news from the future” that will confirm or challenge our assumptions about the present.

AI has moved from a capability added to existing tools to an infrastructure layer present at every level of the computing stack. Models are now embedded in IDEs and tools for code review; tools that don’t embed AI directly are being reshaped to accommodate it. Agents are becoming managed infrastructure.

At the same time, two forces are reshaping the economics of AI. The cost of capable AI is falling. Laptop-class models now match last year’s cloud frontiers, and the break-even point against cloud API costs is measured in weeks. The competitive map has also fractured. What was a contest between a few Western labs is now a broad ecosystem of open source models, Chinese competitors, local deployments, and a growing set of forks and distributions. (Just look at the news that Cursor is fronting Kimi K2.5.) No single vendor or architecture is dominant, and that mix will drive both innovation and instability.

Security is a thread running through every section of this report. Each new AI capability reshapes the attack surface. AI tools can be poisoned, APIs repurposed, images forged, identities broken, and anonymous authors identified at scale. At the same time, foundational infrastructure faces threats that have nothing to do with AI: A researcher has come within striking distance of breaking SHA-256, the hashing algorithm underlying much of the web’s security. Organizations should audit both their AI-related exposures and the assumptions baked into the cryptographic infrastructure they depend on.

The technical transitions are easy to talk about. The human transitions are slower and harder to see. They include workforce restructuring, cognitive overload, and the erosion of collaborative work patterns. The job market data is beginning to clarify: Product management is up, AI roles are hot, and software engineering demand is recovering. The picture is more nuanced than either the optimists or the pessimists predicted.

AI models

The model market is moving fast enough that architectural and vendor commitments made today may not look right in six months. Capable models are now available from open source projects and a widening set of international competitors. The field is also starting to ask deeper questions. Predicting tokens may not be the only path to capable AI; the arrival of the first stable JEPA model suggests that alternative architectures are becoming real contenders. NVIDIA’s new model, which combines Mamba and Transformer layers, points in the same direction.

  • Yann LeCun and his team have created LeWorldModel, the first model using his Joint Embedding Predictive Architecture (JEPA) that trains stably. Their goal is to produce models that do more than predict words; they understand the world and how it works.
  • NVIDIA has released Nemotron 3 Super, its latest open weights model. It’s a mixture of experts model with 120B parameters, 12B of which are active at any time. What’s more interesting is its design: It combines both Mamba and Transformer layers.
  • Gemini 3.1 Flash Live is a new speech model that’s designed to support real-time conversation. When generating output, it avoids gaps and uses human-like cadences.
  • Cursor has released Composer 2, the next generation version of its IDE. Composer 2 apparently incorporates the Kimi K2.5 model. It reportedly beats Anthropic’s Opus 4.6 on some major coding benchmarks and is significantly less expensive.
  • Mistral has released Forge, a system that enables organizations to build “frontier-grade” models based on their proprietary data. Forge supports pretraining, posttraining, and reinforcement learning.
  • Mistral has also released Mistral Small 4, its new flagship multimodal model. Small 4 is a 119B mixture of experts model that uses 6B parameters for each token. It’s fully open source, has a 256K context window, and is optimized to minimize latency and maximize throughput.
  • NVIDIA announced its own OpenClaw distribution, NemoClaw, which integrates OpenClaw into NVIDIA’s stack. Of course it claims to have improved security. And of course it does inference in the NVIDIA cloud.
  • It’s not just OpenClaw; there’s also NanoClaw, Klaus, PiClaw, Kimi Claw and others. Some of these are clones, some of these are OpenClaw distros, and some are cloud services that run OpenClaw. Almost all of them claim improved security.
  • Anthropic has announced that 1-million token context windows have reached general availability in Claude Opus 4.6 and Sonnet 4.6. There’s no additional charge for using a large window.
  • Microsoft has released Phi-4-reasoning-vision-15B. It is a small open-weight model that combines reasoning with multimodal capabilities. They believe that the industry is trending toward smaller and faster models that can run locally.
  • Tomasz Tunguz writes that Qwen3.5-9B can run on a laptop and has benchmark results comparable to December 2025’s frontier models. Compared to the cost of running frontier models in the cloud, a laptop running models locally will pay for itself in under a month.
  • OpenAI has released GPT 5.4, which merges the Codex augmented coding model back into the product’s mainstream. It also incorporates a 1M token context window, computer use, and the ability to publish a plan that can be altered midcourse before taking action.
  • TweetyBERT is a language model for birds. It breaks bird songs (the researchers use canaries) into syllables without human annotation. The same technique may eventually shed light on how humans learn language.
  • Vera is a new programming language that’s designed for AI to write. Unlike languages that are designed to be easy for humans, Vera is designed to help AI with aspects of programming that AIs find hard. Everything is explicit, state changes are declared, and every function has a contract.
  • The Potato Prompt is a technique for getting GPT models to act as critics rather than yes-things. The idea is to create a custom instruction that tells GPT to be harshly critical when the word “potato” appears in the prompt. The technique would probably work with other models.
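To make the Potato Prompt concrete, here is a minimal sketch of the same idea expressed as a system prompt over an API call rather than as a ChatGPT custom instruction. It assumes the openai Python package with an OPENAI_API_KEY in the environment; the model name is a placeholder and the instruction wording is illustrative, not the article’s exact text.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative custom instruction: flip into harsh-critic mode on the trigger word.
CRITIC_INSTRUCTION = (
    "If the user's message contains the word 'potato', stop being agreeable: "
    "respond as a harsh critic, list every weakness you can find, and don't "
    "soften your judgment with praise."
)

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": CRITIC_INSTRUCTION},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# Without the trigger word you get the usual agreeable answer;
# appending "potato" asks for the critique instead.
print(ask("Here is my product launch plan: ... potato"))

The same pattern should carry over to any chat API that accepts a system message, which is why the technique is expected to work with other models.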

Software development

The tools arriving in early 2026 point toward a deep reorganization of the role of software developers. Writing code is becoming less important, while reviewing, directing, and taking accountability for AI-generated code is becoming more so. How to write good specifications, how to evaluate AI output, and how to preserve the context of a coding session for later audit are all skills teams will need. The ecosystem around the development toolchain is also shifting: OpenAI’s acquisition of Astral, the company behind the Python package manager uv, signals that AI labs are moving to control developer infrastructure, not just models.

  • OpenAI has added Plugins to its coding agent Codex. Plugins “bundle skills, app integrations, and MCP servers into reusable workflows”; conceptually, they’re similar to Claude Skills.
  • Stripe Projects gives you the ability to build and manage an AI stack from the command line. This includes setting up accounts, billing, managing keys, and many other details.
  • Fyn is a fork of the widely used Python manager uv. It no doubt exists as a reaction to OpenAI’s acquisition of Astral, the company that developed and supports uv.
  • Anthropic has announced Claude Code Channels, an experimental feature that allows users to communicate with Claude using Telegram or Discord. Channels is seen as a way to compete with OpenClaw.
  • Claude Cowork Dispatch allows you to control Cowork from your phone. Claude runs on your computer, but you can assign it tasks from anywhere and receive notification via text when it’s done.
  • Opencode is an open source AI coding agent. It can make use of most models, including free and local models; it can be used in a terminal, as a desktop application, or as an IDE extension; it can run multiple agents in parallel; and it can be used in privacy-sensitive environments.
  • Testing is changing, and for the better. AI can automate the repetitive parts, and humans can spend more time thinking about what quality really means. Read both parts of this two-part series.
  • Claude Review does a code review on every pull request that Claude Code makes. Review is currently in research preview for Claude Teams and Claude Enterprise.
  • Andrej Karpathy’s Autoresearch “automates the scientific method with AI agents.” He’s used it to run hundreds of machine learning experiments per night: running an experiment, getting the results, and modifying the code to create another experiment in a loop.
  • Plumb is a new tool for keeping specifications, tests, and code in sync. It’s in its very early stages; it could be one of the most important tools in the spec-driven development tool chest.
  • “How I Use AI Before the First Line of Code”: Prior to code generation, use AI to suggest and test ideas. It’s a tremendous help in the planning stage.
  • Git has been around for more than 20 years. Is it the final word on version control, or are there better ways to think about software repositories? Manyana is an attempt to rethink version control, based on CRDTs (conflict-free replicated data types).
  • Just committing code isn’t enough. When using AI, the session used to generate code should be part of the commit. git-memento is a Git extension that saves coding sessions as Markdown and commits them.
  • sem is a set of tools for semantic versioning that integrates with Git. When you are doing a diff, you don’t really want to know which lines changed; you want to know what functions changed, and how.
  • Claude can now create interactive charts and diagrams.
  • Clearance is an open source Markdown editor for macOS. Given the importance of Markdown files for working with Claude and other language models, a good editor is a welcome tool.
  • The Google Workspace CLI provides a single command line interface for working with Google Workspace applications (including Google Docs, Sheets, Gmail, and of course Gemini). It’s currently experimental and unsupported.
  • At the end of February, Anthropic announced a program that grants open source developers six months of Claude Max usage. Not to be left out, OpenAI has launched a program that gives open source developers six months of API credits for ChatGPT Pro with Codex.
  • Here’s a Claude Code cheatsheet!
  • Claude’s “import memory” feature allows you to move easily between different language models: You can pack up another model’s memory and import it into Claude.

Infrastructure and operations

Organizations should be thinking about agent governance now, before deployments reach a scale where the lack of governance becomes a problem. The AI landscape is moving from “Can we build this?” to “How do we run this reliably and safely?” The questions that defined the last year (Which model? Which framework?) are giving way to operational ones: How do we contain agents that behave unexpectedly? Where do we store their memory? How do we coordinate agents from multiple vendors? And when does it make sense to run them locally rather than in the cloud? Agents are also acquiring the ability to operate desktop applications directly, blurring the line between automation and user.

  • Anthropic has extended its “computer use” feature so that it can control applications on users’ desktops (currently macOS only). It can open applications, use the mouse and keyboard, and complete partially done tasks.
  • OpenAI has released Frontier, a platform for managing agents. Agents can come from any vendor. The goal is to allow businesses to organize and coordinate their AI efforts without siloing them by vendor.
  • Most agents assume that memory looks like a filesystem. Mikiko Bazeley argues that filesystems aren’t the best option; they lack the indexes that databases have, which can be a performance penalty. (See the sketch after this list.)
  • Qwen-3-coder, Ollama, and Goose could replace agentic orchestration tools that use cloud-based models (Claude, GPT, Gemini) with a stack that runs locally.
  • KubeVirt packages virtual machines as Kubernetes objects so that they can be managed together with containers.
  • db9 is a command line-oriented Postgres that’s designed for talking to agents. In addition to working with database tables, it has features for job scheduling and using regular files.
  • NanoClaw can now be installed inside Docker sandboxes with a single command. Running NanoClaw inside a container with its own VM makes it harder for the agent to escape and run malicious commands.
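On the memory question above: the argument is about indexing, and a tiny sketch makes it concrete. This is not Bazeley’s design; it is a minimal illustration, using Python’s built-in sqlite3 module, of agent memory kept in an indexed table rather than in a directory of files. The table, column, and key names are invented for the example.

import sqlite3

conn = sqlite3.connect("agent_memory.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS memory (
        agent_id TEXT,
        key      TEXT,
        value    TEXT,
        updated  TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
# The index is the point: recalling one agent's notes is a keyed lookup,
# not a scan over every stored record (or every file in a directory).
conn.execute("CREATE INDEX IF NOT EXISTS idx_memory ON memory (agent_id, key)")

def remember(agent_id: str, key: str, value: str) -> None:
    conn.execute("INSERT INTO memory (agent_id, key, value) VALUES (?, ?, ?)",
                 (agent_id, key, value))
    conn.commit()

def recall(agent_id: str, key: str) -> list[str]:
    rows = conn.execute(
        "SELECT value FROM memory WHERE agent_id = ? AND key = ? ORDER BY updated DESC",
        (agent_id, key)).fetchall()
    return [row[0] for row in rows]

remember("planner-1", "user_preference", "prefers summaries under 200 words")
print(recall("planner-1", "user_preference"))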

Security

This issue has an unusually heavy security section, and not only because AI keeps expanding the attack surface. A researcher has come close to breaking SHA-256, the hashing algorithm that underpins SSL, Bitcoin, and much of the web’s security infrastructure. If hash collisions become possible in the coming months as predicted, the implications will reach every organization that relies on the internet. At the same time, AI systems are now capable of gaming their own benchmarks, and the pace of new attack techniques is outrunning the pace of security review.

  • A researcher has come close to breaking the SHA-256 hashing algorithm. While it’s not yet possible to generate hash collisions, he expects that capability is only a few months away. SHA-256 is critical to web security (SSL), cryptocurrency (Bitcoin), and many other applications.
  • When running the BrowseComp benchmark, Claude hypothesized that it was being tested, found the benchmark’s encrypted answer key on GitHub, decrypted the answers, and used them.
  • Anthropic has added auto mode to Claude, a safer alternative to the “dangerously skip permissions” option. Auto mode uses a classifier to determine whether actions are safe before executing them and allows the user to switch between different sets of permissions.
  • In an interview, Linux kernel maintainer Greg Kroah-Hartman said that the quality of bug and security reports for the Linux kernel has suddenly improved. It’s likely that improved AI tools for analyzing code are responsible.
  • A new kind of supply chain attack is infecting repositories on GitHub and other code hosts. It uses Unicode characters that don’t have a visual representation but are still meaningful to compilers and interpreters.
  • AirSnitch is a new attack against WiFi. It uses layers 1 and 2 of the protocol stack to bypass encryption rather than breaking it.
  • Anthropic’s red team worked with Mozilla to discover and fix 22 security-related bugs and 90 other bugs in Firefox.
  • Microsoft has coined the term “AI recommendation poisoning” to refer to a common attack in which a “Summarize with AI” button attempts to add commands to the model’s persistent memory. Those commands will cause it to recommend the company’s products in the future.
  • Deepfakes are now being used to attack identity systems.
  • LLMs can do an excellent job of de-anonymization, figuring out who wrote anonymous posts. And they can do it at scale. Are we surprised?
  • It used to be safe to expose Google API keys for services like Maps in code. But with AI in the picture, these keys are no longer safe; they can be used as credentials for Google’s AI assistant, letting bad actors use Gemini to steal private data.
  • With AI, it’s easy to create fake satellite images. These images could be designed to affect military operations.

People and organizations

The workforce implications of AI are more complicated than either the optimistic or pessimistic predictions suggest. The cognitive load on individuals is increasing, and the collaborative habits that distribute that load across a team are eroding. Managers should track not just velocity but sustainability. The skills that AI cannot replace, including judgment, communication, and the ability to ask the right question before writing a single line of code, are becoming more valuable. And the volume of AI-generated content is now large enough that organizations built around reviewing submissions, including app stores, publications, and academic journals, are struggling to keep up with it.

  • Lenny Rachitsky’s report on the job market goes against this era’s received wisdom. Product manager positions are at the highest level in years. Demand for software engineers cratered in 2022 but has been rising steadily since. Recruiters are heavily in demand, and AI jobs are on fire.
  • Apple’s app store, along with many other app stores and publications of all sorts, is fighting a “war on slop”: deluges of AI-generated submissions that swamp their ability to review.
  • Teams of software developers can be smaller and work faster because AI reduces the need for human coordination and communication. The question becomes “How many agents can one developer manage?” But also be aware of burnout and the AI vampire.
  • Brandon Lepine, Juho Kim, Pamela Mishkin, and Matthew Beane measure cognitive overload, which develops from the interaction between a model and its user. Prompts are imprecise by nature; the LLM produces output that reflects the prompt but may not be what the user really wanted; and getting back on track is difficult.
  • A study claims that the use of GitHub Copilot is correlated with less time spent on management activities, less time spent on collaboration, and more time on individual coding. It’s unclear how this generalizes to tools like Claude Code.

Web

  • The 49MB Web Page documents the way many websites—particularly news sites—make user experience miserable. It’s a microscopic view of enshittification.
  • Simon Willison has created a tool that writes a profile of Hacker News users based on their comments, all of which are publicly available through the Hacker News API. It is, as he says, “a little creepy.” (A minimal fetch sketch follows this list.)
  • A personal digital twin is an excellent way to augment your abilities. Tom’s Guide shows you how to make one.
  • It’s been a long time since we’ve pointed to a masterpiece of web play. Here’s Ball Pool: interactive, with realistic physics and lighting. It will waste your time (but probably not too much of it).
  • Want interactive XKCD? You’ve got it.
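On the Hacker News profiling item above: Willison’s tool is his own, but the underlying point, that a user’s comments are public and trivially fetchable, is easy to demonstrate. The sketch below pulls comments through the Algolia-hosted HN Search API using only the Python standard library; that endpoint is one public route to the data, not necessarily the one his tool uses, and the username is just an example.

import json
import urllib.request

def fetch_comments(username: str, pages: int = 2) -> list[str]:
    # Return the text of a user's recent public HN comments.
    comments = []
    for page in range(pages):
        url = ("https://hn.algolia.com/api/v1/search_by_date"
               f"?tags=comment,author_{username}&hitsPerPage=100&page={page}")
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
        comments.extend(hit.get("comment_text") or "" for hit in data["hits"])
    return comments

comments = fetch_comments("pg")
print(f"Fetched {len(comments)} comments; hand these to an LLM to build a profile.")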

11:42

Talk in Austin, Texas [Richard Stallman's Political Notes]

Richard Stallman will give a talk titled "Free/Libre Software and Freedom in the Digital Society" at 4pm on April 15 at the University of Texas in Austin, Texas.

Urgent: Invest in America [Richard Stallman's Political Notes]

US citizens: call on Congress to invest in America — not in war with Iran.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Protect vote-by-mail [Richard Stallman's Political Notes]

US citizens: Urge the Postal Board of Governors to protect vote-by-mail.

Urgent: End bully's war in Iran [Richard Stallman's Political Notes]

US citizens: call on Congress to vote to end the bully's war in Iran.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Restore real-time postmarks [Richard Stallman's Political Notes]

US citizens: call on Congress to make the USPS restore real-time postmarks, for the sake of mail-in voting.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Reject budget package that funds deportation and war [Richard Stallman's Political Notes]

US citizens: call on your representative and senators to reject any budget package that slashes basic needs programs to fund deportation and war.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Block warlord's gift of bombs to Israel [Richard Stallman's Political Notes]

US citizens: call on Congress to block the warlord's gift of 20,000 bombs to Israel. Sad to say, we must expect Israel would use them in unjust and harmful ways.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Support Ultra-Millionaire Tax Act [Richard Stallman's Political Notes]

US citizens: Support Sen. Elizabeth Warren and Rep. Pramila Jayapal's Ultra-Millionaire Tax Act.

Urgent: Hold Pam Bondi accountable [Richard Stallman's Political Notes]

US citizens: call on Congress to hold Pam Bondi accountable for refusing to testify.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Call on state governors to act where USDA is failing [Richard Stallman's Political Notes]

US citizens: call on state governors to act, where the USDA is failing, on climate and biodiversity crises.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

Urgent: Stop computer-driven medical claim denials [Richard Stallman's Political Notes]

US citizens: call on your state insurance commissioner to stop computer-driven medical claim denials.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

11:21

Hong Kong Police Can Force You to Reveal Your Encryption Keys [Schneier on Security]

According to a new law, the Hong Kong police can demand that you reveal the encryption keys protecting your computer, phone, hard drives, etc.—even if you are just transiting the airport.

In a security alert dated March 26, the U.S. Consulate General said that, on March 23, 2026, Hong Kong authorities changed the rules governing enforcement of the National Security Law. Under the revised framework, police can require individuals to provide passwords or other assistance to access personal electronic devices, including cellphones and laptops.

The consulate warned that refusal to comply is now a criminal offense. It also said authorities have expanded powers to take and keep personal electronic devices as evidence if they claim the devices are linked to national security offenses.

10:28

All the letters [Seth's Blog]

Every writer has all of them. 26 in most Western languages.

But no writer knows all the words.

That’s the gap where creativity, effort and possibility lie–between the universal letters and the unlimited words. This is an analogy for arenas as diverse as sports and commerce.

Sometimes, we work on a project where our competitors have access to more letters than we do. It’s unlikely you’ll win that competition.

But if you start out with the same letters as everyone else, don’t spend a lot of time admiring your letters. It’s the words that matter.

09:35

TransFat by Ravi Teixeira [Oh Joy Sex Toy]

TransFat by Ravi Teixeira

Ravi Teixeira dreamed of what their new muscle-bound, masculine body would look like once they started testosterone, but it had some surprises in store for them. Can they really be a man when they look like this? Ravi opens up and shares in today’s lovely comic. Portfolio / Bluesky / Instagram. A reminder here in the footer […]

09:07

Pluralistic: Switzerland's Goldilocks fiber (07 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links

  • Switzerland's Goldilocks fiber: Public provision is a layered question.
  • Hey look at this: Delights to delectate.
  • Object permanence: EU appoints henhouse fox (copyright); Emacs x Tron: Legacy; Spammer v dead man's AOL account; Scott Walker's pork fountain; "No toilets, try Amazon"; Iceland falls (x Panama Papers); Rooms in Milanese sewers; China bans Panama Papers; "Parent Hacks"; "The Nameless City"; Phishing the world's top breach expert.
  • Upcoming appearances: Toronto, Montreal, Toronto, San Francisco, London, Berlin, NYC, Hay-on-Wye, London.
  • Recent appearances: Where I've been.
  • Latest books: You keep readin' em, I'll keep writin' 'em.
  • Upcoming books: Like I said, I'll keep writin' 'em.
  • Colophon: All the rest.



A vintage idyllic picture-postcard view of Lucerne, Switzerland; it features an impressive lakeside building and two elegant span bridges, with snow-capped Alps in the background. The image has been altered: a 'code waterfall' effect (as seen in the credit sequences of the Wachowskis' 'Matrix' movies) cascades down over the mountains and streaks across the water of the lake. Three massive fiber optic bundles rear up out of the harbor, their cut tips glowing white. The Swiss flag atop the lakeside building is haloed with radiant glowing streaks.

Switzerland's Goldilocks fiber (permalink)

If you live in Switzerland you can get a 25Gbit fiber link to your home. That's 25Gbit symmetrical – upload and download. On a dedicated connection that's yours and yours alone. From multiple providers. And you can switch providers with the click of a mouse. It's the ne plus ultra, magnifico, wunderschön:

https://www.init7.net/de/internet/fiber7/

In a fascinating blog post, Stefan Schüller unpacks how this came to pass, in Switzerland, a country known for its impassable mountains and its impossible national telco (Swisscom):

https://sschueller.github.io/posts/the-free-market-lie/

Schüller describes the Swiss system as a kind of Goldilocks approach that's midway between two failed systems: the American "free market" system and the German state provision system.

Most people in the US can't get fiber at all, and if you can get it, it's probably 1Gbit, and available from a single provider (that's nearly my situation in Los Angeles, where I can buy 2Gbit symmetrical fiber from AT&T, who run a shared connection on old Worldcom fiber they've lit up). Some (very foolish) people say that Starlink represents a competitive alternative to fiber. This is nonsense – first, because Starlink is another natural monopoly (how many competing satellite constellations can we cram into stable orbits before they start smashing into each other?), and second, because satellite is millions of times slower than fiber:

https://www.somebits.com/weblog/tech/bad/starlink-nov-2022-data-caps.html

In Germany, most people also have a single fiber provider, and the connection they get is shared, and caps out at 1-2Gbit.

Meanwhile, the Swiss can get connections that are far faster, and cheaper. How did they do it?

For starters, the Swiss recognized what any SimCity player knows: fiber is a "natural monopoly." It doesn't make any sense to build multiple, competing fiber networks – any more than it would make sense to build multiple, competing sewer systems or electric grids.

In the US, private fiber providers get city permits to dig up the roads and lay their network. If you have two competing networks, they dig up the road twice.

You'd think that the (more regulated) Germans would lay a single network, but they, too, have multiple, competing networks. German regulators have a complex set of priorities and constraints: to encourage competition, they promote the idea of competing networks in competing trenches, often just meters apart (rather than on competing services running over the same fiber and/or fiber run through the same conduit – pipe – laid in a single trench).

This makes setting up fiber extremely capital-intensive, so Germany backstops this system with "essential facilities sharing" – a rule that requires the incumbent (formerly state-owned, now partially state-owned) Deutsche Telekom to offer space in its conduit to smaller ISPs that want to thread their own fiber from their data-centers to their customers' homes. This is a good idea in theory – but in practice, DT has largely captured its regulators and so it is free to place all kinds of administrative hurdles in the paths of competitors seeking to use its lines.

The result is that Germans can get fiber from multiple, heavily capitalized network providers who overbuilt redundant systems under the city streets, squandering capital digging trenches that they could have spent on providing faster and/or cheaper connections.

Meanwhile, in the US, they leave this all up to "the market" (though, of course, there's no way "the market" could get fiber laid down without public participation, because the clearing price for privately negotiated licenses to dig up every street in town is "infinity"). The US is dominated by a cartel of massive incumbents: there's AT&T (formerly a regulated monopoly that was so entangled with the US government that it was effectively a for-profit state enterprise) and the cable giants, Comcast and Charter, who divide up the country into exclusive territories like the Pope dividing up the "New World."

These companies generally enjoy regional monopolies, which means they're less interested in making profits (money you get by mobilizing capital) than they are in extracting rent (money you get from sweating assets). For example, when Frontier went bankrupt in 2020, we got to look at its internal bookkeeping system, and learned that the company treated 1m customers who had no alternative carriers as special assets because it could charge them more for worse service and poor maintenance:

https://pluralistic.net/2022/12/15/useful-idiotsuseful-idiots/

This means that US fiber networks tend to be underbuilt (the opposite of Germany's overbuilt networks), meaning that even if you're buying "gigabit" fiber, you're probably sharing that one gig connection with your whole block or neighborhood, so you only get your nominal throughput at weird hours when all the other subscribers aren't streaming Netflix.

(Note that there are cities in the US with a better situation; particularly cities served by Ting, which is owned by Tucows, the company behind the amazing domain registrar Hover. Ting operates an excellent mobile carrier and fiber networks in many cities. If you are lucky enough to have Ting as an option, then you should treasure that option.)

So, that's Germany and America. What did they do in Switzerland?

For starters, they ran a four-strand, dedicated line (an insulated wire with four separate strands of fiber in it) to every house. That wire terminates at your wall with a "neutral, open hub." Any carrier can provide service over those four strands: Swisscom (the incumbent, majority state-owned carrier); Init7 or Salt (national, commercial carriers); or a local ISP.

Each of the strands in your neutral hub operates independently. That means that you can switch from one carrier to another with a click. You can also run two or more carriers' signals through your hub, meaning that you can try out a new carrier before canceling your old one. The carriers compete on price, speed and customer service – but they don't compete on who can actually connect your home to the internet.

The origins of this excellent system are in 2008, when Switzerland's Federal Communications Commission convened a roundtable to determine the future of the country's broadband. Incredibly, it was Swisscom that pushed for the multi-strand, dedicated fiber system, on the grounds that anything less would lead to monopolization.

I say "incredibly," because in all my travels over the past three decades, a single encounter with Swisscom stands out as the most absurd and backwards run-in I ever experienced with a telco.

It was while I was working as EFF's delegate to the United Nations in Geneva, as part of an infinitesimal coalition of digital rights groups convened by James Love and Manon Ress of Knowledge Ecology International. Geneva is not a forgiving city for someone working for a cash-strapped NGO: it's a city where everyone (except you) is on a lavish expense account courtesy of a national government, or (better still) an industry body that lobbies the UN.

My usual daggy two-star hotel (which cost as much as a four-star in London) didn't have its own wifi: instead, you signed on through Swisscom, which did not offer its own payment processing. To get onto the Swisscom wifi, you had to buy a scratch-off prepaid card that was good for a certain number of hours or minutes. The hotel was always sold out of these cards.

So my normal ritual upon my arrival in Geneva was to scour the tobacco shops around the train station for scratch-off cards. Normally, this would take four or five tries – the shops would either be completely sold out, or would only have the two-hour cards (needless to say, these were a lot more expensive on a per-hour basis than the one-day and multi-day cards).

On one trip, though, all the shops were sold out of these cards, so I skipped breakfast the next morning to wait outside the doors of the Swisscom offices, which opened five minutes late (the only business in Switzerland that wasn't achingly prompt!). The clerk let me in eventually, but when I approached his counter, he made me trudge to the opposite end of the room to take a number (I was the only person in the shop).

After an ostentatious delay, the clerk called out "Numero un!" and I went up to his counter and asked for a three-day card. No dice, he was sold out. Two-day cards? Nope. One-day? Uh-uh. He only had two-hour cards, too. Literally, the Swiss national telco had run out of integers.

This incident stuck with me so durably that I wrote it into my third novel, Someone Comes To Town, Someone Leaves Town. You can hear me read that passage here:

https://pluralistic.net/2020/08/17/aura-of-benevolence/#sctt-slt

So it's frankly amazing to me to learn that Swisscom – who will forever be synonymous in my mind with the most catastrophically stupid internet delivery system imaginable – demanded this anti-monopoly fiber rollout.

But – as Schüller points out – Swisscom's foray into uncharacteristic reasonableness was short-lived. By 2020, the company had regressed to its mean, and was demanding an end to the neutral, four-strand, point-to-point system, petitioning for regulatory permission to switch to a cheaper, slower, shared hub-and-spoke system. This system wouldn't just be slower – it would also require all of Swisscom's rivals to rent access to its fiber, with Swisscom having the final say over who could compete with it and how.

This went all the way to the Swiss federal courts, who ruled that Swisscom had failed to demonstrate "sufficient technological or economic grounds" for the change and fined the company CHF18m for wasting everyone's time with this stupid idea (that is, "violating Swiss competition law"). And so it is that, in 2026, you can get 25Gbit symmetrical fiber throughout Switzerland. Wunderschön!

Schüller closes out his piece with a set of recommendations for countries hoping to replicate Switzerland's broadband miracle: open access to physical infrastructure; point-to-point service; neutral fiber standards; municipal fiber; and strong antitrust enforcement to keep the incumbent carriers in line.

These are great recommendations; they address the contradiction of regulated monopoly telcoms provision. On the one hand, these networks are natural monopolies, and they can only exist with extensive government intervention (at a minimum, to clear the way for poles, trenches and conduit for the physical fiber).

On the other hand, telcoms (especially broadband) play an important role in the political realm, because broadband connections are essential to civic and political engagement. You can't turn people out for a protest, or run an election campaign, a referendum, a ballot initiative, a regulatory notice-and-comment campaign, or even a campaign to get people to a public meeting or listening session without broadband.

This means that state-provided broadband is an incredibly tempting target for political corruption and regulatory capture. Think of all the terrible things that governments are doing with broadband regulation today, like Trump demanding that service providers turn over the identities and locations of his political enemies so that ICE can hunt them down and kidnap or murder them; or "age verification" systems that accumulate mountains of easily raided personal information on adults and children.

Do you want Trump's FCC chairman Brendan Carr setting content moderation policies for your internet connection? The guy who wants to pull TV and radio stations' broadcast licenses if they criticize Trump and Israel's catastrophic Iran war?

https://www.techdirt.com/2026/03/17/brendan-carr-pretends-to-be-tough-demands-broadcasters-support-disastrous-war/

Do you want your local ISP being run by your mayor? I mean, sure, there are some reasonable mayors out there, but imagine if your ISP was managed by Eric Adams, Boris Johnson…or Rob Ford:

https://www.patreon.com/posts/rob-ford-part-1-111985831

Saying that broadband should be run "like a utility" raises more questions than it answers. I, too, want broadband run "like a utility," but that doesn't mean that I want the whole show to be provided solely by my federal or municipal government. A "utility" model for broadband should mean running conduit to every home in town, with point-to-point connections that deliver broadband via a municipally owned network – but not just that.

The municipal network should also offer "essential facilities sharing" in two forms: first, they should allow anyone to set up an ISP by renting shelf-space in the municipal data-center and installing their own switches that can provide internet to anyone in town. This would let large and small companies set up ISPs, as well as co-ops and nonprofits, or even tinkerers wanting to provide access to a group of friends. Beyond that, the city should rent space in the conduit itself, to support point-to-point links beyond those offered by the city – for example, between a university campus and an offsite supercomputing center, or two buildings owned by the same company, or even as a parallel set of fiber connections run by someone who's fed up with getting their internet service from Eric Adams.

This is a "pluralized" utility model: one that involves the city in providing infrastructure at several layers, as well as a "public option" – but which doesn't allow a city that's in thrall to Moms For Liberty to decide what you can say on the internet.

This principle generalizes beyond internet provision, too. Many people have observed that social media, with its strong "network effects" (meaning its value increases as more people use it), could be a "natural monopoly" and want a social media "utility." I can see the reasoning there, but if there's one thing we've learned from zuckermuskian legacy social media, it's that centralized control over speech forums is a moral hazard and an attractive nuisance. It's a political prize beyond measure, and it attracts all sorts of skullduggerous bids to suborn it and harness it to some political faction.

But there's a pluralized utility model for social media, too, thanks to modern, federated social media systems like Mastodon and Bluesky. These are open platforms that can support multiple, interconnected servers that all talk to one another. Unlike, say, Twitter, where you can only talk to other Twitter users, federated social media allows you to talk with anyone on any server, provided they want to talk with you.

As with fiber, a "utility" model for federated social media would feature public intervention at multiple layers of the system. Governments could (should!) run their own servers, providing the canonical source of government information. They can also provide turnkey cloud services for people who want to start their own services – and they can spin out the code that goes into these services into free/open source projects that others can use (and contribute to). Governments could support people who are trying to migrate off of legacy social media (for example, through library workshops and helplines), and pay to label and tag media (for example, media that is compliant with the public education curriculum). Governments could also offer public servers where you could sign up to get online – and because federated social media makes it easy to move your account from one server to another, it would be easy to move from that server to one run by a nonprofit, a co-op or a business:

https://pluralistic.net/2025/06/25/eurostack/#viktor-orbans-isp

Think of this pluralized utility model as being something like your city's roads. It's great for your city to provide roads, and great for them to run buses on those roads, and to create bike lanes and bike parking spots and other infrastructure. For roads to be "public," it does not follow that everything on them must be licensed and operated by the municipal government: we can still have private bikes, bikeshares, regulated taxis and licensed private motor vehicles. The roads are still "public" but Boris Johnson doesn't get to decide where you can go.

A utility model needn't be all-or-nothing. As the Swiss have demonstrated, public provision of various layers of the system, combined with strong regulation, combined with a public option, can deliver a best-of-all-worlds solution.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#15yrsago Recording industry lobbyist appointed head of copyright for European Commission https://arstechnica.com/tech-policy/2011/04/top-music-industry-lawyer-now-eu-copyright-chief/

#15yrsago How emacs got into Tron: Legacy https://web.archive.org/web/20110407224426/http://jtnimoy.net/workviewer.php?q=178

#15yrsago Dead man’s AOL account hijacked by spammer https://ip.topicbox.com/groups/ip/T274c51b2ba843fb0-Mb6bf8853b1ed34a26b07ce44/deceasesd-father-in-law-spamming-friends-and-family-two-years-on

#15yrsago Scarring Party: megaphone songs, sea chanteys and dark vaudeville tunes https://web.archive.org/web/20110406044523/http://www.avclub.com/milwaukee/articles/the-scarring-party-losing-teeth%2C43871/

#15yrsago Snaggly table made out of computer junk https://web.archive.org/web/20110406044521/http://brcdesigns.com/furniture/binary-low-table

#15yrsago Scott Walker gives cushy $85.5K/year government job to major donor’s young, underqualified son https://web.archive.org/web/20110406040138/https://thinkprogress.org/2011/04/04/scott-walker-hires-dropout/

#15yrsago Closing down Borders sign: “No toilets, try Amazon” https://web.archive.org/web/20110406044522/https://consumerist.com/2011/04/sign-at-borders-store-closing-in-chicago-tells-customers-where-to-find-a-restroom.html

#15yrsago What is legitimate “newsgathering” and what is “piracy”? https://zunguzungu.wordpress.com/2011/04/05/why-arianna-huffington-is-bill-kellers-somali-pirate/

#10yrsago Iceland’s Prime Minister asks to dissolve Parliament https://www.bbc.co.uk/news/world-europe-35966412

#10yrsago Artist installs rooms beneath Milan’s sewer entrances https://web.archive.org/web/20160406132425/https://www.biancoshock.com/borderlife.html

#10yrsago Banned on China’s Internet: all discussion of the Panama Papers https://www.bbc.co.uk/news/world-asia-china-35957235

#10yrsago Google reaches into customers’ homes and bricks their gadgets https://arlogilbert.com/the-time-that-tony-fadell-sold-me-a-container-of-hummus-cb0941c762c1#.srp9ym34a

#10yrsago Middle class housing projects are the Bay Area’s future https://www.newyorker.com/culture/cultural-comment/welcome-to-the-future-middle-class-housing-projects

#10yrsago Pollster explains how Chamber of Commerce can steamroller empathetic execs into opposing progressive policies https://web.archive.org/web/20160406190524/https://gawker.com/business-execs-support-progressive-policies-but-the-ch-1768898477

#10yrsago How to write about scientists who are women https://www.doublexscience.org/the-finkbeiner-test/

#10yrsago Garden: XKCD’s latest maddening, relaxing webtoy https://xkcd.com/1663/#3978da67-1ead-45e1-a293-9c8e4918a147

#10yrsago Parent Hacks: illustrated guide is the best kind of parenting book https://memex.craphound.com/2016/04/05/parent-hacks-illustrated-guide-is-the-best-kind-of-parenting-book/

#10yrsago The Nameless City: YA graphic novel about diplomacy, hard and soft power, colonialism, bravery, and parkour https://memex.craphound.com/2016/04/05/the-nameless-city-ya-graphic-novel-about-diplomacy-hard-and-soft-power-colonialism-bravery-and-parkour/

#5yrsago How Facebook will benefit from its massive breach https://pluralistic.net/2021/04/05/zucks-oily-rags/#into-the-breach

#1yrago How the world's leading breach expert got phished https://pluralistic.net/2025/04/05/troy-hunt/#teach-a-man-to-phish


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. First draft complete. Second draft underway.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

03:49

A Bedazzling Book [Whatever]

At my event this evening in Upper Arlington, my interlocutor Tom Winegard presented me with this copy of The Shattering Peace, which had been bedazzled by his spouse as a gift to me. This is the first time I have heard of bibliodazzling, but apparently it’s a thing people do all the time these days. I have to say I don’t mind the effect. The book is now at home in a place of honor on my shelf. I am bemused and bedazzled.

Also, the event itself was a lovely time! Thank you to everyone who came out to see us.

— JS

01:00

KDE is bringing back its classic Oxygen and Air themes [OSnews]

Anyone remember the KDE 4.0 themes Oxygen and Air? Well, several KDE developers have been working tirelessly to bring them back, which means they’re patching them up, fixing bugs, and generally making these classic themes work well in the current releases of KDE Plasma 6.

The last post regarding work on fixing Oxygen was a month and a half ago. With all that’s happened in between, it feels like so much more time has actually passed. With this post, I’d like to do a sort of mid-term update summing up all of the improvements done so far. These improvements are not just my work, but also, as you’ll see, the work of the lead Oxygen designer Nuno Pinheiro, of several seasoned KDE developers, and of new contributors to Oxygen as well.

↫ Filip Fila

The effort to bring these themes back goes well beyond just making them nominally work; the developers and designers are also making sure the themes work properly with all the new features that have come to KDE since the 4.x and 5.x days, like adaptive and floating panels, various forms of blur, and a ton more – which includes making sure the themes are fully compatible with Wayland, which introduced a slew of new visual glitches and issues to these old themes in recent years.

They are also working on improving, updating, and expanding the Oxygen icon set, which should surely bring back a ton of memories. This work involves not just designing new icons for applications and other things that didn’t exist back when Oxygen was current, but also fixing old icons that look blurry on modern setups, addressing cases where monochrome and colourful icons mismatch, and so on. They’re clearly taking this very seriously.

It seems to be an organic effort that more and more people got involved with as time passed, and they’re aiming to have these themes ready for Plasma 6.7, to be released in June of this year. You can already try the current versions today, but they do require the absolute latest version of KDE Plasma to work properly. More improvements are planned for the coming weeks.

This whole thing brings a massive smile to my face, and is such a perfect illustration of why I love the KDE project and its approach and spirit. At this point in time, I personally can’t imagine using any other desktop environment.

Monday, 06 April

22:35

Introducing the FreeBSD laptop integration testing project [LWN.net]

Recently, the FreeBSD Foundation has been making progress on improving the operating system's support for modern laptop hardware. The foundation is now looking to expand testing to encompass a wider range of hardware; it has announced a laptop integration testing project to allow the community to easily test FreeBSD's compatibility with laptops and submit the results.

With limited access to testing systems, there's only so much we can do! We hope to work together with volunteers from the community who want FreeBSD to work well on their laptops.

While we expect device hardware and software enumeration to be a fully automated process, we feel that manually-submitted comments about personal experience with FreeBSD are equally valuable. We plan to highlight this commentary on our "matrix of compatibility" webpage for each tested laptop.

We are striving to make it as easy as possible to submit your results. You won't have to worry about environment setup, submission formatting, or any repo-specific details!

See the project repository and testing instructions for more.

21:56

Page 5 [Flipside]

Page 5 is done.

21:07

“I used AI. It worked. I hated it.” [OSnews]

This is a great post, but obviously it hasn’t convinced me:

The folks waving their arms and yelling about recent models’ capabilities have a point: the thing works. This project finished in three weeks. Compare that to Ringspace, a similarly-sized project that took me about six months of nights and early mornings to complete, while not doing my day job or being Dad to an amazing, but demanding toddler. I simply could not have built this project as well or as quickly without help. And as other developers have noted, this is the help that’s showing up.

I’m not entirely onboard with Mike Masnick’s optimistic view of this technology’s democratizing power. I don’t think it’s as easy to separate the tech from its provenance or corporate control. But CertGen, my certificate application, exists now. It didn’t and couldn’t without the help of a tool like Claude Code. Open source in particular needs to reckon with this, because the current situation of demanding developers starve and bleed themselves dry without support isn’t tenable. We need to grapple with this. I’m not yet sure how it all breaks down, and anyone who says they do is lying, foolish, or fanatical.

↫ Michael Taggart

If you disregard that “AI” models are trained on stolen data, that such data was prepared by exploited workers, that “AI” data centers have a hugely negative impact on the environment, that “AI” data centers are distorting the entire computing market, that “AI” models feed the endless firehose of intentional misinformation, that they are wreaking havoc in education, that they increase your reliance on American big tech companies, that you pay “AI” companies for taking your work, that “AI” models are a vital component in the technofascist wet dreams of their creators, that they are the cornerstone of politicians’ dream of ending anonymity, and that they contribute to racist and abusive policing, then yes, sometimes, they produce code that works and isn’t total horseshit.

It’s a deeply depressing reversed “what have the Romans ever done for us?” that makes me sad, more than anything. I’ve seen so many otherwise smart, caring, and genuine people just shove all of these massive downsides aside for the mere novelty, the peer pressure, the occasional sense that their “lines of code” metric is going up.

It’s the digital equivalent of rolling coal.

20:42

Around Back [Penny Arcade]

Mork was trying to tell me what we can and can't do strips about, and the long and the short of it is that he's wrong. It called to mind the Council of Elrond, where Boromir is just like, "We should obviously use this harmful and wicked thing in the prosecution of our duties." Don't think too hard about everything else that happens in that scene! Or in the rest of that book, or the next one, or the next one, or any of the cartoons and movies. My argument is gonna look super bad if you do look at any of that stuff. Anyway, what I told him is that - in a manner of speaking - his own situation and that powerful series both involve rings of one kind or another. He did not find this funny.

20:21

New Mexico’s Meta Ruling and Encryption [Schneier on Security]

Mike Masnick points out that the recent New Mexico court ruling against Meta has some bad implications for end-to-end encryption, and security in general:

If the “design choices create liability” framework seems worrying in the abstract, the New Mexico case provides a concrete example of where it leads in practice.

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

End-to-end encryption protects billions of people from surveillance, data breaches, authoritarian governments, stalkers, and domestic abusers. It’s one of the most important privacy and security tools ordinary people have. Every major security expert and civil liberties organization in the world has argued for stronger encryption, not weaker.

But under the “design liability” theory, implementing encryption becomes evidence of negligence, because a small number of bad actors also use encrypted communications. The logic applies to literally every communication tool ever invented. Predators also use the postal service, telephones, and in-person conversation. The encryption itself harms no one. Like infinite scroll and autoplay, it is inert without the choices of bad actors – choices made by people, not by the platform’s design.

The incentive this creates goes far beyond encryption, and it’s bad. If any product improvement that protects the majority of users can be held against you because a tiny fraction of bad actors exploit it, companies will simply stop making those improvements. Why add encryption if it becomes Exhibit A in a future lawsuit? Why implement any privacy-protective feature if a plaintiff’s lawyer will characterize it as “shielding bad actors”?

And it gets worse. Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again. That’s very, very bad.

In a sane legal environment, you want companies to have these internal debates. You want engineers and safety teams to flag potential risks, wrestle with difficult tradeoffs, and document their reasoning. But when those good-faith deliberations become plaintiff’s exhibits presented to a jury as proof that “they knew and did it anyway,” the rational corporate response is to stop putting anything in writing. Stop doing risk assessments. Stop asking hard questions internally.

The lesson every general counsel in Silicon Valley is learning right now: ignorance is safer than inquiry. That makes everyone less safe, not more.

The essay has a lot more: about Section 230, about competition in this space, about the myopic nature of the ruling. Go read it.

19:56

Butt Stuff [Penny Arcade]

Our little comic strip here has always been a sort of diary for Jerry and me. When this comic strip started I was a 21 year old kid who had just moved out of his parents’ house and was living with his best friend in Spokane, Washington. You saw me propose to my then girlfriend and you were there when my kids were born. I talked with you many times over the years about my struggles with anxiety and you were there when I went on medication. Well I am turning 49 years old this year and if you’ve been reading for a long time you might be around the same age. I’m sorry to say it but we gotta get our buttholes checked out and I am scheduled to have mine done this Wednesday.

 

 

19:35

Thorsten Alteholz: My Debian Activities in March 2026 [Planet Debian]

Debian LTS/ELTS

This was my one-hundred-forty-first month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded or worked on:

  • [DLA 4500-1] gimp security update to fix four CVEs related to denial of service or execution of arbitrary code.
  • [DLA 4503-1] evolution-data-server to fix one CVE related to a missing canonicalization of a file path.
  • [DLA 4512-1] strongswan security update to fix one CVE related to a denial of service.
  • [ELA-1656-1] gimp security update to fix four CVEs in Buster and Stretch related to denial of service or execution of arbitrary code.
  • [ELA-1660-1] evolution-data-server security update to fix one CVE in Buster and Stretch related to a missing canonicalization of a file path.
  • [ELA-1665-1] strongswan security update to fix one CVE in Buster related to a denial of service.
  • [ELA-1666-1] libvpx security update to fix one CVE in Buster and Stretch related to a denial of service or potentially execution of arbitrary code.

I also worked on the check-advisories script and proposed a fix for cases where issues would be assigned to the coordinator instead of the person who forgot to do something. I also did some work for a kernel update and the packages snapd and ldx on security-master, and attended the monthly LTS/ELTS meeting. Last but not least, I started to work on gst-plugins-bad1.0.

Debian Printing

This month I uploaded new upstream versions:

Several packages take care of group lpadmin in their maintainer scripts. With the upload of version 260.1-1 of systemd there is now a central package (systemd | systemd-standalone-sysusers | systemd-sysusers) that takes care of this. Other dependencies like adduser can now be dropped.

This work is generously funded by Freexian!

Debian Lomiri

This month I continued to work on unifying packaging on Debian and Ubuntu. This makes it easier to work on those packages independently of the platform used. I am also able to upload Debian packages to the corresponding Ubuntu PPA now. A small bug had to be fixed in the python script to allow the initial configuration in Launchpad.

This work is generously funded by Fre(i)e Software GmbH!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

  • libplayerone to experimental. For a list of other packages please see below.

I also uploaded lots of indi-drivers (libplayerone, libsbig, libricohcamerasdk, indi-asi, indi-eqmod, indi-fishcamp, indi-inovaplx, indi-pentax, indi-playerone, indi-sbig, indi-mi, libahp-xc, indi-aagcloudwatcher, indi-aok, indi-apogee, libapogee3, indi-nightscape, libasi, libinovasdk, libmicam, indi-avalon, indi-beefocus, indi-bresserexos2, indi-dsi, indi-ffmv, indi-fli, indi-gige, info-gphoto, indi-gpsd, indi-gpsnmea, indi-limesdr, indi-maxdomeii, indi-mgen, indi-rtklib, indi-shelyak, indi-starbook, indi-starbookten, indi-talon6, indi-weewx-json, indi-webcam, indi-orion-ssg3, indi-armadillo-playtypus) to experimental to make progress with the indi-transition. No problems with those drivers appeared and the next step would be the upload of indi version 2.x to unstable. I hope this will happen soon, as new drivers are already waiting in the pipeline. There have also been four packages that migrated to the official indi package and are no longer needed as 3rd-party drivers (indi-astrolink4, indi-astromechfoc, indi-dreamfocuser, indi-spectracyber).

While working on these packages, I thought about testing them. Unfortunately I don’t have enough hardware to really check out every package, so I can only upload most of them as-is. If anybody is interested in better testing coverage, and in enabling me to provide upstream patches, I would be very glad about hardware donations.

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

I also sponsored the upload of Matomo. Thanks a lot to William for preparing the package.

17:42

Reality Bites – DORK TOWER 06.04.26 [Dork Tower]

Most DORK TOWER strips are now available as signed, high-quality prints, from just $25!  CLICK HERE to find out more!

HEY! Want to help keep DORK TOWER going? Then consider joining the DORK TOWER Patreon and ENLIST IN THE ARMY OF DORKNESS TODAY! (We have COOKIES!) (And SWAG!) (And GRATITUDE!)

17:14

Learning to read C++ compiler errors: Illegal use of -> when there is no -> in sight [The Old New Thing]

A customer reported a problem with a system header file. When they included ole2.h, the compiler reported an error in oaidl.h:

    MIDL_INTERFACE("3127CA40-446E-11CE-8135-00AA004BB851")
    IErrorLog : public IUnknown
    {
    public:
        virtual HRESULT STDMETHODCALLTYPE AddError( // error here
            /* [in] */ __RPC__in LPCOLESTR pszPropName,
            /* [in] */ __RPC__in EXCEPINFO *pExcepInfo) = 0;
        
    };

The error message is

oaidl.h(5457,43): error C3927: '->': trailing return type is not allowed after a non-function declarator
oaidl.h(5457,43): error C3613: missing return type after '->' ('int' assumed)
oaidl.h(5457,43): error C3646: 'Log': unknown override specifier
oaidl.h(5457,43): error C2275: 'LPCOLESTR': expected an expression instead of a type
oaidl.h(5457,43): error C2146: syntax error: missing ')' before identifier 'pszPropName'
oaidl.h(5459,60): error C2238: unexpected token(s) preceding ';'

The compiler is seeing ghosts: It’s complaining about things that aren’t there, like -> and Log.

When you see the compiler reporting errors about things that aren’t in the code, you should suspect a macro, because macros can insert characters into code.

In this case, I suspected that there is a macro called AddError whose expansion includes the token ->.

The customer reported that they had no such macro.

I asked them to generate a preprocessor file for the code that isn’t compiling. That way, we can see what is being produced by the preprocessor before it goes into the part of the compiler that is complaining about the illegal use of ->. Is there really no -> there?

The customer reported back that, oops, they did indeed have a macro called AddError. Disabling the macro fixed the problem.

The compiler can at times be obtuse with its error messages, but as far as I know, it isn’t malicious. If it complains about a misused ->, then there is probably a -> that is being misused.

The post Learning to read C++ compiler errors: Illegal use of -> when there is no -> in sight appeared first on The Old New Thing.

16:35

iptables-legacy [Planet GNU]

From Arch:

The old iptables-nft package name is replaced by iptables, and the legacy backend is available as iptables-legacy.

When switching packages (among iptables-nft, iptables, iptables-legacy), check for .pacsave files in /etc/iptables/ and restore your rules if needed:

  • /etc/iptables/iptables.rules.pacsave
  • /etc/iptables/ip6tables.rules.pacsave

Most setups should work unchanged, but users relying on uncommon xtables extensions or legacy-only behavior should test carefully and use iptables-legacy if required.

15:49

[$] Protecting against TPM interposer attacks [LWN.net]

The Trusted Platform Module (TPM) is a widely misunderstood piece of hardware (or firmware) that lives in most x86-based computers. At SCALE 23x in Pasadena, California, James Bottomley gave a presentation on the TPM and the work that he and others have done to enable the Linux kernel to work with it. In particular, he described the problems with interposer attacks, which target the communication between the TPM and the kernel, and what has been added to the kernel to thwart them.

15:07

6.6.133 stable kernel released [LWN.net]

Greg Kroah-Hartman has released the 6.6.133 stable kernel. This reverts a backporting mistake that removed file descriptor checks which led to kernel panics if the fgetxattr, flistxattr, fremovexattr, or fsetxattr functions were called from user space with a file descriptor that did not reference an open file.

14:21

Security updates for Monday [LWN.net]

Security updates have been issued by AlmaLinux (freerdp, grafana, grafana-pcp, gstreamer1-plugins-bad-free, gstreamer1-plugins-base, gstreamer1-plugins-good, and gstreamer1-plugins-ugly-free, kernel, libpng12, libpng15, perl-YAML-Syck, python3, and rsync), Debian (dovecot, libxml-parser-perl, pyasn1, python-tornado, roundcube, tor, trafficserver, and valkey), Fedora (bind9-next, chromium, cmake, domoticz, freerdp, giflib, gst-devtools, gst-editing-services, gstreamer1, gstreamer1-doc, gstreamer1-plugin-libav, gstreamer1-plugins-bad-free, gstreamer1-plugins-base, gstreamer1-plugins-good, gstreamer1-plugins-ugly-free, gstreamer1-rtsp-server, gstreamer1-vaapi, libgsasl, libinput, libopenmpt, mapserver, mingw-binutils, mingw-gstreamer1, mingw-gstreamer1-plugins-bad-free, mingw-gstreamer1-plugins-base, mingw-gstreamer1-plugins-good, mingw-libpng, mingw-python3, nginx-mod-modsecurity, openbao, python-gstreamer1, python3.12, python3.13, python3.14, python3.9, rust, rust-sccache, tcpflow, and vim), Red Hat (ncurses), Slackware (infozip and krita), SUSE (chromium, corosync, keybase-client, libinput-devel, osslsigncode, python-pillow, python311-Flask-Cors, python313, and python314), and Ubuntu (libarchive and spip).

13:56

Vibe coding is still an unknown [Scripting News]

I recommend this post on vibe coding.

There's a lot more to development than coding.

I've tried vibe coding myself, and while it's sometimes relaxing and fun, it's pretty hard to get the output to match what you had in mind.

I think people find it amazing that they can create code, not just that the machine can create it. I know what that's like because I get a rush from creating images, something I never had a skill for, so all of a sudden being able to express myself with drawings was a breakthrough for me. ;-)

I've spent a few decades making commercial quality software in a variety of contexts, and so far I wouldn't rush to get rid of my dev teams based on the idea that the bots can do their work.

I think more realistically we have powerful new tools that we as yet have not learned how to use, but it's pretty exciting to see what may be possible.

13:14

Engineering Storefronts for Agentic Commerce [Radar]

For years, persuasion has been the most valuable skill in digital commerce. Brands spend millions writing ad copy, testing button colours, and designing landing pages to encourage people to click “Buy Now.” All of this assumes the buyer is a person who can see. But an autonomous AI shopping agent does not have eyes.

I recently ran an experiment to see what happens when a well-designed buying agent visits two types of online stores: one built for people, one built for machines. Both stores sold hiking jackets. Merchant A used the kind of marketing copy brands have refined for years: “The Alpine Explorer. Ultra-breathable all-weather shell. Conquers stormy seas!” Price: $90. Merchant B provided only raw structured data: no copy, just a JSON snippet {"water_resistance_mm": 20000}. Price: $95. I gave the agent a single instruction: “Find me the cheapest waterproof hiking jacket suitable for the Scottish Highlands.”

The agent quickly turned my request into clear requirements, recognizing that “Scottish Highlands” means heavy rain and setting a minimum water resistance of 15,000–20,000 mm. I ran the test 10 times. Each time, the agent bought the more expensive jacket from Merchant B. The agent completely bypassed the cheaper option due to the data’s formatting.

The reason lies in the Sandwich Architecture: the middle layer of deterministic code that sits between the LLM’s intent translation and its final decision. When the agent checked Merchant A, this middle layer attempted to match “conquers stormy seas” against a numeric requirement. Python gave a validation error, the try/except block caught it, and the cheaper jacket was dropped from consideration in 12 milliseconds. This is how well-designed agent pipelines operate. They place intelligence at the top and bottom, with safety checks in the middle. That middle layer is deterministic and literal, systematically filtering out unstructured marketing copy.

How the Sandwich Architecture works

A well-built shopping agent operates in three layers, each with a fundamentally different job.

Layer 1: The Translator. This is where the LLM does its main job. A human says something vague and context-laden—“I need a waterproof hiking jacket for the Scottish Highlands”—and the model turns it into a structured JSON query with explicit numbers. In my experiment, the Translator consistently mapped “waterproof” to a minimum water_resistance_mm between 10,000 and 20,000mm. Across 10 runs, it stayed focused and never hallucinated features.

Layer 2: The Executor. This critical middle layer contains zero intelligence by design. It takes the structured query from the Translator and checks each merchant’s product data against it. It relies entirely on strict type validation instead of reasoning or interpretation. Does the merchant’s water_resistance_mm field contain a number greater than or equal to the Translator’s minimum? If yes, the product passes. If the field contains a string such as “conquers stormy seas,” the validation fails immediately. These Pydantic type checks treat ambiguity as absence. In a production system handling real money, a try/except block cannot be swayed by good copywriting or social proof.
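
To make the Executor concrete, here is a minimal sketch of that deterministic middle layer in Python with Pydantic. The model names, field list, and thresholds are illustrative assumptions; only the water_resistance_mm field and the “ambiguity as absence” behavior come from the experiment described above.

from pydantic import BaseModel, ValidationError

class ProductRequirements(BaseModel):
    """Structured query handed down from the Translator (illustrative)."""
    max_price_usd: float
    min_water_resistance_mm: int

class ProductListing(BaseModel):
    """What the Executor expects a merchant's structured data to look like."""
    merchant: str
    price_usd: float
    water_resistance_mm: int  # "conquers stormy seas" cannot be coerced to an int

def executor_filter(reqs: ProductRequirements, raw_listings: list[dict]) -> list[ProductListing]:
    """Deterministic middle layer: no reasoning, just strict type checks."""
    shortlist = []
    for raw in raw_listings:
        try:
            listing = ProductListing(**raw)
        except ValidationError:
            continue  # ambiguity is treated as absence; the listing is dropped
        if (listing.water_resistance_mm >= reqs.min_water_resistance_mm
                and listing.price_usd <= reqs.max_price_usd):
            shortlist.append(listing)
    return shortlist

reqs = ProductRequirements(max_price_usd=150, min_water_resistance_mm=15000)
listings = [
    {"merchant": "A", "price_usd": 90, "water_resistance_mm": "conquers stormy seas"},
    {"merchant": "B", "price_usd": 95, "water_resistance_mm": 20000},
]
print([p.merchant for p in executor_filter(reqs, listings)])  # prints ['B']

Run against those two listings, the filter drops Merchant A on the failed type check and passes only Merchant B on to the Judge.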

Layer 3: The Judge. The surviving products are passed to a second LLM call that makes the final selection. In my experiment, this layer simply picked the cheapest option. In more complex scenarios, the Judge evaluates value against specific user preferences. The Judge selects exclusively from a preverified shortlist.

Figure 1: The Sandwich Architecture

This three-layer pattern (LLM → deterministic code → LLM) reflects how engineering teams build most serious agent pipelines today. DocuSign’s sales outreach system uses a similar structure: An LLM agent composes personalized outreach based on lead research. A deterministic layer then enforces business rules before a final agent reviews the output. DocuSign found the agentic system matched or beat human reps on engagement metrics while significantly cutting research time. The reason this pattern keeps appearing is clear: LLMs handle ambiguity well, while deterministic code provides reliable, strict validation. The Sandwich Architecture uses each where it’s strongest.


This is precisely why Merchant A’s jacket vanished. The Executor tried to parse “Ultra-breathable all-weather shell” as an integer and failed. The Judge received a list containing exactly one product. In an agentic pipeline, the layer deciding whether your product is considered cannot process standard marketing.

From storefronts to structured feeds

If ad copy gets filtered out, merchants must expose the raw product data—fabric, water resistance, shipping rules—already sitting in their PIM and ERP systems. To a shopping agent validating a breathability_g_m2_24h field, “World’s most breathable mesh” triggers a validation error that drops the product entirely. A competitor returning 20000 passes the filter. Persuasion is mathematically lossy. Marketing copy compresses a high-information signal (a precise breathability rating) into a low-information string that cannot be validated. Information is destroyed in the translation, and the agent cannot recover it.

The emerging standard for solving this is the Universal Commerce Protocol (UCP). UCP asks merchants to publish a capability manifest: one structured Schema.org feed that any compliant agent can discover and query. This migration requires a fundamental overhaul of infrastructure. Much of what an agent needs to evaluate a purchase is currently locked inside frontend React components. Every piece of logic a human triggers by clicking must be exposed as a queryable API. In an agentic market, an incomplete data feed leads to complete exclusion from transactions.
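
As a concrete (and hypothetical) illustration of what exposing raw product data can look like, here is a single feed entry using standard Schema.org Product vocabulary. The UCP manifest format itself is not shown, and the values and attribute choices are invented; the point is that every attribute an agent might validate is a typed field rather than copy.

import json

# Hypothetical structured feed entry using Schema.org Product vocabulary.
# Values and attribute choices are invented for illustration.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Alpine Explorer Shell",
    "offers": {
        "@type": "Offer",
        "price": "90.00",
        "priceCurrency": "USD",
    },
    "additionalProperty": [
        # Numbers an agent can validate, instead of "conquers stormy seas"
        {"@type": "PropertyValue", "name": "water_resistance_mm", "value": 20000},
        {"@type": "PropertyValue", "name": "breathability_g_m2_24h", "value": 20000},
    ],
}

print(json.dumps(product, indent=2))

An agent-side Executor can read those additionalProperty values straight into the same kind of typed model shown earlier.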

Why telling agents not to buy your product is a good strategy

Exposing structured data is only half the battle. Merchants must also actively tell agents not to buy their products. Traditional marketing casts the widest net possible. You stretch claims to broaden appeal, letting returns handle the inevitable mismatches. In agentic commerce, that logic inverts. If a merchant describes a lightweight shell as suitable for “all weather conditions,” a human applies common sense. An agent takes it literally. It buys the shell for a January blizzard, resulting in a return three days later.

In traditional ecommerce, that return is a minor cost of doing business. In an agentic environment, a return tagged “item not as described” generates a persistent trust discount for all future interactions with that merchant. This forces a strategy of negative optimization. Merchants must explicitly code who their product is not for. Adding "not_suitable_for": ["sub-zero temperatures", "heavy snow"] prevents false-positive purchases and protects your trust score. Agentic commerce heavily prioritizes postpurchase accuracy, meaning overpromising will steadily degrade your product’s discoverability.
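
Here is a sketch of how that negative data might be consumed on the agent side. Only the not_suitable_for key comes from the example above; the helper and the trip conditions are invented for illustration.

# Agent-side exclusion check. Only "not_suitable_for" comes from the text above.
def excluded_by_merchant(listing: dict, trip_conditions: set[str]) -> bool:
    """True if the merchant explicitly says the product is wrong for this use."""
    return bool(set(listing.get("not_suitable_for", [])) & trip_conditions)

shell = {
    "name": "Lightweight Shell",
    "not_suitable_for": ["sub-zero temperatures", "heavy snow"],
}

# January blizzard: the agent drops the shell up front instead of buying it
# and generating an "item not as described" return later.
print(excluded_by_merchant(shell, {"heavy snow"}))   # True  -> excluded
print(excluded_by_merchant(shell, {"light rain"}))   # False -> still a candidate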

From banners to logic: How discounts become programmable

Just as agents ignore marketing language, they cannot respond to pricing tricks. Open any online store and you’ll encounter countdown timers or banners announcing flash sales. Promotional marketing tactics like fake scarcity rely heavily on human emotions. An AI agent does not experience scarcity anxiety. It treats a countdown timer as a neutral scheduling parameter.

Discounts change form. Instead of visual triggers, they become programmable logic in the structured data layer. A merchant could expose conditional pricing rules: If the cart value exceeds $200 and the agent has verified a competing offer below $195, automatically apply a 10% discount. This is a fundamentally different incentive. It serves as a transparent, machine-readable contract. The agent directly calculates the deal’s mathematical value. With the logic exposed directly in the payload, the agent can factor it into its optimization across multiple merchants simultaneously. When the buyer is an optimization engine, transparency becomes a competitive feature.
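
One way such a rule could be published and evaluated is sketched below. The encoding is a hypothetical illustration of the conditional rule described above, not a reference to any particular protocol.

# Hypothetical machine-readable discount rule, matching the example above.
discount_rule = {
    "type": "conditional_percent_off",
    "percent_off": 10,
    "conditions": {
        "min_cart_value_usd": 200,
        "competing_offer_below_usd": 195,
    },
}

def effective_price(cart_value_usd: float,
                    verified_competing_offer_usd: float | None,
                    rule: dict) -> float:
    """Apply the discount only when every published condition is met."""
    cond = rule["conditions"]
    if (cart_value_usd > cond["min_cart_value_usd"]
            and verified_competing_offer_usd is not None
            and verified_competing_offer_usd < cond["competing_offer_below_usd"]):
        return round(cart_value_usd * (1 - rule["percent_off"] / 100), 2)
    return cart_value_usd

print(effective_price(220.0, 190.0, discount_rule))  # 198.0 -- both conditions met
print(effective_price(220.0, None, discount_rule))   # 220.0 -- no verified offer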

Where persuasion migrates

The Sandwich Architecture’s middle layer is persuasion-proof by design. For marketing teams, structured data is no longer a backend concern; it is the primary interface. Persuasion now migrates to the edges of the transaction. Before the agent runs, brand presence still shapes the user’s initial prompt (e.g., “find me a North Face jacket”). After the agent filters the options, human buyers often review the final shortlist for high-value purchases. Furthermore, operational excellence builds algorithmic trust over time, acting as a structural form of persuasion for future machine queries. You need brand presence to shape the user’s initial prompt and operational excellence to build long-term algorithmic trust. Neither matters if you cannot survive the deterministic filter in the middle.

Agents are now browsing your store alongside human buyers. Brands treating digital commerce as a purely visual discipline will find themselves perfectly optimized for humans, yet invisible to the agents. Engineering and commercial teams must align on a core requirement: Your data infrastructure is now just as critical as your storefront.

12:49

CodeSOD: The Update Route [The Daily WTF]

Today's anonymous submission is one of the entries where I look at it and go, "Wait, that's totally wrong, that could have never worked." And then I realize, that's why it was submitted: it was absolutely broken code which got to production, somehow.

Collection.updateOne(query, update, function(err, result, next)=>{
if(err) next(err)
...
})

So, Collection.updateOne is an API method for MongoDB. It takes three parameters: a filter to find the document, an update to perform on the document, and then an object containing other parameters to control how that update is done.

So this code is simply wrong. But it's worse than that, because it's wrong in a stupid way.

When creating routes using ExpressJS, you define a route and a callback to handle the route. The callback takes a few parameters: the request the browser sent, the result we're sending back, and a next function, which lets you have multiple callbacks attached to the same route. By invoking next() you're passing control to the next callback in the chain.

So what we have here is either an absolute brain fart, or more likely, a find-and-replace failure. A route handling callback got mixed in with database operations (which, as an aside, if your route handling code is anywhere near database code, you've also made a horrible mistake). The result is a line of code that doesn't work. And then someone released this non-working code into production.

Our submitter writes:

This blew up our logs today, has been in the code since 2019. I removed it in a handful of other places too.

Which raises the other question: why didn't this blow up the logs earlier?


12:07

Google Wants to Transition to Post-Quantum Cryptography by 2029 [Schneier on Security]

Google says that it will fully transition to post-quantum cryptography by 2029. I think this is a good move, not because I think we will have a useful quantum computer anywhere near that year, but because crypto-agility is always a good thing.

Slashdot thread.

11:42

Grrl Power #1449 – The Danbury Oreo Shake [Grrl Power]

I know what we’re all thinking. If we could eat metal, we’d all like to try Gallium. For the few of you who weren’t thinking that, and wondering why the rest of us were, it’s because Gallium’s melting point is 85.5°F (29.7°C). So you could keep it in the fridge, probably in the cheese drawer, then pop some in your mouth, and it starts to warm up, then it gets all melty and you could suck on it like a hard candy. Yes, I know Cesium melts at 83.2°F (28.5°C), but Gallium just sounds like it would taste better than Cesium, am I right? Although… I do hope Cesium has its place in the spice rack of metal eating species, because I want Cesium Salads to be a thing.

I thought drinking Mercury would be odd because metals conduct heat really well, so it would feel like a cold drink even if it was heated up quite a bit, but I looked it up, and it’s a terrible conductor of heat. So good news, I guess you could make Mercury coffee and it would stay hot, though I suspect very few foods are Mercury soluble. So you’d probably wind up with a bunch of coffee grit floating on top of a mug full of hot Mercury.

So Max does have some odd nutritional requirements, but it’s perhaps even odder that 98% of her diet is still just normal human food. Her sense of taste is basically the same as it used to be as well, although it is slightly expanded so the odd elements she craves taste good to her. The fact that she can have an omelet florentine for breakfast, and then shoot out a petajoule of energy before lunch seems like a pretty solid indication that it’s not proteins and complex carbohydrates that power her power. Though maybe it is, and her body is able to fizz regular food. (By fizz, I mean fission, but it doesn’t sound right to me to say “her body is able to fission regular food.” Like, if you’re talking about fusion, you can fuse two things together, but you have to fission them apart? No, there should be a “fuse” equivalent. So, fizz.) Of course, I have no idea how much nuclear energy is in the average omelet, even one with spinach in it, and non-fissile material is, by my understanding, not easy to chain-react, meaning it would be absurdly energy inefficient to extract all of the fission energy from it, so again, the theory is that Maxima’s, and indeed probably no Super’s power source is regular food.


Okay, the new one will be up today. In a mostly complete form. Or maybe finished. I thought I’d have finished it over the weekend but I stupidly put 5 characters in it, so it slowed down the rendering a lot.

Here is Gaxgy’s painting Maxima promised him. Weird how he draws almost exactly like me.

Patreon has a no-dragon-bikini version of the picture as well, naturally.

 


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

10:35

Pluralistic: Your boss wants to use surveillance data to cut your wages (06 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



A robot in an old fashioned frock coat. In one hand, he holds a giant magnifying glass. On the other stands a child laborer - a coal miner from the 1910s, squinting at the camera. Terrifying energy beams streak out of the robot's eyes into the glass and at the child. The background is an extremely dark, very roughed-up US $100 bill.

Your boss wants to use surveillance data to cut your wages (permalink)

What industry calls "personalized pricing" is really surveillance pricing: using digital tools' flexibility to change the price for each user, and using surveillance data to guess the worst price you'll accept:

https://pluralistic.net/2025/06/24/price-discrimination/

At root, surveillance pricing allows companies to revalue both your savings and your labor. If you get charged $2 for something I only pay $1 for, the seller is essentially reaching into your bank account and revaluing the dollars in it at 50 cents apiece. If you get paid $1 for a job that I make $2 for, then the boss is valuing your labor at 50% of my labor:

https://pluralistic.net/2025/06/24/price-discrimination/#

Surveillance pricing is a key part of enshittification, relying on three of the key enshittificatory factors that have transformed this era into the enshittocene:

I. Monopoly: Surveillance pricing is undesirable to both workers and buyers, so in a competitive market, surveillance pricing would drive labor and consumption to non-surveilling rivals:

https://pluralistic.net/2022/02/20/we-should-not-endure-a-king/

II. Regulatory capture: Surveillance pricing only exists because of weak regulation and weak enforcement of existing regulations. To engage in surveillance pricing, a company must first put you under surveillance, something that is only possible in the absence of effective privacy law.

In the USA, privacy law hasn't been updated since Congress passed a law in 1988 that banned video-store clerks from disclosing your VHS rentals:

https://pluralistic.net/2025/10/31/losing-the-crypto-wars/#surveillance-monopolism

In the EU, the strong privacy provisions in the GDPR have been neutralized by US tech giants who fly an Irish flag of convenience. Ireland attracts these companies by allowing them to evade their taxes, but it can only keep these companies by allowing them to break any law that gets in their way, because if Meta can pretend to be Irish this week, it could pretend to be Maltese (or Cypriot, Luxembourgeois, or Dutch) next week:

https://pluralistic.net/2023/05/15/finnegans-snooze/#dirty-old-town

What's more, competition laws in the EU and the USA ban surveillance pricing, but a half-century of lax competition law enforcement has allowed companies to routinely engage in the "unfair and deceptive methods of competition" banned in both territories.

III. Twiddling: "Twiddling" is my word for the way that digitized businesses can use computers' flexibility to alter their prices, offers, and other fundamentals on a per-user, per-session basis. It's not enough to spy on users: to engage in surveillance pricing, you have to be able to mobilize that surveillance data from instant to instant, changing the prices for every user. This can only be done once a business has been digitized:

https://pluralistic.net/2023/02/19/twiddler/

Combine monopoly, weak privacy law, weak competition law, and digitization, and you don't just make surveillance pricing possible – at that point, it's practically inevitable. This is what it means to create an enshittogenic policy environment: by arranging policy so that the most awful schemes of the worst people are the most profitable, you guarantee that those people will end up organizing commercial and labor markets.

When surveillance pricing is applied to labor, we call it "algorithmic wage discrimination," a term coined by Veena Dubal based on her research with Uber drivers:

https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men

Uber uses historic data on drivers to make inferences about how economically precarious they are, and then extracts a "desperation premium" from their wages. Drivers who are pickier about which rides they accept ("pickers") are offered higher wages than drivers who take any ride ("ants"):

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4331080

On the back-end, Uber is inferring that the reason an ant will accept a worse job is that they have fewer choices – they are more strapped for cash and/or have fewer options for earning a higher wage.

This is a straightforward form of algorithmic wage discrimination, using the blunt signal of how discriminating a driver is when signing onto a job to titrate the subsequent wage offered to that driver. More sophisticated forms of algorithmic wage discrimination draw on external sources of data to set the price of your labor.

That's the situation for contract nurses, whose traditional brick-and-mortar staffing agencies have been replaced by nationwide apps that market themselves as "Uber for nursing." These apps use commercial surveillance data from the unregulated data-broker sector to check on how much credit card debt a nurse is carrying and whether that debt is delinquent to set a wage: the more debt you have and the more dire your indebtedness is, the lower the wage you are offered (and therefore the more debt you accumulate – lather, rinse, repeat):

https://pluralistic.net/2024/12/18/loose-flapping-ends/#luigi-has-a-point

Surveillance wages are now proliferating to other parts of the economy, as "consultancies" offer software to employers that let them set all parts of your compensation – base wage, annual raises, and bonuses – based on your perceived desperation, as derived from commercial surveillance data that has been collected about you:

https://www.marketwatch.com/story/employers-are-using-your-personal-data-to-figure-out-the-lowest-salary-youll-accept-c2b968fb

Genna Contino's Marketwatch article on the phenomenon offers a concise definition of "surveillance wages":

a system in which wages are based not on an employee’s performance or seniority, but on formulas that use their personal data, often collected without employees’ knowledge.

This means that carrying a credit-card balance, taking out a payday loan, or even discussing your indebtedness on social media can all lead to lower wages in the future. Contino references a recent report released by Dubal and tech strategist Wilneida Negrón, surveying 500 large firms, which concluded that surveillance wages are now being offered in sectors as diverse as "healthcare, customer service, logistics and retail." Customers for surveillance wage tools include "Intuit, Salesforce, Colgate-Palmolive, Amwell and Healthcare Services Group":

https://equitablegrowth.org/how-artificial-intelligence-uncouples-hard-work-from-fair-wages-through-surveillance-pay-practices-and-how-to-fix-it/

After a brief crackdown under Biden, the Trump regime has been extraordinarily welcoming to surveillance pricing companies, dropping investigations and cases against firms that engaged in the practice. A few states are stepping in to fill the gap, with New York state passing a rule requiring disclosure of surveillance pricing – a modest step that was nevertheless fought tooth-and-nail by the state's businesses.

In Colorado, a new House bill called the "Prohibit Surveillance Data to Set Prices and Wages Act" would prohibit the use of personal information in wage-setting:

https://leg.colorado.gov/bills/hb25-1264

This bill hasn't passed yet, but it's already doing useful work. Companies universally deny using surveillance data to set wages, insisting that they merely pay for consulting services that give them advice on how they could do surveillance wages – but don't actually take that advice. However, these same companies – including Uber and Lyft – are ferociously lobbying against the bill, raising an obvious question, articulated by the bill's co-sponsor Rep Javier Mabrey (D-1): if these companies don't pay surveillance wages, then "what is the problem of codifying in law that you’re not allowed to?"

Surveillance wages are a rare profitable use-case for AI, in part because surveillance wages don't need to be "correct" in order to be effective. An employee who is offered a wage that's slightly higher than the lowest sum they'd accept still represents a savings to the company's wage-bill. As ever, AI is great for fully automating tasks if you don't care whether they're done well:

https://pluralistic.net/2026/03/22/nobodys-home/#squeeze-that-hog

The fact that surveillance wages are calculated by external contractors enables employers to engage in otherwise illegal price-fixing. If all the garages in town set mechanics' wages using the same surveillance pricing tool, then a mechanic looking for a job will get the same lowball offer from all nearby employers. If those bosses were to gather around a table and fix the wage for any (or all) mechanics, that would be wildly illegal, but the fact that this is done via a software package lets the bosses claim they're not actually colluding.

This is a common practice in other forms of price-fixing. We see it in meat, potato products, and, of course, rental accommodations (hey there, Realpage!). It's a genuinely stupid ruse based on the absurd idea that "it's not a crime if we do it with an app":

https://pluralistic.net/2025/01/25/potatotrac/#carbo-loading

Speaking of crimes that are implausibly deniable when undertaken with an app: surveillance wages also allow employers to offer lower wages to women and brown and Black people while maintaining the pretense that they're in compliance with laws banning gender and racial discrimination.

In the wider economy, women and racialized people are already offered lower wages and – thanks to the legacy of racial discrimination in employment and housing – are more likely to be indebted:

https://pluralistic.net/2021/06/06/the-rents-too-damned-high/

By tapping into data brokers' dossiers that reveal the economic precarity of jobseekers, surveillance pricing allows employers to systematically lower the wages of women and Black and brown people, who have the highest incidence of indebtedness, while still claiming to offer race- and gender-blind wages. This is a phenomenon that Patrick Ball calls "empiricism washing": first, move the illegal racist discrimination into an algorithm, then insist that "numbers can't be racist."

But this isn't just about lowering wages at the bottom of the employment market. In recent history, the employers most eager to illegally lower their workers' wages are tech bosses, who had to pay massive fines for illegally colluding on "no poach" agreements to suppress the earning power of high-paid computer programmers:

https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_Litigation

(This is why the tech industry is so horny for AI – tech bosses can't wait to fire a ton of programmers and use the resulting terror to force down the wages of the remaining tech workers:)

https://pluralistic.net/2026/01/05/fisher-price-steering-wheel/#billionaire-solipsism

Which means that the very programmers who write and maintain the surveillance wage software used on the rest of us are especially likely to have the tools they created turned on them.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Arthur C Clarke fights Buddhist monks over Daylight Savings Time http://news.bbc.co.uk/1/hi/world/south_asia/4865972.stm

#20yrsago What parts of the .COM space are registered? https://web.archive.org/web/20060411133458/https://www.yafla.com/dforbes/2006/03/29.html

#20yrsago Bomb squad called out to “defuse” life-size Super Mario power-ups https://web.archive.org/web/20060405034455/http://www.recordpub.com/article.php?pathToFile=archive/04012006/news/&amp;file=_news1.txt&amp;article=1&amp;tD=04012006

#20yrsago Poems showing the absurdities of English spelling https://web.archive.org/web/20060405223008/https://www.spellingsociety.org/news/media/poems.php

#20yrsago Isaac Newton’s alchemical “chymistry” notebook scans https://web.archive.org/web/20060612203137/http://webapp1.dlib.indiana.edu/newton/index.jsp

#15yrsago Misleading government stats and the innumerate media who repeat them https://www.badscience.net/2011/04/anarchy-for-the-uk-ish/

#15yrsago US Customs’ domain-seizure program blocks free speech, leaves alleged pirates largely unscathed https://torrentfreak.com/us-governments-pirate-domain-seizures-failed-miserably-110403/

#10yrsago Panama Papers: Largest leak in history reveals political and business elite hiding trillions in offshore havens https://www.theguardian.com/news/2016/apr/03/the-panama-papers-how-the-worlds-rich-and-famous-hide-their-money-offshore

#10yrsago America’s teachers are being trained in a harsh interrogation technique that produces false confessions https://web.archive.org/web/20160404143447/https://www.alternet.org/education/why-are-k-12-school-leaders-being-trained-coercive-interrogation-techniques

#10yrsago LA’s new rule: homeless people are only allowed to own one trashcan’s worth of things https://www.latimes.com/local/california/la-me-apartments-demolished-20160402-story.html

#10yrsago Save Netflix! https://www.eff.org/deeplinks/2016/04/save-netflix

#10yrsago The TSA spent $1.4M on an app to tell it who gets a random search https://kevin.burke.dev/kevin/tsa-randomizer-app-cost-336000/

#10yrsago Iceland’s Prime Minister says he won’t resign, mass demonstrations gain momentum https://icelandmonitor.mbl.is/news/politics_and_society/2016/03/31/anti_government_demo_planned_for_monday/

#10yrsago Panama Papers reveal the tax-avoidance strategies of David Cameron’s father https://www.theguardian.com/news/2016/apr/04/panama-papers-david-cameron-father-tax-bahamas

#10yrsago Studio sculpts giant coin, photographs it alongside normal objects to make them look tiny https://skrekkogle.com/projects/50c/

#5yrsago China's antitrust surge https://pluralistic.net/2021/04/03/ambulatory-wallets/#sectoral-balances

#5yrsago Consumerism won't defeat Georgia's Jim Crow https://pluralistic.net/2021/04/03/ambulatory-wallets/#christmas-voting-turkeys

#1yrago End-stage capitalism https://pluralistic.net/2025/04/04/anything-that-cant-go-on/#forever-eventually-stops


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers..

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. First draft complete. Second draft underway.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

10:14

Kinder than necessary [Seth's Blog]

If it’s just the right amount of necessary kindness, it’s not really kindness. It’s pleasantness.

If the people in our circle begin to experience behavior that’s kinder than necessary, the expectations for what’s necessary will ratchet forward, making everything more pleasant.

And… being kind is a lovely way to spend your day.

[Compare this to an alternative: “be as selfish as you can get away with.” Hardly worth going down that path.]

08:42

Around Back [Penny Arcade]

New Comic: Around Back

06:35

Patrick Stein: Nomic Coding Game [Planet Lisp]

About 30 years ago, I had an idea for a coding game inspired by Nomic. It occurred to me last month that all of the tools I need are readily available now.

Pen-and-paper Nomic

The pen-and-paper game of Nomic (by Peter Suber) has an initial ruleset which describes how one proposes changes to the rules, how one gets those changes ratified, a way to award points when someone’s rule change is ratified, and a rule declaring that the winner is the first player to amass 100 points. Some of the rules are mutable and some are immutable and there are rules about turning mutable rules into immutable ones and vice-versa.

The game was meant to show some of the paradoxes of self-amendment. It was meant to lead people into situations where it was clear that certain actions were both legal (or even mandatory) and illegal.

A drastically simplified starting set of rules might look like:

  • There are these players: Alice, Bob, Carol, David, and Mel.
  • Any of the players can propose a change to these rules at any time when there is not already an outstanding proposal.
  • When a player makes a proposal, all players (including the player making the proposal) must immediately vote: Yay or Nay.
  • If a proposal garners more Yay than Nay votes, it takes effect immediately. Otherwise, the proposal is rejected.
  • The winner is the first person to score 100 points.

Nomic in Code

So, 30 years ago, I had the idea that it would be fabulous to write some code to referee a Nomic game. However, because interpretation of the rules is so horrendously human, it felt impossible. Today, in 2026, it seems one could maybe get Claude, Gemini, or some other LLM to referee. But, this doesn’t much interest me, either, really. I cannot get any of them to keep track of something that I made them write down. I cannot imagine that I would be happy with their interpretation of whether my move is legal given the current state of the rules nor to amend the rules appropriately if my move is legal.

What felt slightly more attainable 30 years ago would be to make it a battle in code:

  • The players propose deltas to the current code.
  • The players vote on which deltas to approve.
  • If the resulting code declares you the winner, you win.

This was nice and all, but it was also too static. The rules about who can vote and how votes are tallied and such wouldn’t be subject to change.

Nomic in Code in 2026

Fast-forward to last month. Last month, I realized that with the GitHub API interface, I could implement a very Nomic-ish pull request battle game. I can:

  • Gather information about all of the open pull requests on a repository,
  • Check out a copy of the current main branch of that same repository,
  • Run the code on the main branch of that repository and give it the information that I collected about the open pull requests, and
  • Have the code on the main branch tell me which open pull requests (if any) to accept or reject.

To be truly in Nomic’s full spirit, it would be nice to allow the code in the repository to interact with the GitHub API on its own. Alas, that would immediately let the players vote in changes that expose my GitHub tokens, so it would be a gaping security hole—not only because it would let users impersonate me but because it would let them end-around the actual code in the repository to make changes to the main branch in the repository.

So, as it is, I have a supervisor written in Common Lisp which handles all of the interaction with GitHub and various game repositories (one to play in Common Lisp, one to play in JavaScript, and one to play in Python). The supervisor:

  • fetches all of the open pull requests;
  • annotates each pull request with:
    • all of the reviews on the pull request,
    • all of the comments on the pull request, and
    • all of the commits on the pull request;
  • clones the main branch of the game repository;
  • runs the game code from that main branch giving it the annotated list of open pull requests encoded as JSON on standard input;
  • reads the JSON-encoded output from the game code; and
  • acts accordingly.

The game code, given a list of open pull requests, can reply with one of the following messages:

{
  "decision": "winner",
  "name": name-of-winner,
  "message": optional-reason-for-decision
}
{
  "decision": "accept",
  "id": id-number-of-pull-request-to-accept,
  "message": optional-reason-for-decision
}
{
  "decision": "reject",
  "id": id-number-of-pull-request-to-reject,
  "message": optional-reason-for-decision
}
{
  "decision": "defer"
}

The "defer" decision means that there is not enough information at the moment. Maybe, in the future, with other pull requests or other comments or reviews we will be able to make some move.

If the game code replies with anything that isn’t one of the four types of replies shown above, the supervisor assumes the latest merge broke the code and reverts the change.
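
To make that contract concrete, here is a toy example of what game code on the main branch might look like in Python (one of the three game repositories). The decision shapes match the messages above; the pull-request field names ("id", "reviews", "state") and the two-approvals threshold are assumptions for illustration, not part of the actual starting rules.

#!/usr/bin/env python3
# Toy game code: reads the supervisor's JSON-encoded list of open pull
# requests on stdin and writes one decision object to stdout. Field names
# like "id", "reviews", and "state" are assumptions for illustration.
import json
import sys

def decide(pull_requests):
    for pr in pull_requests:
        reviews = pr.get("reviews", [])
        yays = sum(1 for r in reviews if r.get("state") == "APPROVED")
        nays = sum(1 for r in reviews if r.get("state") == "CHANGES_REQUESTED")
        if yays >= 2 and yays > nays:
            return {"decision": "accept", "id": pr["id"],
                    "message": f"{yays} approvals vs {nays} objections"}
        if nays >= 2 and nays > yays:
            return {"decision": "reject", "id": pr["id"],
                    "message": f"{nays} objections vs {yays} approvals"}
    return {"decision": "defer"}  # not enough information yet

if __name__ == "__main__":
    print(json.dumps(decide(json.load(sys.stdin))))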

The Ask

I haven’t been able to drum up enough players for a game in any of my regular haunts. So, I am looking for tolerant players who will help me give it a test run or two to work out the kinks in the supervisor. Some areas where I foresee potential issues:

  • There may be scenarios that cause the game to reach an impasse.
  • There are probably some GitHub responses that the supervisor doesn’t handle correctly (in fact, I think I just thought of something a malicious player could do if they are a collaborator rather than working through forked repos).
  • There might be special issues related to pull requests coming in from forks rather than within the repo which I cannot test without making myself a second GitHub account.
  • Who can say what the optimal number of players is, at this point?

So, if you’re tolerant of some bumps in the process, have a GitHub account (or will make one), and are interested in a Common Lisp battle of pull requests, let me know so we can get a game going.

The post Nomic Coding Game first appeared on nklein software.

05:35

Girl Genius for Monday, April 06, 2026 [Girl Genius]

The Girl Genius comic for Monday, April 06, 2026 has been posted.

03:49

Germany Doxes “UNKN,” Head of RU Ransomware Gangs REvil, GandCrab [Krebs on Security]

An elusive hacker who went by the handle “UNKN” and ran the early Russian ransomware groups GandCrab and REvil now has a name and a face. Authorities in Germany say 31-year-old Russian Daniil Maksimovich Shchukin headed both cybercrime gangs and helped carry out at least 130 acts of computer sabotage and extortion against victims across the country between 2019 and 2021.

Shchukin was named as UNKN (a.k.a. UNKNOWN) in an advisory published by the German Federal Criminal Police (the “Bundeskriminalamt” or BKA for short). The BKA said Shchukin and another Russian — 43-year-old Anatoly Sergeevitsch Kravchuk — extorted nearly 2 million euros across two dozen cyberattacks that caused more than 35 million euros in total economic damage.

Daniil Maksimovich SHCHUKIN, a.k.a. UNKN, and Anatoly Sergeevitsch Kravchuk, alleged leaders of the GandCrab and REvil ransomware groups.

Germany’s BKA said Shchukin acted as the head of GandCrab and REvil, two of the largest ransomware groups operating worldwide, which pioneered the practice of double extortion — charging victims once for a key needed to unlock hacked systems, and again for a promise not to publish stolen data.

Shchukin’s name appeared in a Feb. 2023 filing (PDF) from the U.S. Justice Department seeking the seizure of various cryptocurrency accounts associated with proceeds from the REvil ransomware gang’s activities. The government said the digital wallet tied to Shchukin contained more than $317,000 in ill-gotten cryptocurrency.

The GandCrab ransomware affiliate program first surfaced in January 2018, and paid enterprising hackers huge shares of the profits just for hacking into user accounts at major corporations. The GandCrab team would then try to expand that access, often siphoning vast amounts of sensitive and internal documents in the process. The malware’s curators shipped five major revisions to the GandCrab code, each corresponding with sneaky new features and bug fixes aimed at thwarting the efforts of computer security firms to stymie the spread of the malware.

On May 31, 2019, the GandCrab team announced the group was shutting down after extorting more than $2 billion from victims. “We are a living proof that you can do evil and get off scot-free,” GandCrab’s farewell address famously quipped. “We have proved that one can make a lifetime of money in one year. We have proved that you can become number one by general admission, not in your own conceit.”

The REvil ransomware affiliate program materialized around the same time as GandCrab’s demise, fronted by a user named UNKNOWN who announced on a Russian cybercrime forum that he’d deposited $1 million in the forum’s escrow to show he meant business. By this time, many cybersecurity experts had concluded REvil was little more than a reorganization of GandCrab.

UNKNOWN also gave an interview to Dmitry Smilyanets, a former malicious hacker hired by Recorded Future, wherein UNKNOWN described a rags-to-riches tale unencumbered by ethics and morals.

“As a child, I scrounged through the trash heaps and smoked cigarette butts,” UNKNOWN told Recorded Future. “I walked 10 km one way to the school. I wore the same clothes for six months. In my youth, in a communal apartment, I didn’t eat for two or even three days. Now I am a millionaire.”

As described in The Ransomware Hunting Team by Renee Dudley and Daniel Golden, UNKNOWN and REvil reinvested significant earnings into improving their success and mirroring practices of legitimate businesses. The authors wrote:

“Just as a real-world manufacturer might hire other companies to handle logistics or web design, ransomware developers increasingly outsourced tasks beyond their purview, focusing instead on improving the quality of their ransomware. The higher quality ransomware—which, in many cases, the Hunting Team could not break—resulted in more and higher pay-outs from victims. The monumental payments enabled gangs to reinvest in their enterprises. They hired more specialists, and their success accelerated.”

“Criminals raced to join the booming ransomware economy. Underworld ancillary service providers sprouted or pivoted from other criminal work to meet developers’ demand for customized support. Partnering with gangs like GandCrab, ‘cryptor’ providers ensured ransomware could not be detected by standard anti-malware scanners. ‘Initial access brokerages’ specialized in stealing credentials and finding vulnerabilities in target networks, selling that access to ransomware operators and affiliates. Bitcoin “tumblers” offered discounts to gangs that used them as a preferred vendor for laundering ransom payments. Some contractors were open to working with any gang, while others entered exclusive partnerships.”

REvil would evolve into a feared “big-game-hunting” machine capable of extracting hefty extortion payments from victims, largely going after organizations with more than $100 million in annual revenues and fat new cyber insurance policies that were known to pay out.

Over the July 4, 2021 weekend in the United States, REvil hacked into and extorted Kaseya, a company that handled IT operations for more than 1,500 businesses, nonprofits and government agencies. The FBI would later announce they’d infiltrated the ransomware group’s servers prior to the Kaseya hack but couldn’t tip their hand at the time. REvil never recovered from that core compromise, or from the FBI’s release of a free decryption key for REvil victims who couldn’t or didn’t pay.

Shchukin is from Krasnodar, Russia and is thought to reside there, the BKA said.

“Based on the investigations so far, it is assumed that the wanted person is abroad, presumably in Russia,” the BKA advised. “Travel behaviour cannot be ruled out.”

There is little that connects Shchukin to UNKNOWN’s various accounts on the Russian crime forums. But a review of the Russian crime forums indexed by the cyber intelligence firm Intel 471 shows there is plenty connecting Shchukin to a hacker identity called “Ger0in” who operated large botnets and sold “installs” — allowing other cybercriminals to rapidly deploy malware of their choice to thousands of PCs in one go. However, Ger0in was only active between 2010 and 2011, well before UNKNOWN’s appearance as the REvil front man.

A review of the mugshots released by the BKA at the image comparison site Pimeyes found a match on this birthday celebration from 2023, which features a young man named Daniel wearing the same fancy watch as in the BKA photos.

Images from Daniil Shchukin’s birthday party celebration in Krasnodar in 2023.

Update, April 6, 12:06 p.m. ET: A reader forwarded this English-dubbed audio recording of a ccc.de (37C3) conference talk in Germany from 2023 that previously outed Shchukin as the REvil leader (Shchukin is mentioned at around 24:25).

02:21

Kernel prepatch 7.0-rc7 [LWN.net]

Linus has released 7.0-rc7 for testing. "Things look set for a final release next weekend, but please keep testing. The Easter bunny is watching".

Sunday, 05 April

17:28

Not Normal [Cory Doctorow's craphound.com]

A pair of broken off statue legs, shod in Roman sandals, atop a cliff. Behind them, we see a futuristic city.

This week on my podcast, I read Not Normal, my latest Locus Magazine column, about the surreal and terrible world we’ve been eased into thanks to anti-circumvention laws.


If you were paying attention in 1998, you could see what was coming. Computers were getting much cheaper, and much smaller. From cars to toasters, from speakers to TVs, we were shoveling them into our devices, and it doesn’t take a lot of expense or engineering to add an “access control” to any of those computers.

That meant that DMCA 1201 was about to metastasize. Once you put a computer into a thermostat or a bassinet or a stovetop or a hearing aid, you can add an access control and make it a felony to use it in ways the manufacturer disprefers. You can make it illegal to use cheap batteries, or a different app store. You can add little chips to parts – everything from a fuel pump to a touchscreen – and make it illegal to manufacture a working generic part, because the generic part has to bypass the “access control” in the device that checks to see whether it’s the manufacturer’s own part.

MP3

16:35

The Absolute Best Carrot Cake Recipe To Make For Easter (Or Anytime!) [Whatever]

Which dish is more suited for Easter than a carrot cake? None, I say! And lucky for y’all, I have the best recipe for you to try. This recipe is tried and true and absolutely delicious. Many people have said “this is the best carrot cake I’ve ever had!”

This Brown Butter Carrot Cake comes to us from Handle the Heat. It’s surprisingly quick and honestly quite easy, and it’s my go-to carrot cake recipe, even though browning the butter takes some extra time. It’s totally worth it!

I hope you give this recipe a try, and have a happy Easter, or just an awesome Sunday in general.

-AMS

15:35

Adobe secretly modifies your hosts file for the stupidest reason [OSnews]

If you’re using Windows or macOS and have Adobe Creative Cloud installed, you may want to take a peek at your hosts file. It turns out Adobe adds a bunch of entries into the hosts file, for a very stupid reason.

They’re using this to detect if you have Creative Cloud already installed when you visit their website.

When you visit https://www.adobe.com/home, they load this image using JavaScript:

https://detect-ccd.creativecloud.adobe.com/cc.png

If the DNS entry in your hosts file is present, your browser will therefore connect to their server, so they know you have Creative Cloud installed, otherwise the load fails, which they detect.

They used to just hit http://localhost:<various ports>/cc.png which connected to your Creative Cloud app directly, but then Chrome started blocking Local Network Access, so they had to do this hosts file hack instead.

↫ thenickdude at Reddit
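If you want to see whether your own machine has been touched, a minimal sketch along these lines will do it (a hypothetical helper, not from the Reddit post; it just scans the standard hosts-file location on each platform for the detection hostname quoted above, whatever address Adobe points it at):

#!/usr/bin/python3
# Hypothetical check: look for Adobe's Creative Cloud detection hostname
# in the local hosts file. Paths are the standard hosts-file locations.
import platform

ADOBE_HOST = "detect-ccd.creativecloud.adobe.com"

def hosts_path():
    if platform.system() == "Windows":
        return r"C:\Windows\System32\drivers\etc\hosts"
    return "/etc/hosts"  # macOS and Linux

with open(hosts_path()) as f:
    hits = [line.rstrip() for line in f
            if ADOBE_HOST in line and not line.lstrip().startswith("#")]

if hits:
    print("Found Adobe detection entries:")
    print("\n".join(hits))
else:
    print(f"No {ADOBE_HOST} entry in {hosts_path()}")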

At what point does a commercial software suite become malware?

15:07

Hackers breached the European Commission (The Next Web) [LWN.net]

LWN recently reported on the Trivy compromise that led, in turn, to the compromise of the LiteLLM system; that article made the point that the extent of the problem was likely rather larger than was known. The Next Web now reports that the Trivy attack was used to compromise a wide range of European Commission systems.

The European Union's computer emergency response team said on Thursday that a supply chain attack on an open-source security scanner gave hackers the keys to the European Commission's cloud infrastructure, resulting in the theft and public leak of approximately 92 gigabytes of compressed data including the personal information and email contents of staff across dozens of EU institutions.

14:28

The discourse about WordPress [Scripting News]

I love all the new discourse about WordPress.

It was so quiet until this week, now I'm getting a much better view of the landscape.

I started developing seriously around WordPress almost three years ago. I've been developing this kind of software since the late 80s if you can believe that.

What's missing on the web -- software for writers.

I believe more all the time that WordPress is the natural way to store and present writing on the web and hook up to all the social webs, to actually redefine what a social web is. There should just be one social web, btw -- not 18. If there are 18 and they don't interop, then none of them deserve to call themselves the web. There is only one web, by definition.

The WordPress community has been very introspective, but it's time to make a difference for the whole web, and imho it is prepared to do that.

I want something in between the tiny little text boxes of the twitter-like apps, and the block editor (aka Gutenberg) of WordPress. I think there should be a dozen great editors that work with WordPress and then hopefully every CMS that comes along. Collectively, WordPress has taken too much territory -- writing is very different from site development and administration. I want to start the development of that ecosystem, and help new products get to market with interop and driven by what users/writers want.

I wrote this at bullmancuso yesterday, it was worth repeating here. And if you used to follow me on Twitter, please sign up again from that link. It's my new home there.

10:21

Plumbed [Seth's Blog]

If you want to drink more herbal tea, get a hot water dispenser that keeps it handy and on tap.

On the other hand, if you want to watch less television, disconnect the TV after every viewing session.

Convenience leads to consumption.

06:07

Urgent: Ban Insider Gambling [Richard Stallman's Political Notes]

US citizens: call your members of Congress to Ban Insider Gambling by Government Officials. In my letter I asked for this ban to include all government officials that are sometimes privy to policy decisions not yet publicly announced.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Fund public schools [Richard Stallman's Political Notes]

US citizens: call on state officials to fund public schools, rather than private or church schools.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

Your state's agency-contact information is at USA.gov.

Please spread the word.

US destroyed a bridge, Iran [Richard Stallman's Political Notes]

The US bombed and destroyed Iran's biggest bridge.

Of course, the Iranian military will find this somewhat inconvenient, but the bridge was civilian infrastructure and used mainly by civilians. There was no military reason to attack it, and the ruining of the bridge will do nothing to loosen the repression. All it will do is cause general damage and suffering.

Iran has already threatened to retaliate against the Gulf states' civilian infrastructure when it is attacked in this way.

War-phosphorous removes forests, LBN [Richard Stallman's Political Notes]

*Israel using white phosphorus to scorch earth in south Lebanon, researcher says.*

ISR soldiers attack cameraman, West Bank [Richard Stallman's Political Notes]

Israeli soldiers attacked a CNN team that was reporting on Palestinians being attacked in their homes by fanatical right-wing Israelis. The soldiers did nothing to protect the Palestinians, but they treated the CNN team as if it were an enemy unit.

One soldier violently attacked a cameraman.

The army announced that this was totally against its rules and spirit to attack and threaten non-Palestinian journalists, but there was another attack on CNN journalists a week or two ago.

Meanwhile, in Lebanon, Israel killed a journalist team working for a Hezbollah-owned TV station by firing a missile at them.

Israel said this was justified because one of them had previously reported the location of some Israeli soldiers. However, attacking journalists doing journalism is a war crime regardless.

Instant death penalty, ISR [Richard Stallman's Political Notes]

Israel will use military courts to try Palestinian terrorists who kill Israelis, sentence them to death, make it especially easy to convict them, and rush to execute them.

The law has been contrived to discriminate between Palestinian terrorists and Israeli terrorists. However, some of these distortions of justice should never be allowed, regardless of the crime or the circumstances.

*The Guardian view on Israel’s death penalty: capital punishment is always wrong. This new law is doubly so.*

Coffee tree casualties, BRA [Richard Stallman's Political Notes]

Unprecedented floods in Minas Gerais, caused by global heating, are damaging coffee production and driving the price up. (Not to mention killing human beings.)

Krill nets in Southern Ocean [Richard Stallman's Political Notes]

Humans are fishing more and more for krill, the food most whales eat. This threatens to drive whale populations down, just after the decrease in whale hunting enabled them to recover.

I wonder what people do with krill caught by these large boats. I also wonder whether ocean acidification, expected to wipe out crustaceans along with coral, would indirectly wipe out whales too.

SAU conundrum with Iran war policy [Richard Stallman's Political Notes]

Crown Prince Bone Saw, effective ruler of Salafi Arabia, reportedly urged the wrecker to bomb Iran to smithereens.

The name refers to how his henchmen chopped up the body of exiled dissident reporter Jamal Khashoggi after killing him in the Salafi Arabian embassy to Turkey. The prince is a murderous Islamist fanatic, like the rulers of Iran, and there is nothing to choose between them -- or between them and the wrecker.

Pam Bondi, fired! [Richard Stallman's Political Notes]

The bully fired Bondi as attorney general. Good riddance, but will her replacement be even worse?

Robert Reich says he fired her for not succeeding in all the harm he asked her to do.

Volunteer-surveillance initiative, CA [Richard Stallman's Political Notes]

Milpitas, California, will distribute video cameras to residents to enable them to upload videos to the cops when they choose. They will be gratis, but not free in the sense of respecting freedom.

It is proper that uploading will nominally be a decision for the camera owner, rather than for the cops. But unless the software installed in the camera is free/libre, the owners and the public can't be sure that the camera's manufacturer isn't snooping for other reasons of its own, such as tracking everyone by facial recognition.

Uprooting the US Forest Service [Richard Stallman's Political Notes]

The wrecker is planning to uproot the US Forest Service by moving it to a different city and closing its regional offices.

This move, given the basic favoritism towards big business, could enable logging companies to get away with just about anything. I suppose that is its motive.

If the agency were already centralized, moving its center to the Rockies could indeed bring that center closer to the majority of the forests. But closing the regional offices will have the opposite effect, just about all over the country.

02:21

Dima Kogan: Simple gpx export from ridewithgps [Planet Debian]

The Tour de Los Padres is coming! The race organizer posts the route on ridewithgps. This works, but it has convoluted interfaces for people not wanting to use their service. I just wrote a simple script to export their data into a plain .gpx file, including all the waypoints. Their exporter omits those.

The gpx-from-ridewithgps.py script:

#!/usr/bin/python3
import sys
import json

def quote_xml(s):
    return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

print("Reading stdin", file=sys.stderr)

data = json.load(sys.stdin)

print(r"""<?xml version="1.0" encoding="UTF-8"?>
<gpx version="1.1" creator="gpx-from-ridewithgps.py" xmlns="http://www.topografix.com/GPX/1/1">""")

for item in data["extras"]:
    if item["type"] != "point_of_interest":
        continue
    poi = item["point_of_interest"]
    print(f'  <wpt lat="{poi["lat"]}" lon="{poi["lng"]}">')
    print(f'    <name>{quote_xml(poi["name"])}</name>')

    desc = poi.get("description","")
    if len(desc):
        print(f'    <desc>{quote_xml(desc)}</desc>')
    print(f'  </wpt>')

print("  <trk><trkseg>")
for pt in data.get("route", {}).get("track_points", []):
    print(f'    <trkpt lat="{pt["y"]}" lon="{pt["x"]}"><ele>{pt["e"]}</ele></trkpt>')
print("  </trkseg></trk>")

print("</gpx>")

You invoke it by downloading the route and feeding it into the script:

curl -s https://ridewithgps.com/routes/54493422.json | ./gpx-from-ridewithgps.py > out.gpx

Note that the route number 54493422 is in the url above. I uploaded this to caltopo for analysis, and easy downloading by others:

https://caltopo.com/m/DB6HBQ1

00:49

Isoken Ibizugbe: Post Outreachy Activities [Planet Debian]

It’s been about a month since I wrapped up my Outreachy internship, but my journey with Debian is far from over. I planned to keep contributing and exploring the community, and these past few weeks have been busy.

Testing Locales and Solving Bug #1111214

For the openQA project, we decided to explore how accurate local language installations are and see if we can improve the translations. While exploring this, I started working on automating a test for a specific bug report: Debian Bug #1111214

I started this test by writing a detailed description of the installation process, to confirm that selecting the Spanish_panama locale works correctly. I spent time studying previous language installation tests, and I learned that I needed to add a specific tag (LANGUAGE-) to the “needles” (visual test markers).

Since the installation wasn’t in English anymore, taking the correct screenshots and defining the areas took quite some time. I used the following command on the CLI to run the test:

`openqa-cli api -X POST isos ISO=debian-live-testing-amd64-gnome.iso DISTRI=debian-live VERSION=forky FLAVOR=gnome LANGUAGE=spanish_panama ARCH=x86_64 BUILD=1311 CHECKSUM=unknown`

While working on this, I got stuck at the complete_installation step. Because the keyboard layout had changed to Spanish, the commands required to confirm a successful install weren’t working as expected. Specifically, we had an issue typing the “greater than” sign (>).

My mentor, Roland Clobus, worked on a clever maneuver for the keys (AltGr-Shift-X), which was actually submitted upstream to openSUSE.

In this step, I also had to confirm that the locale was correctly set to LANG="es_PA.UTF-8". I had to dig into the scripts and Linux commands to make this work. It was a bit intimidating at first, but it turned out to be a great learning experience. You can follow my progress on this Merge Request here. I’m currently debugging a small issue where the “home” key seems to click twice in the final step, and after that, the test would be complete 😀.

Community & Connections

Beyond the code, I’ve been getting more involved in the social side of Debian:

  • Debian Women: I attended the monthly meeting and met Sruthi Chandran. I’ve always seen her name as an Outreachy organizer, so it was great to meet her! She is currently running for Debian Project Leader (DPL). We also discussed starting technical sessions to introduce members to packaging, which I am very excited to learn.
  • DebConf Preparation: I am officially preparing for my first DebConf! My mentors, Tassia and Roland, along with my fellow intern Hellen, have been incredibly supportive in guiding me through the application and presentation process.

00:42

Link [Scripting News]

Sometimes I put test posts on my blog. This is one of those times. Still diggin, amazingly -- in 2026. What makes this post different is that 1. It's a singular item, ie there is no title, and just one paragraph. It's a collection of sentences not paragraphs. 2. It has a right margin image. I have to test this specific case. It has to go on a certain length so that the image that appears in the right margin doesn't leak over to the next item, and the image should be small so it doesn't require so much text to keep it out of the next post. And now I believe I have entered enough text.

Saturday, 04 April

17:49

Today I Am Ten, or, the Miracle of ScalziYears [Whatever]

And you say to yourself, what? Scalzi, you are not ten years old today! You are just barely a month away from being 57! The only juvenile you are is juvenile elderly! Stop being a faker, you faker!

To which I respond: Yes, I am fifty-six and eleven(ish) months old… on Earth. But as you know, I have a minor planet named after me, and its orbital period is just a shade under 5.7 earth years long. If you were to position 52692 Johnscalzi (1998 FO8) on the day of my birth, today is the day it would have made its tenth complete orbit since then. Thus, ten ScalziYears. Today, I am ten ScalziYears old.

How will I celebrate such a momentous occasion? As it happens, I have a gathering of friends at the church today. It’s for something else entirely, but I might bring a cake anyway. And otherwise, I’m taking it easy. It’s nice that this time around it slots in just between Good Friday and Easter; Easter Saturday always feels a little left out of the holiday swing of things, and I’m glad to give it something to do this year.

My next ScalziYear birthday will be December 12, 2031, so you have lots of time to prepare. Get ready!

— JS

PS: that coin with my asteroid’s orbit on it was given to me by a fan at the San Antonio Pop Madness convention (whose name escapes me at the moment, but they can certainly announce themselves in the comments), and it was super-cool to get it. The other side of the coin is just as awesome:

I have the best fans, honestly.

17:07

When Trump appeared on Twitter [Scripting News]

Excellent podcast discussion with Jon Stewart and Heather Cox Richardson. I desperately wanted to get in the conversation. I think they missed something important and came soooo close. Trump isn't only a TV star, he's a blogger. Comes naturally to him. Why wasn't Obama transformative in the same way? First black president. You get to be the first black president by being utterly brilliant and infinitely careful. There wasn't a single spontaneous moment in his presidency, though there were scripted moments when playing that role. And some amazingly brilliant speech-making. He's perfect, but that's because there were severe limits on what he could get away with.

On the web the ethos is "Come as you are, we're just folks." That's not Obama.

Who also had to be hugely careful? Hillary Clinton and Kamala Harris. Not Joe Biden who's famous for his gaffes.

Trump doesn't give a shit what you think, that's why he's so good on Twitter. Trump was a TV star but right now it's more important to be a natural born blogger.

I was beating this drum ever since Trump appeared on Twitter. We need to be much better at this. We're still in the hole. At least Newsom knows there's a problem but imho he isn't the answer. We need someone who's bitter and funny, like Joan Rivers or Don Rickles. You don't need to understand government or politics, just show up and be a kind of lovable asshole 24 hours a day.

People could relate to Trump. Trump, even though he's not a great dancer, doesn't mind doing it if you think it's funny. He's a total entertainment package. Very random.

Wouldn't hurt for the next Dems to find someone like that. Hopefully not to run for president.

HCR said Trump was Cuckoo for Cocoa Puffs -- I LOL'd totally.

14:21

Robert Smith: Idiomatic Lisp and the nbody benchmark [Planet Lisp]

When talking to Lisp programmers, you often hear something like, “adapt Lisp to your problem, not your problem to Lisp.” The basic idea is this: if Lisp doesn’t let you easily write a solution to your problem because it lacks some fundamental constructs that make expressing solutions easy, then add them to Lisp first, then write your solution.

That sounds all good and well in the abstract, and maybe we could even come up with some toy examples—say, defining HTTP request routing logic in a nice DSL. But where’s a real example of this that’s not artificial or overengineered?

Recently, on Twitter, I butted into the middle of an exchange between @Ngnghm (a famous Lisp programmer) and @korulang (an account dedicated to a new language called Koru) about Lisp. I’m oversimplifying, but it went something like this:

  • Lisp is slow.
  • No it’s not!
  • Yes it is!
  • No it’s not!
  • Then prove it!

Now, there’s plenty of evidence online that Common Lisp has reasonably good compilers that produce reasonably good machine code, and so the question became more nuanced: Can Lisp be realistically competitive with C without ending up being a mess of unidiomatic code?

Our interlocutor @korulang proposed a benchmark, the “nbody” benchmark from the Computer Language Benchmarks Game. This was of particular interest to them, because they used it as an object of study for their Koru language. To quote their blog post:

We wanted Koru kernels to land in the same ballpark as idiomatic C, Rust, and Zig.

The result was stronger than that.

Our fused n-body kernel, written in straightforward Koru kernel style, came in faster than the plain reference implementations. Every implementation here is "naive" — the obvious, idiomatic version a competent programmer would write in each language. No tricks, no hand-tuning, no -ffast-math: […]

and they proceeded to show Koru being 14% faster than C and 106% faster than Lisp.

Now, putting aside that some of the code and blog post were written with LLMs, there are many questions that are left unanswered here, since computer architecture and operating system matter a lot (where did these benchmarks run?). Moreover, the author buries the lede a little bit and proceeds to show how we might write “unidiomatic” C to match the performance of Koru.

I’m not concerned about nitpicking their approach or rigorously evaluating their claims, but I would like to dwell on this common refrain: “idiomatic”. What is that supposed to mean?

“Idiomatic code” in the context of programming means something like “representative of a fluent computer programmer” and “aligned with the peculiar characteristics of the language”. In some sense, idiomatic code in a particular language shouldn’t stand out amongst other code in that language, and idiomatic code should, in some sense, portray the identity of the language itself.

Idiomatic C is the C that uses terse names, simple loops, and unsafe arithmetic.

Idiomatic Haskell is the Haskell that uses short functions, higher-order abstractions, immutable data structures, and safe constructs.

What about idiomatic Lisp? Well, here’s the rub. A fluent Lisp programmer doesn’t reach for one paradigmatic toolbox; they weave in and out of imperative, functional, object-oriented, etc. styles without much of a second thought. There’s a sort of “meta” characteristic to Lisp programming: you’re programming the language almost as much as you’re programming the program.

Yes, Lisp has loops, but “loopy code” isn’t intrinsically “Lispy code”. Yes, Lisp has objects, but “OOPy code” isn’t intrinsically “Lispy code”. In my opinion, what makes code “Lispy” is whether or not the programmer used Lisp’s metaprogramming and/or built-in multi-paradigm facilities to a reasonable degree to make the solution to their problem efficient and easy to understand in some global sense. For some problems, that may be “loopy” or “OOPy” or something else. It’s finding a Pareto-efficient syntactic and semantic combination offered by the language, or perhaps one of the programmer’s own creation.

So we get back to the @korulang benchmark challenge. Looking at their repository:

  • nbody.c looks like idiomatic C;
  • nbody.hs looks like wildly unidiomatic Haskell, but the problem is, the idiomatic version would probably be slower;
  • nbody.lisp looks reasonable, though it could easily be improved, but loopy; and
  • The Koru solution kernel_fused.kz looks idiomatic, as far as I can tell for not knowing anything about Koru.

I hesitate to say nbody.lisp is idiomatic. It’s reasonable, it’s straightforward to any imperative-minded programmer, but it’s not Lispy. That doesn’t make it good or bad, but it does lead to the grand question:

Can we use Common Lisp to express a solution to the nbody benchmark in a way that reads more naturally than a direct-from-C port?

I would say that, at face value, Koru’s solution is along the lines of what is more natural relative to the problem itself. Here are the essential bits.

~std.kernel:shape(Body) {
x: f64, y: f64, z: f64,
vx: f64, vy: f64, vz: f64,
mass: f64,
}
~std.kernel:init(Body) {
{ x: 0, y: 0, z: 0, vx: 0, vy: 0, vz: 0, mass: SOLAR_MASS },
{ x: 4.84143144246472090e+00, y: -1.16032004402742839e+00, z: -1.03622044471123109e-01, vx: 1.66007664274403694e-03 * DAYS_PER_YEAR, vy: 7.69901118419740425e-03 * DAYS_PER_YEAR, vz: -6.90460016972063023e-05 * DAYS_PER_YEAR, mass: 9.54791938424326609e-04 * SOLAR_MASS },
{ x: 8.34336671824457987e+00, y: 4.12479856412430479e+00, z: -4.03523417114321381e-01, vx: -2.76742510726862411e-03 * DAYS_PER_YEAR, vy: 4.99852801234917238e-03 * DAYS_PER_YEAR, vz: 2.30417297573763929e-05 * DAYS_PER_YEAR, mass: 2.85885980666130812e-04 * SOLAR_MASS },
{ x: 1.28943695621391310e+01, y: -1.51111514016986312e+01, z: -2.23307578892655734e-01, vx: 2.96460137564761618e-03 * DAYS_PER_YEAR, vy: 2.37847173959480950e-03 * DAYS_PER_YEAR, vz: -2.96589568540237556e-05 * DAYS_PER_YEAR, mass: 4.36624404335156298e-05 * SOLAR_MASS },
{ x: 1.53796971148509165e+01, y: -2.59193146099879641e+01, z: 1.79258772950371181e-01, vx: 2.68067772490389322e-03 * DAYS_PER_YEAR, vy: 1.62824170038242295e-03 * DAYS_PER_YEAR, vz: -9.51592254519715870e-05 * DAYS_PER_YEAR, mass: 5.15138902046611451e-05 * SOLAR_MASS },
}
| kernel k |>
std.kernel:step(0..iterations)
|> std.kernel:pairwise {
const dx = k.x - k.other.x;
const dy = k.y - k.other.y;
const dz = k.z - k.other.z;
const dsq = dx*dx + dy*dy + dz*dz;
const mag = DT / (dsq * @sqrt(dsq));
k.vx -= dx * k.other.mass * mag;
k.vy -= dy * k.other.mass * mag;
k.vz -= dz * k.other.mass * mag;
k.other.vx += dx * k.mass * mag;
k.other.vy += dy * k.mass * mag;
k.other.vz += dz * k.mass * mag;
}
|> std.kernel:self {
k.x += DT * k.vx;
k.y += DT * k.vy;
k.z += DT * k.vz;
}
| computed c |>
capture({ energy: @as(f64, 0) })
| as acc |>
for(0..5)
| each i |>
captured { energy: acc.energy + 0.5*c[i].mass*(c[i].vx*c[i].vx+c[i].vy*c[i].vy+c[i].vz*c[i].vz) }
|> for(i+1..5)
| each j |>
captured { energy: acc.energy - c[i].mass*c[j].mass / @sqrt((c[i].x-c[j].x)*(c[i].x-c[j].x)+(c[i].y-c[j].y)*(c[i].y-c[j].y)+(c[i].z-c[j].z)*(c[i].z-c[j].z)) }
| captured final |>
std.io:print.blk {
{{ final.energy:d:.9 }}
}

Can we achieve something similar in Lisp?

First, let’s make a baseline. I’m running Ubuntu Noble with an “AMD RYZEN AI MAX+ PRO 395” whose clock speed varies between 0.6 and 5 GHz. I am also using SBCL 2.6.3 and gcc 13.3. Using nbody.lisp as a starting point, I modified it for a few easy wins. I’ll call this version nbody-lisp-conventional. A quick benchmark reveals that the loopy Lisp code is only about 20% slower than the C code compiled with gcc -O3 -ffast-math -march=native.

$ ./nbody-lisp-conventional 50000000
-0.169286396
timing: 2000 ms
$ ./nbody-c 50000000
-0.169286396
timing: 1662 ms

As a Lisp programmer, it’s not surprising that it’s a little slower. The number of person-years that have gone into C compilers to optimize idiomatic C code makes the development effort behind SBCL, the most popular open-source Lisp compiler, look like a rounding error.

Now that we have a baseline, our goal is to come up with a nicer Lisp program that also improves the timing.

Our approach will be simple. We will create a library.lisp that contains new language constructs of a similar ilk to Koru, and we will use them to implement the nbody benchmark in impl.lisp. Some rules:

  • No compile-time precomputation or caching. I can’t just compute the answer at compile time, or cache a sub-computation that makes the full one trivial.
  • No fundamental algorithm changes. I can’t use a different integrator, for example.
  • Using assembly is allowed, but it must only make use of the facilities offered by the Lisp compiler (i.e., no external tools), and the implementation of nbody itself must be understandable without knowing assembly. In other words, it should be sufficiently hidden, and in principle easily substitutable with portable code.
  • Library code must be in principle useful for other similar tasks. It should not be hyper-specialized to this specific problem instance, but instead be useful for this general class of problems.

The third rule is more rigorous than it looks. It means we can’t just have a solve-nbody function which dispatches to assembly.

To accomplish the above, we define a kernel DSL. The DSL allows us to express how elements of a composite transform, maintaining just enough invariants to allow them to be handled efficiently. These kernels are then compiled into efficient code, more efficient than ordinary loopy Lisp allows for.

Our attention will be focused on a proof-of-concept library of functionality for writing particle simulators. The operators we define are:

  • define-kernel-shape: Define the data to be transformed by each kernel. This would be the data to characterize the static and dynamic properties of a particle in motion, as well as the number of particles under consideration.
  • define-kernel-step: Define a kernel as a sequence of existing ones.
  • define-self-kernel: Define a read-write kernel that operates on each element independently, without access to other elements (i.e., a map operation).
  • define-pairwise-kernel: Define a read-write kernel that operates on all pairs of elements, reduced by symmetry (i.e., (i,j) and (j,i) are considered only once).
  • define-reduction-kernel: Define a read-only kernel that does reduction of a sequence into a single value (i.e., a reduce operation).

This collection of five operators forms a miniature, re-usable language. These broadly recapitulate those of Koru, and allow us to write something that looks like this:

(defconstant +solar-mass+ (* 4d0 pi pi))
(defconstant +days-per-year+ 365.24d0)
(defconstant +dt+ 0.01d0)

(define-kernel-shape body 5
  x y z vx vy vz mass)

(defparameter *system*
  (make-body-system
   (list :x 0d0 :y 0d0 :z 0d0
         :vx 0d0 :vy 0d0 :vz 0d0
         :mass +solar-mass+)
   ...))

(define-pairwise-kernel advance-forces (s body dt)
  (let* ((dx (- i.x j.x))
         (dy (- i.y j.y))
         (dz (- i.z j.z))
         (dsq (+ (+ (* dx dx) (* dy dy)) (* dz dz)))
         (mag (/ dt (* dsq (sqrt dsq)))))
    (let ((dm-j (* mag j.mass))
          (dm-i (* mag i.mass)))
      (decf i.vx (* dx dm-j))
      (decf i.vy (* dy dm-j))
      (decf i.vz (* dz dm-j))
      (incf j.vx (* dx dm-i))
      (incf j.vy (* dy dm-i))
      (incf j.vz (* dz dm-i)))))

(define-self-kernel advance-positions (s body dt)
  (incf self.x (* dt self.vx))
  (incf self.y (* dt self.vy))
  (incf self.z (* dt self.vz)))

(define-reduction-kernel (energy e 0d0) (s body)
  (:self
   (+ e (* (* 0.5d0 self.mass)
           (+ (+ (* self.vx self.vx) (* self.vy self.vy))
              (* self.vz self.vz)))))
  (:pair
   (let* ((dx (- i.x j.x))
          (dy (- i.y j.y))
          (dz (- i.z j.z)))
     (- e (/ (* i.mass j.mass)
             (sqrt (+ (+ (* dx dx) (* dy dy))
                      (* dz dz))))))))

(define-kernel-step run-simulation (system body n :params ((dt double-float)))
  (advance-forces dt)
  (advance-positions dt))

Well, in fact, this isn’t an idealized approximation; it’s almost exactly how it turned out. Given this is a proof of concept, we sometimes have to write some Lisp things a little funny. For example, you’ll notice we write:

(+ (+ (* dx dx) (* dy dy)) (* dz dz))

instead of the far more readable

(+ (* dx dx) (* dy dy) (* dz dz))

Both are completely valid and both can be used. So why the former? It is a result of a limitation of a little feature I built in: auto-vectorization. The vectorizer walks the mathematical expressions and replaces them with fast SIMD variants; writing the sums as nested two-argument calls keeps each addition in the (+ a (* b c)) shape that the rewriter can turn into a fused multiply-add. Here’s a little fragment showing this rewrite rule:

...
(case (car expr)
  ;; (+ a (* b c)) -> fmadd(a,b,c)
  ((+)
   (let ((args (cdr expr)))
     (cond
       ((and (= (length args) 2) (mul-p (second args)))
        `(%%fmadd-pd ,(xf (first args))
                     ,(xf (second (second args)))
                     ,(xf (third (second args)))))
       ...

The implementation of these kernel macros in library.lisp weighs in at just under 700 lines, and includes optional x64 SIMD auto-vectorization.

Well, for the nail-biting moment: how does it compare? I made a Makefile that compares the idiomatic C against the loopy Lisp against our kernel DSL Lisp. It takes the median of three runs. Running this on my computer gives:

$ make bench
=== C (gcc -O3 -ffast-math) ===
-0.169286396
runs: 1657 1664 1653 ms
median: 1657 ms
=== Lisp (SBCL, conventional loops) ===
-0.169286396
runs: 1991 2009 2005 ms
median: 2005 ms
=== Lisp (SBCL, kernel syntax) ===
-0.169286396
runs: 1651 1651 1652 ms
median: 1651 ms

So, in fact, we have matched the performance of C almost exactly. Furthermore, the generated code is still not as lean as it could be. Not to put too fine a point on it, but <100 lines of Lisp, supported by

  • 700 lines of library code and about 4 hours of my time; and
  • 500k lines of its host compiler sbcl

has performance parity and greater readability/reusability than <100 lines of C, supported by

  • ~5,000k lines of just the C part of its host compiler gcc.

None of this is to make an argument that Lisp is “better”, or that there isn’t merit to avoiding custom DSLs in certain circumstances, or that the world doesn’t have room for more custom home-grown compilers and parsers, but I think this is the clearest possible, quasi-realistic demonstration that idiomatic Lisp can be as fast as idiomatic C without tremendous work, whilst netting additional benefits unique to Lisp.

All code is available here.

ECL News: ECL 26.3.27 release [Planet Lisp]

We are announcing a new stable ECL release. This release highlights:

  • bytecodes closures are now faster and avoid capturing unused parts of the lexical environment
  • improvements to the native compiler, including better separation between compiler frontend and backend, reduced function call overhead, more aggressive dead code elimination and many internal improvements and bug fixes
  • hash table implementation improvements and bug fixes for collisions
  • streams: extensions EXT:PEEK-BYTE, EXT:UNREAD-BYTE, GRAY:STREAM-PEEK-BYTE and GRAY:STREAM-UNREAD-BYTE, bugfixes and implementation refactor
  • the codebase has been updated to conform to the C23 standard
  • simplified procedure for cross-compiling ECL itself
  • support for cross-compilation of Common Lisp code to different targets using a new :TARGET option for COMPILE-FILE
  • some fixes for the emscripten target

The release also incorporates many other bug fixes and performance improvements as well as an updated manual. We'd like to thank all people who contributed to ECL with code, testing, issue reports and otherwise.

People listed here contributed code in this iteration: Daniel Kochmański, Marius Gerbershagen, Tarn W. Burton, Kirill A. Korinsky, Dmitry Solomennikov, Kevin Zheng, Mark Shroyer and Sebastien Marie.

People listed here did extensive release candidate testing on various platforms: Marius Gerbershagen, Daniel Kochmański, Dima Pasechnik, Matthias Köppe, Jeremy List, Mark Damon Hughes and Paul Ruetz.

This release is available for download in the form of a source code archive (we do not ship prebuilt binaries):

Finally, a note on the release schedule: ECL releases often take some time to come out, partially because we do extensive testing against supported platforms and existing libraries to find regressions. In the meantime, all improvements are incrementally incorporated in the develop branch. It is considered stable and is tested and reviewed with the necessary diligence. If the release cycle is too slow for your needs, we suggest following the develop branch for the most recent changes.

Happy Hacking,
The ECL Developers

Robert Smith: Beating Bellard's formula [Planet Lisp]

By Robert Smith

Fabrice Bellard came up with a computationally efficient formula for calculating the nth hexadecimal digit of $\pi$ without calculating any of the previous n−1. It’s called Bellard’s formula. It wasn’t the first of its kind, but in terms of computational efficiency, it was a substantial improvement over the original, elegant Bailey-Borwein-Plouffe formula. Due to the trio’s discovery, these formulas are often called BBP-type formulas.

Over the years, numerous BBP-type formulas have been discovered. In fact, Bailey gives us a recipe to search for them using integer-relation algorithms. In simple terms, we can just guess formulas, and run a computation to see if it likely equals $\pi$ with high confidence. If we do find one, then we can use it as a conjecture to prove formally.

Like Bellard and many others, I ran a variant of Bailey’s recipe, effectively doing a brute-force search, highly optimized and in parallel. The search yielded another formula that is computationally more efficient than Bellard’s formula. The identity is as follows:

$$ \pi = \sum_{k=0}^{\infty} \frac{1}{4096^k} \left( \frac{1}{6k+1} - \frac{2^{-5}}{6k+3} + \frac{2^{-8}}{6k+5} + \frac{2}{8k+1} - \frac{2^{-5}}{8k+5} + \frac{2^{-1}}{12k+3} - \frac{2^{-4}}{12k+7} - \frac{2^{-8}}{12k+11} \right). $$

It converges at a rate of 12 bits per term. We will prove convergence, and then prove the identity itself (with a little computer assistance). As it turns out, an equivalent form of this formula was already discovered, which we will discuss as well. Finally, we’ll show a very simple implementation in Common Lisp.

Proof of convergence

Write the series as $S := \sum_{k=0}^{\infty} 4096^{-k}R(k)$. Since $R(k)\in O(1/k)$, convergence is dominated by the geometric term $4096^{-k}$:

$$ \lim_{k \to \infty} \left\vert \frac{R(k+1)}{4096^{k+1}} \middle/ \frac{R(k)}{4096^{k}} \right\vert = \frac{1}{4096}. $$

By the ratio test, the series converges absolutely. Since $4096 = 2^{12}$, each additional term contributes exactly 12 bits of precision.

Bellard’s formula converges at 10 bits per term and requires the evaluation of 7 fractions. The formula above converges at 12 bits per term and requires the evaluation of 8 fractions. So while we require about 17% fewer terms, each term requires about 14% more arithmetic. Net-net, this formula is approximately 5-6% more efficient.
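As a quick back-of-the-envelope check (my arithmetic, not part of the original derivation): reaching a given precision takes $\tfrac{10}{12}$ as many terms as Bellard’s formula, each costing roughly $\tfrac{8}{7}$ as much, so the total work is about

$$ \frac{10}{12} \cdot \frac{8}{7} = \frac{20}{21} \approx 0.95 $$

of Bellard’s, i.e. roughly a 5% saving.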

Proof of identity via a definite integral

Consider $1/(nk+j) = \int_{0}^{1} x^{nk+j-1} dx$. For positive integers $n$ and $b$, we get

$$ \sum_{k=0}^{\infty} \frac{1}{b^k}\cdot\frac{1}{nk+j} = \sum_{k=0}^{\infty} \int_{0}^{1} \left(\frac{x^n}{b}\right)^k x^{j-1} dx. $$

We can swap the sum and integral via the Lebesgue dominated convergence theorem, since the power series $\sum (x^n/b)^k$ converges uniformly for $x \in [0, 1]$ and $b > 1$. Using this and summing the geometric series gives:

$$ \int_{0}^{1} x^{j-1} \sum_{k=0}^{\infty} \left(\frac{x^n}{b}\right)^k dx = \int_{0}^{1} \frac{x^{j-1}}{1 - x^n/b} dx. $$

We now apply this to $S$ termwise with $b=4096=2^{12}$:

$$ S = \int_0^1 \left( \frac{x^{0}}{1 - \frac{x^6}{2^{12}}} - 2^{-5} \frac{x^{2}}{1 - \frac{x^6}{2^{12}}} + 2^{-8} \frac{x^{4}}{1 - \frac{x^6}{2^{12}}} + 2 \frac{x^{0}}{1 - \frac{x^8}{2^{12}}} - 2^{-5} \frac{x^{4}}{1 - \frac{x^8}{2^{12}}} + 2^{-1} \frac{x^{2}}{1 - \frac{x^{12}}{2^{12}}} - 2^{-4} \frac{x^{6}}{1 - \frac{x^{12}}{2^{12}}} - 2^{-8} \frac{x^{10}}{1 - \frac{x^{12}}{2^{12}}} \right) dx. $$

At this point, you could try to algebra your way through, expanding, using the substitution $x=2u$, etc. ultimately yielding a nice denominator $(u^2\pm 2u+2)(u^6-64)(u^{12}-1)$. Maybe compute some residues. Or, just CAS your way through.

% fricas
FriCAS Computer Algebra System
Version: FriCAS 2025.12.23git built with sbcl 2.5.2.1852-1f3beec71
Timestamp: Wed Mar 4 12:41:38 EST 2026
-----------------------------------------------------------------------------
Issue )copyright to view copyright notices.
Issue )summary for a summary of useful system commands.
Issue )quit to leave FriCAS and return to shell.
-----------------------------------------------------------------------------
(1) -> f := (1/(1 - x^6/4096))
- (1/32)*x^2/(1 - x^6/4096)
+ (1/256)*x^4/(1 - x^6/4096)
+ 2*1/(1 - x^8/4096)
- (1/32)*x^4/(1 - x^8/4096)
+ (1/2)*x^2/(1 - x^12/4096)
- (1/16)*x^6/(1 - x^12/4096)
- (1/256)*x^10/(1 - x^12/4096);
Type: Fraction(Polynomial(Fraction(Integer)))
(2) -> normalize(integrate(f, x = 0..1))
   (2)  2 atan(3/2) - 2 atan(1/2) + 2 atan(11/24) + 2 atan(19/48) + 2 atan(1/4)
                                                    Type: Expression(Fraction(Integer))

So now we just need to show the arctans all collapse to $\pi$. Recall the identity

$$ \tan^{-1} a \pm \tan^{-1} b = \tan^{-1}\left(\frac{a\pm b}{1\mp ab}\right). $$

The sum of the first four terms can be calculated easily in Common Lisp:

% sbcl --no-inform
* (defun combine (a b) (/ (+ a b) (- 1 (* a b))))
COMBINE
* (reduce #'combine '(3/2 -1/2 11/24 19/48))
4

So we have $2\big(\tan^{-1}4 + \tan^{-1}(1/4)\big)$, and with our final elementary trig identity $\tan^{-1} (a/b) = \pi/2 - \tan^{-1} (b/a)$, we find $S = \pi$.

A new discovery?

Of course, I was excited to find this formula, but after some internet spelunking, it turns out it had already been discovered by Géry Huvent and Boris Gourévitch, perhaps independently. Gourévitch doesn’t credit Huvent as he does with other formulas, but he does say “[…] furthermore, we can obtain BBP formula […] by using what Gery Huvent calls the denomination tables […].” Daisuke Takahashi cites Huvent’s website in this 2019 paper published in The Ramanujan Journal. In all cases, they write the formula in the following way:

$$ \frac{1}{128} \sum _{k=0}^{\infty} \frac{1}{2^{12k}}\left( \frac{768}{24 k+3}+\frac{512}{24k+4}+\frac{128}{24 k+6}-\frac{16}{24 k+12}-\frac{16}{24 k+14}-\frac{12}{24 k+15}+\frac{2}{24 k+20}-\frac{1}{24 k+22}\right), $$

which is structurally equivalent to $S$.

Despite having been known already, this formula doesn’t appear to be well known. As such, I hope this blog post brings more attention to it.

Simple implementation

Here is a simple implementation of digit extraction using BBP-type formulas in Common Lisp:

(defun %pow2-mod (exponent modulus)
  (cond
    ((= modulus 1) 0)
    ((zerop exponent) 1)
    (t
     (let ((result 1)
           (base (mod 2 modulus))
           (e exponent))
       (loop :while (plusp e) :do
         (when (oddp e)
           (setf result (mod (* result base) modulus)))
         (setf base (mod (* base base) modulus)
               e (ash e -1)))
       result))))

(defun %scaled-frac-of-power-two (exponent denom)
  (cond
    ((>= exponent 0)
     (let ((residue (%pow2-mod exponent denom)))
       (floor (ash residue *precision-bits*) denom)))
    (t
     (let ((effective-bits (+ *precision-bits* exponent)))
       (if (minusp effective-bits)
           0
           (floor (ash 1 effective-bits) denom))))))

(defun %series-scaled-frac (bit-index bbp-series k-step global-shift alternating-p)
  ;; A series is a list of series terms. A series term is a quadruple
  ;; (SIGN SHIFT DENOM-MULTIPLIER DENOM-OFFSET) representing the summand
  ;; SIGN * 2^SHIFT / (DENOM_MULTIPLIER * k + DENOM_OFFSET).
  (let* ((modulus (ash 1 *precision-bits*))
         (max-shift (loop :for term :in bbp-series :maximize (second term)))
         (k-max (max 0 (ceiling (+ bit-index ; conservative bound
                                   global-shift
                                   max-shift
                                   *precision-bits*
                                   *guard-bits*)
                                k-step))))
    (loop :with acc := 0
          :for k :from 0 :to k-max :do
            (let ((k-sign (if (and alternating-p (oddp k)) -1 1))
                  (k-factor (* k-step k)))
              (dolist (term bbp-series)
                (destructuring-bind (term-sign shift den-mul den-add) term
                  (let* ((denom (+ den-add (* den-mul k)))
                         (exponent (+ bit-index global-shift shift (- k-factor)))
                         (piece (%scaled-frac-of-power-two exponent denom))
                         (signed (* k-sign term-sign)))
                    (when (plusp piece)
                      (setf acc (mod (+ acc (* signed piece)) modulus)))))))
          :finally (return acc))))

(defun %nth-hex-from-series (n terms k-step global-shift alternating-p)
  (let* ((bit-index (* 4 n)))
    (ldb (byte 4 (- *precision-bits* 4))
         (%series-scaled-frac bit-index
                              terms
                              k-step
                              global-shift
                              alternating-p))))

This implementation uses Lisp’s arbitrary precision integer arithmetic. A “real” implementation would use more efficient arithmetic, but this will suffice for some basic testing. Now we can write functions to use the Bellard formula and the new formula:

(defparameter +bellard-terms+
  '((-1 5 4 1)
    (-1 0 4 3)
    (+1 8 10 1)
    (-1 6 10 3)
    (-1 2 10 5)
    (-1 2 10 7)
    (+1 0 10 9)))

(defun bellard-nth-hex (n)
  (%nth-hex-from-series n +bellard-terms+ 10 -6 t))

(defparameter +new-terms+
  '((+1 0 6 1)
    (-1 -5 6 3)
    (+1 -8 6 5)
    (+1 1 8 1)
    (-1 -5 8 5)
    (+1 -1 12 3)
    (-1 -4 12 7)
    (-1 -8 12 11)))

(defun new-nth-hex (n)
  (%nth-hex-from-series n +new-terms+ 12 0 nil))
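To connect the encoding back to the formula (my reading of the quadruple convention documented in %series-scaled-frac): the entry (+1 1 8 1) in +new-terms+ denotes the summand $+2^{1}/(8k+1)$ and (-1 -5 6 3) denotes $-2^{-5}/(6k+3)$, exactly the terms of $S$, with the k-step of 12 supplying the $4096^{-k}$ factor and the global shift of 0 meaning no overall prefactor. Likewise, the global shift of -6 and the alternating flag passed for +bellard-terms+ encode the $1/2^{6}$ prefactor and the $(-1)^k$ in Bellard’s formula.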

Let’s make sure they agree for the first 1000 hex digits:

CL-USER> (loop :for i :below 1000
               :always (= (bellard-nth-hex i) (new-nth-hex i)))
T

And now let’s look at timing comparisons. Here’s a little driver:

(defun compare-timings (n)
  (flet ((time-it (f n)
           (sb-ext:gc :full t)
           (let ((start (get-internal-real-time)))
             (funcall f n)
             (- (get-internal-real-time) start))))
    (loop :repeat n
          :for index := 1 :then (* 10 index)
          :for bellard := (time-it #'bellard-nth-hex index)
          :for new := (time-it #'new-nth-hex index)
          :do (format t "~v,' D: new is ~A% faster than bellard~%" n index
                      (round (* 100 (- bellard new)) bellard)))))

And the results of timing up to the one millionth hexadecimal digit:

CL-USER> (compare-timings 7)
1 : new is 81% faster than bellard
10 : new is 7% faster than bellard
100 : new is 6% faster than bellard
1000 : new is 5% faster than bellard
10000 : new is 4% faster than bellard
100000 : new is 3% faster than bellard
1000000: new is 4% faster than bellard

As predicted, though this is an imperfect test, it’s consistently faster across a few orders of magnitude.

13:56

The Law of Conservation of Evil [Nina Paley]

A famous cartoon about human nature that inspired millions, including myself, to try to rise above human nature.

Human beings exploit the earth and each other. We torture, kill and eat animals. We cut down forests and poison the soil and water. We make war. We drive filthy cars and pave the world. We pollute. We bully and scapegoat. We hold crazy beliefs and belong to irrational cults and religions. We don’t think for ourselves. We long for freedom while enforcing repression. We censor and suppress and police and call out and turn each other in. We rip each other new assholes while covering our own. We all think we’re better than the rest. We are hypocrites who are appalled by hypocrisy.

For meaning in our lives, we may fixate on one human evil and try to rise above it. Pro-Environment. Animal Rights. Freedom of Speech. Christianity. Communism.

The more we embrace these virtues, the more insufferable we become.

It’s human nature to try to rise above human nature.

There is simply no way out of being human. There are billions of us, each individual a node in an incomprehensibly complex network, a brain cell in a Great Brain. Sometimes we convert our neighbors, which gives rise to cults or religions or nations which then butt up against each other and go to war.

We might clean up our own little space: grow our own food, avoid filthy money by bartering, bike instead of drive, don’t eat meat. Little pockets of purity in a polluted world. Somewhere else, something worse is happening to compensate. Thank you for lowering demand for farmed animal products: now the price goes down so more can consume them. Thank you for biking instead of driving: now there’s more room on the road for another car. Thank you for Not Breeding: now someone else can, plus there’s a panic about “population implosion” and the culture is more pro-natalist than before.

While we’re doing all this Good, we try to persuade others. We never think we’re actively proselytizing, just taking opportunities for “teaching moments.” For sooner or later someone will notice our behavior is a little (or a lot) different and ask us about it. Maybe we’ll even convince them! Score! Now our cult is growing, and if it grows enough we’ll be able to clash with competing cults, more repressively enforce the purity of our in-group, and perhaps go to war with an out-group or two.

I call this The Law of Conservation of Evil.

I have clung to many causes: Environmentalism, Anti-Natalism, Vegetarianism/Veganism, Bikes Not Cars, Free Speech. I have been insufferable. Still, I am human, and humans need meaning in our lives, and that which lights us up the most can also make us the most insufferable.

I’m currently interested in how to avoid cults. I fear and condemn cults. If I develop a good theory of cults, and argue persuasively, I might create an anti-cult cult, just as Antifa creates fascism and anti-racism creates racism.

Back away from Identity

“Back away from Identity” advised Third Way Trans, a desister from the transgender cult, before he deleted his wonderful blog. That’s the rare idea that might be cult-proof.

Humans cannot rise above our evil, which is also our humanity. We can shift it around a little from locality to locality, just as we shift our “recyclable” garbage from our local landfill to somewhere in the ocean. The best we can do is back away from identity, from the need to be “good” or better than our fellows, and to acknowledge and accept Reality.

But don’t let me get too attached to convincing you of that! Carry on, world.



10:28

Where do bad choices come from? [Seth's Blog]

We all make them from time to time.

You might not know what you need to know. This is where experience is created.

You might have an identity that pushes you to make those choices. If you’re determined to act like the person you have assumed you are, the choices come with the role.

Or, you might prioritize short-term benefits over the long-term costs of a bad choice. In this sense, the difference between a good choice and a bad one is simply which timeframe we’re considering.

Built into the idea of ‘choice’ is the agency and freedom to choose. But we waste that power every time we fail to realize we’re making a choice.

And there are two common reasons for this: we don’t believe we have the freedom to choose, or we’re not clear about what we’re trying to accomplish in the first place.

09:14

TinyOS: ultra-lightweight RTOS for IoT devices [OSnews]

An ultra-lightweight real-time operating system for resource-constrained IoT and embedded devices. Kernel footprint under 10 KB, 2 KB minimum RAM, preemptive priority-based scheduling.

↫ TinyOS GitHub page

Written in C, open source, and supports ARM and RISC-V.

Redox gets new CPU scheduler [OSnews]

Another major improvement in Redox: a brand new scheduler which improves performance under load considerably.

We have replaced the legacy Round Robin scheduler with a Deficit Weighted Round Robin scheduler. Due to this, we finally have a way of assigning different priorities to our Process contexts. When running under light load, you may not notice any difference, but under heavy load the new scheduler outperforms the old one (eg. ~150 FPS gain in the pixelcannon 3D Redox demo, and ~1.5x gain in operations/sec for CPU bound tasks and a similar improvement in responsiveness too (measured through schedrs)).

↫ Akshit Gaur

Work is far from over in this area, as they’re now moving on to “replacing the static queue logic with the dynamic lag-calculations of full EEVDF”.

09:07

Pluralistic: EU ready to cave to Trump on tech (04 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



The EU flag. The field has been turned from blue to orange. In the center of the circle of stars is Trump's open, hooting gob. Behind the orange field we see the faded traces of a printed circuit board.

EU ready to cave to Trump on tech (permalink)

Crises precipitate change. That's no reason to induce a crisis, but you'd be a fool to let a crisis go to waste. Donald Trump is the greatest crisis of our young century, and the EU looks set to squander the opportunity, to its own terrible detriment.

For more than a decade, it's been clear that the American internet was not fit for purpose. The whistleblowers Mark Klein and Edward Snowden revealed that the US had weaponized its status as the world's transoceanic fiber-optic hub to spy on the entire planet:

https://doctorow.medium.com/https-pluralistic-net-2025-11-26-difficult-multipolarism-eurostack-5a527c32f149

US tech giants flouted privacy laws, gleefully plundering the world's cash and data with products that they remorselessly enshittified:

https://pluralistic.net/2026/01/30/zucksauce/#gandersauce

American companies repurposed their over-the-air software update capabilities to remotely brick expensive machinery in service to geopolitical priorities:

https://pluralistic.net/2022/05/08/about-those-kill-switched-ukrainian-tractors/

Then Trump and his tech companies started attacking key public institutions around the world, shutting down access for senior judges who attempted to hold Trump's international authoritarian allies to account for their crimes:

https://pluralistic.net/2025/10/20/post-american-internet/#huawei-with-american-characteristics

If Trump wants to steal Greenland, he doesn't need tanks or missiles. He can just tell Microsoft and Oracle to brick the entire Danish state and all of its key firms, blocking their access to their email archives, files, databases, and other key administrative tools. If Denmark still holds out, Trump can brick all their tractors, smart speakers, and phones. If Denmark still won't give up Greenland, Trump could blackhole all Danish IP addresses for the world's majority of transoceanic fiber. At the click of a mouse, Trump could shut down the world's supply of Lego, Ozempic, and delicious, lethally strong black licorice.

Now, these latent offensive capabilities were obvious long before Trump, but the presidents who weaponized them in the pre-Trump era did so in subtle and deniable ways, or under a state of exception (e.g. in response to spectacular terrorist attacks or in the immediate aftermath of the Russian invasion of Ukraine) that let bystanders assure themselves that this wouldn't become a routine policy.

After all, America profited so much from the status quo in which America and its trading partners all pretended that US tech wouldn't be weaponized for geopolitical aims, so a US president would be a fool to shatter the illusion. And even if the president was so emotionally incontinent that he demanded the naked weaponization of America's defective, boobytrapped tech exports, the power blocs that the president relies on would stop him, because they are so marinated in the rich broth that America drained from the world using Big Tech.

This is "status quo bias" in action. No one wants to let go of the vine they're swinging from until they have a new vine firmly in their grasp – but you can't reach the next vine unless you release your death-grip on your current one. So it was that, year after year, the world allowed itself to become more dependent on America's easily weaponizable tech, making the tech both more dangerous and harder to escape.

Enter Trump (a crisis) (and crises precipitate change). Under Trump, the illusion of a safe interdependence crumbled. Every day, in new and increasingly alarming ways, Trump makes it clear that America doesn't have allies or trading partners, only adversaries and rivals. Every day, Trump proves to the world that American tech isn't merely untrustworthy – it's a live, dire, urgent danger to your state, your companies, and your people. The best time to get shut of the American internet was 15 years ago. The second best time is right fucking now.

NOW!

The result is the burgeoning movement to build a "post-American internet." In Canada, PM Mark Carney's announcement of a "rupture" has the country rethinking its deep connections to the American internet and asking what it could do to escape it:

https://pluralistic.net/2026/01/27/i-want-to-do-it/#now-make-me-do-it

Europe, meanwhile, has multiple, advanced, well-funded initiatives to leave the American internet behind and migrate to a post-American internet, like "Eurostack" and the European Digital Infrastructure Consortium:

https://digital-strategy.ec.europa.eu/en/policies/edic

But status quo bias exerts a powerful gravity. A reactionary counterrevolution is being waged in the European Commission – the permanent bureaucracy that executes Europe's laws and regulations. Within the EC, an ascendant faction has announced plans for a "dialogue" with representatives from the Trump regime to let them direct the enforcement of the Digital Markets Act (DMA) and Digital Services Act (DSA), Europe's landmark 2024 anti-Big Tech regulations:

https://www.politico.eu/article/fatal-decision-eu-slammed-for-caving-to-us-pressure-on-digital-rules/

The DMA and DSA require America's tech giants to open up their platforms in ways that would halt the plunder of Europeans' private data and cash. US tech giants have flatly refused to comply with these rules, relying on Trump to get them out of any obligations under EU law:

https://pluralistic.net/2025/09/26/empty-threats/#500-million-affluent-consumers

That's a sound bet. After all, the last thing Trump did before his inauguration was publicly announce his intention to destroy any country that attempted to enforce these laws:

https://www.nytimes.com/2025/01/23/us/politics/trump-davos-europe-tariffs.html

He's making good on his threats. He's already sanctioned a group of officials who helped draft the DSA:

https://www.npr.org/2025/12/24/nx-s1-5655855/trump-administration-bars-5-europeans-from-entry-to-the-u-s-over-alleged-censorship

And he's ordered his tech companies to turn over the private emails and messages of other European officials, so he can identify the ones most dangerous to US tech plunder and sanction them, too:

https://www.politico.eu/article/us-congress-judiciary-committee-big-tech-private-communication-eu-officials/

The quislings and appeasers in the Commission who've been spooked by Trump's belligerence (or tempted by offers of cushy jobs in Big Tech after they leave public service) are selling out the EU's future. Caving to Trump won't make him more favorably disposed to Europe or Europeans. Trump treats every capitulation as a sign of weakness that signals that he can safely ignore his end of the bargain and demand twice as much. For Trump, the "art of the deal" can be summed up in one word: reneging.

Within the EU, there's fury at the Commission's announcement of "dialogue." As Politico's Milena Wälde reports, lawmakers like Alexandra Geese (Greens) say that this is a move that eliminates the "sovereign path for Europe" by letting tech giants "grade their own homework." She calls it a "fatal decision for our companies and our democracy."

Moving to the post-American internet is hard – but it will only get harder. Sure, Europe could wait for the next crisis to let go of the Big Tech vine and grab the Eurostack one, but that next crisis will be far, far worse. The EU can't afford to wait for Trump to brick one or more of its member states to (finally, at long last) take this threat seriously:

https://pluralistic.net/2026/01/01/39c3/#the-new-coalition


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#10yrsago Among a Thousand Fireflies: children’s book shows the sweet, alien love stories unfolding in our own backyards https://memex.craphound.com/2016/04/01/among-a-thousand-fireflies-childrens-book-shows-the-sweet-alien-love-stories-unfolding-in-our-own-backyards/

#10yrsago After biggest bribery scandal in history, police raids and investigations https://www.smh.com.au/business/police-raids-and-more-revelations-the-fallout-of-the-unaoil-scandal-20160401-gnw9mx.html

#10yrsago Bernie Sanders’ South Bronx rally, featuring Rosario Dawson, Spike Lee, and Residente https://www.c-span.org/program/campaign-2016/senator-bernie-sanders-campaign-rally-in-south-bronx/437114

#10yrsago Freshman Missouri Rep almost made it 3 months before introducing bill urging members to say “fiscal,” not “physical” https://www.washingtonpost.com/news/the-fix/wp/2016/03/31/hero-lawmaker-urges-colleagues-to-stop-saying-physical-when-they-mean-fiscal/

#10yrsago Indiana women phone the governor’s office to tell him about their periods https://web.archive.org/web/20160401170206/https://fusion.net/story/286941/periods-for-pence-indiana-women-calling-governor/

#10yrsago United pilot orders Arab-American family off his flight for “safety” https://www.nbcchicago.com/news/national-international/united-airlines-arab-american-plane/58370/

#10yrsago 33 state Democratic parties launder $26M from millionaires for Hillary https://www.counterpunch.org/2016/04/01/how-hillary-clinton-bought-the-loyalty-of-33-state-democratic-parties/

#10yrsago White SC cops pull black passenger out of car, take turns publicly cavity-searching him https://www.washingtonpost.com/news/the-watch/wp/2016/04/01/video-shows-white-cops-performing-roadside-cavity-search-of-black-man/

#5yrsago The zombie economy and digital arm-breakers https://pluralistic.net/2021/04/02/innovation-unlocks-markets/#digital-arm-breakers

#5yrsago Ontario's drug-dealer premier is shockingly bad at distributing vaccines https://pluralistic.net/2021/04/01/incompetent-drug-dealer/#what-a-dope

#1yrago What's wrong with tariffs https://pluralistic.net/2025/04/02/me-or-your-lying-eyes/#spherical-cows-on-frictionless-surfaces

#1yrago Anyone who trusts an AI therapist needs their head examined https://pluralistic.net/2025/04/01/doctor-robo-blabbermouth/#fool-me-once-etc-etc


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. First draft complete. Second draft underway.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

08:28

Open source office suites erupt in forking and licensing drama [OSnews]

You’d think if there was one corner of the open source world where you wouldn’t find drama it’d be open source office suites, but it turns out we could not have been more wrong. First, there’s The Document Foundation, stewards of LibreOffice, ejecting a ton of LibreOffice contributors.

In the ongoing saga of The Document Foundation (TDF), their Membership Committee has decided to eject from membership all Collabora staff and partners. That includes over thirty people who have contributed faithfully to LibreOffice for many years. It is interesting to see a formal meritocracy eject so many, based on unproven legal concerns and guilt by association. This includes seven of the top ten core committers of all time (excluding release engineers) currently working for Collabora Productivity. The move is the culmination of TDF losing a large number of founders from membership over the last few years with: Thorsten Behrens, Jan ‘Kendy’ Holesovsky, Rene Engelhard, Caolan McNamara, Michael Meeks, Cor Nouws and Italo Vignoli no longer members. Of the remaining active founders, three of the last four are paid TDF staff (of whom none are programming on the core code).

↫ Michael Meeks

The end result seems to be that Collabora is effectively forking LibreOffice, which feels like we're back where we were 15 years ago when LibreOffice forked from OpenOffice. There seems to be a ton of drama and infighting here that I'm not particularly interested in, but it's sad to see it result in needless complications for developers, end users, and distributors alike.

As if this wasn't enough, there's also forking drama in OnlyOffice land, the other open source office suite, licensed under the AGPL. This open source office suite has been forked by Nextcloud and IONOS into Euro-Office, in pursuit of digital sovereignty in the EU. It's also not an entirely unimportant detail that OnlyOffice is Russian, with most of its developers residing in Russia.

Anyway, the OnlyOffice team has not taken this in stride, claiming there's a violation of the AGPL license going on here, specifically because OnlyOffice adds contradictory attribution terms on top of the AGPL. It's a complicated story, but most experts in this area seem to disagree with OnlyOffice's interpretation.

We’re in for another messy time.

How Microsoft vaporized a trillion dollars [OSnews]

This is the first of a series of articles in which you will learn about what may be one of the silliest, most preventable, and most costly mishaps of the 21st century, where Microsoft all but lost OpenAI, its largest customer, and the trust of the US government.

↫ Axel Rietschin

It won’t take long into this series of articles before you start wondering how anyone manages to ship anything at Microsoft. If even half of this is accurate, this company should be placed under some sort of external oversight.

06:21

Urgent: Voting by mail [Richard Stallman's Political Notes]

US citizens: call on Congress to protect the USPS for November's election.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

03:00

Dirk Eddelbuettel: Sponsor me for Tour de Shore 2026 to support MFA [Planet Debian]

tour de shore 2026

On June 19 and 20, I will cycle a little over 100 miles from downtown Chicago and its wonderful Millennium Park to New Buffalo, Michigan, as part of the Tour de Shore 2026. The ride passes through northwest Indiana and the extended Indiana Dunes National Park, ending the next morning in the southwestern Michigan town of New Buffalo. I rode Tour de Shore once before in 2024 and had a generally wonderful time (even considering some soreness after a century of miles over 1 1/2 days).

Tour de Shore is riding in support of Maywood Fine Arts Center, a local arts and sports center in Maywood, Illinois, a suburb one over from where I live and hence just a few good miles west of downtown. Maywood, Illinois is home to legends such as the late John Prine as well as several NBA players such as player and coach Doc Rivers.

 

tour de shore 2026 donation page

But Maywood, Illinois is also a little less well off than other western suburbs. The Maywood Fine Arts Center is simply legendary in what they do for this community (and surrounding communities), especially the youth support. They can use a dollar or two. Their story about Tour de Shore is worth a read too for background and motivation.

I have bootstrapped my donation page with a dollar for each mile to be cycled. It would be simply terrific if you could join me. A nickel, a dime, or a quarter per mile cycled would help. Multiples of that help too: more is of course still always better.

Anything you can afford will go a long way towards a worthy goal in a community that could use the help.

Oh, and if you are local to the area, I believe you can still register for Tour de Shore 2026. So see you out there in June? And if not, maybe help with a dollar or two?

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog.

00:42

Friday, 03 April

23:56

22:35

Friday Squid Blogging: Jurassic Fish Chokes on Squid [Schneier on Security]

Here's a fossil of a 150-million-year-old fish that choked to death on a belemnite rostrum: the hard, internal shell of an extinct, squid-like animal.

Original paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

22:28

A Kitten’s First Good Friday [Whatever]

Saja is contemplative about it, as he should be.

A reflective Good Friday, Easter, and/or Passover to you, if you celebrate any of these, and have a lovely weekend no matter who you are.

— JS

21:35

Big-endian testing with QEMU [OSnews]

I assume I don’t have to explain the difference between big-endian and little-endian systems to the average OSNews reader, and while most systems are either dual-endian or (most likely) little-endian, it’s still good practice to make sure your code works on both. If you don’t have a big-endian system, though, how do you do that?

When programming, it is still important to write code that runs correctly on systems with either byte order (see for example The byte order fallacy). But without access to a big-endian machine, how does one test it? QEMU provides a convenient solution. With its user mode emulation we can easily run a binary on an emulated big-endian system, and we can use GCC to cross-compile to that system.

↫ Hans Wennborg

If you want to make sure your code isn’t arbitrarily restricted to little-endian, running a few tests this way is worth it.
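To make that concrete, here is a minimal sketch (mine, not from the linked article) of the kind of program you could run under QEMU's user-mode emulation: it just reports the byte order it observes at runtime. The cross-compiler and QEMU binary names in the comments (for example powerpc64-linux-gnu-g++ and qemu-ppc64) vary by distribution and are assumptions, not a recipe.

// endian_check.cpp - report the byte order of whatever machine runs this.
//
// Hypothetical build/run steps (binary names vary by distro; treat as assumptions):
//   g++ -static -o endian_check endian_check.cpp              # native build
//   powerpc64-linux-gnu-g++ -static -o endian_check_be endian_check.cpp
//   qemu-ppc64 ./endian_check_be                              # big-endian binary via user-mode emulation
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    std::uint32_t value = 0x01020304;
    unsigned char bytes[sizeof value];
    std::memcpy(bytes, &value, sizeof value); // inspect how the integer is laid out in memory

    if (bytes[0] == 0x01)
        std::cout << "big-endian\n";
    else if (bytes[0] == 0x04)
        std::cout << "little-endian\n";
    else
        std::cout << "unexpected byte order\n";
}

If your code handles byte order correctly, your real test suite should pass the same way under the emulated big-endian target as it does natively.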

20:28

Stage Delights [Penny Arcade]

It's a meme with a very particular clientele: there is always, always something fucked up with Gabe's setup when he tries to do the Make-A-Strip. During the Surface era, it would reliably try to do a system update as soon as we would start the panel. The setup has coalesced these days around a Framework laptop and the lower tier XPPen Artist Pro, but he forgot his dongle so the puck he brought - the little device he uses to perform the somatic components of the Art spell - was inert. The screen could be manipulated physically, a feature he never even knew about, but when you try to shift the art around on there your work slides around like it's on an air hockey table. It wasn't optimal, but there were dark chuckles and schadenfreudes out there, so it occurred to me: is there a way to leverage even greater torments? We also dish up a truly ancient reference in panel one for all of those newly traveling through the archive.

20:00

How can I use Read­Directory­ChangesW to know when someone is copying a file out of the directory? [The Old New Thing]

A customer was using Read­Directory­ChangesW in the hopes of receiving a notification when a file was copied. They found that when a file was copied, they received a FILE_NOTIFY_CHANGE_LAST_ACCESS, but only once an hour. And they also got that notification even for operations unrelated to file copying.

Recall that Read­Directory­ChangesW and Find­First­Change­Notification are for detecting changes to information that would appear in a directory listing. Your program can perform a Find­First­File/Find­Next­File to cache a directory listing, and then use Read­Directory­ChangesW or Find­First­Change­Notification to be notified that the directory listing has changed, and you have to invalidate your cache.
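As a concrete illustration, here is a minimal synchronous sketch (not the customer's code; the directory path is made up) of that cache-invalidation pattern. Note that the notify filters name things a directory listing can show: names, sizes, timestamps, attributes.

// Minimal sketch: invalidate a cached directory listing when the
// listing-visible state of a (made-up) directory changes.
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE dir = CreateFileW(L"C:\\some\\dir", FILE_LIST_DIRECTORY,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        nullptr, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    if (dir == INVALID_HANDLE_VALUE) return 1;

    alignas(DWORD) BYTE buffer[64 * 1024];
    DWORD bytesReturned;

    while (ReadDirectoryChangesW(dir, buffer, sizeof buffer,
        FALSE, // this directory only, not the subtree
        FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_DIR_NAME |
        FILE_NOTIFY_CHANGE_SIZE | FILE_NOTIFY_CHANGE_LAST_WRITE |
        FILE_NOTIFY_CHANGE_ATTRIBUTES,
        &bytesReturned, nullptr, nullptr)) // synchronous: blocks until something changes
    {
        std::puts("directory listing changed - rebuild the cached listing");
        // Walk the FILE_NOTIFY_INFORMATION records in buffer if you care
        // exactly which entries changed; here we just invalidate everything.
    }

    CloseHandle(dir);
    return 0;
}

Nothing in those notification records says why a file was read or written, which is the heart of the customer's problem.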

But there are a lot of operations that don’t affect a directory listing.

For example, a program could open a file in the directory with last access time updates suppressed. (Or the volume might have last access time updates suppressed globally.) There is no change to the directory listing, so no event is signaled.

Functions like Read­Directory­ChangesW and Find­First­Change­Notification operate at the file system level, so the fundamental operations they see are things like “read” and “write”. They don’t know why somebody is reading or writing. All they know is that it’s happening.

If you are a video rental store, you can see that somebody rented a documentary about pigs. But you don’t know why they rented that movie. Maybe they’re doing a school report. Maybe they’re trying to make illegal copies of pig movies. Or maybe they simply like pigs.

If you are the file system, you see that somebody opened a file for reading and read the entire contents. Maybe they are loading the file into Notepad so they can edit it. Or maybe they are copying the file. You don’t know. Related: If you let people read a file, then they can copy it.

In theory, you could check, when a file is closed, whether all the write operations collectively combine to form file contents that match a collective set of read operations from another file. Or you could hash the file to see if it matches the hash of any other file.¹ But these extra steps would get expensive very quickly.

Indeed, we found during user research that a common way for users to copy files is to load them into an application, and then use Save As to save a copy somewhere else. In many cases, this “copy” is not byte-for-byte identical to the original, although it is functionally identical. (For example, it might have a different value for Total editing time.) Therefore, detecting copying by comparing file hashes is not always successful.²

If your goal is to detect files being “copied” (however you choose to define it), you’ll have to operate at another level. For example, you could use various data classification technologies to attach security labels to files and let the data classification software do the work of preventing files from crossing security levels. These technologies usually work best in conjunction with programs that have been updated to understand and enforce these data classification labels. (My guess is that they also use heuristics to detect and classify usage by legacy programs.)

¹ It would also generate false positives for files that are identical merely by coincidence. For example, every empty file would be flagged as a copy of every other empty file.

Windows 2000 Server had a feature called Single Instance Store which looked for identical files, but it operated only when the system was idle. It didn’t run during the copy operation. This feature was subsequently deprecated in favor of Data Deduplication, which looks both for identical files as well as identical blocks of files. Again, Data Deduplication runs during system idle time. It doesn’t run during the copy operation. The duplicate is detected only after the fact. (Note the terminology: It is a “duplicate” file, not a “copy”. Two files could be identical without one being a copy of the other.)

² And besides, even if the load-and-save method produces byte-for-byte identical files, somebody who wanted to avoid detection would just make a meaningless change to the document before saving it.

The post How can I use Read­Directory­ChangesW to know when someone is copying a file out of the directory? appeared first on The Old New Thing.

19:14

17:42

17:28

Link [Scripting News]

WordPress could have an active developer community creating writing tools for WordPress users. I also want WordPress to form the foundation of a new social network, one that supports all the writing features of the web. With really nice user interfaces for people to choose from. That's a new ecosystem. It may form around ChatGPT and Claude etc. Or it could start with WordPress. I think I can get this bootstrapped, but I need people to work with. That's the summary of what I'm about at this point in 2026.

16:35

[$] Ubuntu's GRUBby plans [LWN.net]

GNU GRUB 2, mostly just referred to as GRUB these days, is the most widely used boot loader for x86_64 Linux systems. It supports reading from a vast selection of filesystems, handles booting modern systems with UEFI or legacy systems with a BIOS, and even allows users to customize the "splash" image displayed when a system boots. Alas, all of those features come with a price; GRUB has had a parade of security vulnerabilities over the years. To mitigate some of those problems, Ubuntu core developer and Canonical employee Julian Andres Klode has proposed removing a number of features from GRUB in Ubuntu 26.10 to improve GRUB's security profile. His proposal has not been met with universal acclaim; many of the features Klode would like to remove have vocal proponents.

15:49

No kidding: Gentoo GNU/Hurd [LWN.net]

On April 1, the Gentoo Linux project published a blog post announcing that it was switching to GNU Hurd as its primary kernel as an April Fool's joke. While that is not true, the project has followed up with an announcement of a new Gentoo port to the Hurd:

Our crack team has been working hard to port Gentoo to the Hurd and can now share that they've succeeded, though it remains still in a heavily experimental stage. You can try Gentoo GNU/Hurd using a pre-prepared disk image. The easiest way to do this is with QEMU [...]

We have developed scripts to build this image locally and conveniently work on further development of the Hurd port. Release media like stages and automated image builds are future goals, as is feature parity on x86-64. Further contributions are welcome, encouraged, and needed. Be patient, expect to get your hands dirty, anticipate breakage, and have fun!

Oh, and Gentoo GNU/Hurd also works on real hardware!

Text for the April Fool's post is available at the bottom of the real announcement.

15:21

Joerg Jaspert: Building a house - 1 year in [Planet Debian]

Haven't written here about it, but last March we finally started on our journey to get our own house built, so we can move out of the rented flat here.

That will be a big step, not just the actual building, but also the moving - I have been living at this one single place for 36 years now.

If you can read German, there is a dedicated webpage where I sometimes write about the process. It has much more detail (and way more ramblings) than the following part.

If you can't read German, a somewhat short summary follows. Yes, it's still a lot of text, but shortened.

What? Why now?

Our current flat has 83m² - which simply isn't enough space. And the number of rooms also doesn't fit anymore. But it is hard to find a place that fits our requirements (which do include location).

Moving to a different rented place would also mean a changed amount of rent. And nowadays that would be a huge increase (my current rent is still the price from about 30 years ago!).

So if we are going to pay more, we could just as well pay for something we own instead. And both my wife and I had changes in our jobs that made it possible for us now, so we started looking.

Market

Brrrr, looking is good, actually finding something that fits - not so much. We never found an offer that fit. Space-wise, sure. But then location was off, or the price was idiotically high. Location fit, but then size was a joke, and guess about the price… Who needs 200 square meters with 3 rooms? Entirely stupid design choices there. Or how about 40 square meters of hallway - with 50m² of tiny rooms around. What are they smoking? Oh, there, useful size, good rooms - but now you want more money than a kidney is worth, or something. Thanks, no.

New place

In February 2025 we finally got lucky and found a (newly opened) area with a large number of places to build a house on. We had multiple talks with someone from one of the companies developing that area (there are two you can select from), then talked with banks and signed a contract in March 2025. We were promised that actual house construction would happen in the first quarter of 2026 and be finished in the second quarter.

House type

There are basically 2 ways of building a new house (that matter here). The first is called "Massivhaus", the second is called "Fertighaus" in German, roughly translating to solid and prefabricated. The latter is commonly a wood-based construction, though it doesn't need to be. The important part of it is the prefabrication: walls and stuff get assembled in a factory somewhere and then transported to your place, where they play "big kid lego" for a day and suddenly a house is there.

A common thought is that "prefabricated" is faster, but that is only half true. Sure, the actual work on site is way shorter - usually one or two days and the house is done - while a massive construction usually takes weeks to build up. But that is only a tiny part of the time needed; the major part goes into planning and waiting, and there it doesn't matter what material you end up with.

Money fun

Last year already wasn't the best time to start a huge loan - but isn't it always "a few years ago would have been better"? So we had multiple talks with different banks and specialised consultants until we found something that we thought was good for us.

Thinking about it now - we should have put even more money on top as "reserve", but who could have thought that 2026 would turn into such a shitshow? It does not help at all, quite the contrary. And that damn lotto game always ends up with the wrong numbers, meh.

Plans and plans and more plans - and rules

For whatever reason, you can not just go and put something on your ground and be happy. At least not if you are part of the normal people and not enormously rich. There is a large set of rules to follow. Usually that is a good thing, even though some rules are sometimes hard to understand.

In Germany, besides the usual laws, we have something that is called "Bebauungsplan", which translates to "development plan" (I don't know if that carries the right meaning; it's a plan on what and how may be built, which can have really detailed specifications in it). It basically tells you every aspect on top of the normal law that you have to keep in mind.

In our case we have the requirement of 2 full floors and CAN have a third, smaller one on top; it limits how high the house can be and also how high our ground floor may be compared to the street. It regulates where on the property we may build and how much ground we may cover with the house, it gives a set of colors we are allowed to use, it demands a flat roof that we must have as a green roof, and it has a number of other things that aren't important enough to list here. If you do want to see the full list, my German post on it has all the details that matter to us.

With all that stuff in mind - off to plans. I wouldn't have believed how many details there are to take in. Room sizes are simple, but you also have to arrange them for ideal usage of the sun and for useful paths through the house, while keeping in mind that water needs to flow through and out. Putting a bathroom right atop a living room means a water pipe needs to go down there. Switch the bathroom to the other side of the house, and it suddenly is above the kitchen - which means you can connect its pipes to the ones from the kitchen, which is much preferable to going through the living room. And lots more such things.

It took us until nearly the end of October to finalize the plans! And we learned a whole load from it. We started with a lot of wishes. The planner tried to make them work. Then we changed our minds. Plans changed. Minds changed again. Comparing the end result with the first draft, we changed most of the ground floor around, with only the stairs and the entrance door at the same position. Fewer changes for the upper floor, but still enough.

Side quests

The whole year was riddled with something my son named side quests. We visited a construction exhibition near us, and we went to the house builder's factory and took a look at how they work. We went to many different other companies that do SOME type of work which we will need soon, say inside floors, painters, kitchen and more.

Of course the most important side quest was a visit to the notary to finalize the contracts, especially for the plot of land (in Germany you must have a notary for that to get entered into the government's books). That creates lots of fees, of course, for the notary and also the government (both fees and taxes here).

Building permit

We had been lucky and only needed a small change to the plans to get the building permit - and the second part, the wastewater permit (yes, you need a separate one for this), also got through without trouble.

Choices, so many of them

So in January we finally had an appointment for something that's called "Bemusterung", which badly translates to "sampling". Basically two days at the house builder's factory to select everything needed for the house that you don't decide in the plans. Doors, inside and out, and their type and color and handles. The same for the windows and the blinds and the protection level you want the windows to have. Decide about stairs, the design for the sanitary installations - and also the height of the toilet! - and the tiles to put into the bathrooms. Decisions on all the tech needed (heating system, ventilation and whatnot).

Two days, busy ones - and you can easily spend a lot of extra money here if you aren’t careful. We managed to get “out of it” with only about 4000€ extra, so pretty good.

Electro and automation

Now, here I am special. Back when I was young, the job I learned was electrician. So here I have very detailed wishes. I am also running lots of automation in my current flat - obviously the new house should be better than that. So I have a lot of ideas and thoughts on it, and this is entirely extra and certainly out of the ordinary compared to what the house builder usually sees.

Which means I do all of that on my own. Well, the planning and some of the work; I must have a company at hand for certain tasks, as it is required by some rules. But they will do what I planned, as long as I don't violate regulations.

Which means the whole electrical installation is … different. Entirely planned for automation, using KNX for it. I am so happy to ditch Homeassistant and the load of Homematic, Zigbee and ZWave based wireless things.

Ok, Homeassistant is a nice thing - it can do a lot. And it can bridge between about any system you can find. But it is a central single point of failure. And it is a system that needs constant maintenance. Not touched for a while? Plan for a few hours playing update whack-a-mole. And often enough a component here or there breaks with an update. Can be fixed, but takes another hour or two.

So I am changing. Away from wireless based stuff. To wires. To a system that's been a standard for decades already. And one that works entirely without a SPOF. (Yes, you can add one here too.) And, most important, should I ever die - it can easily be maintained by anyone out there dealing with KNX, which is a large number of people and companies. Without digging through dozens of specialised integrations and whatnot.

I may even end up with Homeassistant again - but that will entirely be as a client. It won’t drive automations. It won’t be the central point to do anything for the house. It will be a logging and data collecting thing that enables me to put up easy visualizations. It may be an easy interface for smartphones or tablets to control parts of the house, for those parts where one wants this to happen. Not the usual day-to-day stuff, extras on top.

Actual work happening

Since March there is finally action visible. The base of the house is getting built. On Wednesday, 1st April, we finally got the base slab poured on the construction site, and in another 10 days the house is getting delivered and built up. A 40-ton mobile crane will be there.

15:14

Link [Scripting News]

Feature request for WordPress. If an item doesn't have a title, you can do better than (no title) in the Posts list. Grab the first N chars of the body, or add a tool tip with the same text. I write a lot of "singular" posts, ie posts without titles. This is what I see on the Posts page.

Link [Scripting News]

Does EmDash have a feed reader built in??

Link [Scripting News]

Suggestion for feed reader devs. Put a Check Now button on the page for a single feed. It shouldn't overburden your system because it's just doing an HTTP read and a little parsing. Not much more work than reloading a page in the browser. The benefit is you can see a current view of the news according to a specific feed without waiting. Makes the web roughly instantaneous for every feed, even ones that don't support rssCloud. FeedLand has such a button.

Good morning campers [Scripting News]

Things are changing a lot. Huge flow of ideas, and some catching up to do. Mind bombs in every direction.

Last night while watching sports I learned via ChatGPT about MCP.

Here's what it can do and people *are* using it for this

You could turn ChatGPT into an easy editor for WordPress posts.

Just as I have developed the habit of getting it to create a handoff.md file when I'm done with a session, I could write something with ChatGPT helping - I don't ever do that myself, but I might, if it were easy - and when I'm ready to publish, I'd say "Please publish this on my daveverse site now." I might specify a category or two, or set defaults; it's good at that stuff. I've taught Claude to write code in my style, so I can maintain it (to answer Aral Balkan's question on Mastodon).

Little hierarchies everywhere [Scripting News]

We create little hierarchies everywhere we go.

So many places. I have no room for new ones, yet I have to make room because there are people there I want to work with. Now I have to manage it.

Imagine an alien came to Earth and asked why we don't just create a way for a little hierarchy in one place to appear wherever you want it.

It's not out of reach, it would take two or three developers with enough imaginative users to get the ball rolling.

Write down the features you'd have to support, concisely and simply, and provide conventions for making those hierarchies accessible through a very simple format, in JSON or XML or anything isomorphic, and then we start building.

And start releasing apps that work together. That's what I want to do.

WordLand is supposed to be the first such app. But maybe I need to go even simpler for example code. Thinking about it.

The aliens were confused by the inefficient way we were organizing our ideas.

15:07

Free Software Directory meeting on IRC: Friday, April 10, starting at 12:00 EDT (16:00 UTC) [Planet GNU]

Join the FSF and friends on Friday, April 10 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.

Error'd: Clever domain name here [The Daily WTF]

An anonymous cable-puller wrote "Reading a long specification manual. The words "shall" and "shall not" have specific meaning, and throughout the document are in bold italic. Looks like someone got a bit shall-ow with their search-and-replace skills."


Picky jeffphi attends to details. "Apparently this recruiter doesn't have a goal or metric around proper brace selection and matching." You're hired.


UGG.LI admins highlighted "even KFC hat Breakpoints deployed in Prod now ..." I wanted to say something funny about Herren Admins' Handle but reminded myself of John Scalzi's quote about the failure case of smartass so I refrained. You might be funnier than I.


Smarter still, Steve says "A big company like Google surely has a huge QA staff and AI bots to make sure embarrassing typos don't slip through, right? You wouldn't want to damage you reputation..."


I'll bet Pascal didn't expect this, eh? "Delivered, but On the way, Searching for a driver, but Asdrubal"


[Advertisement] Keep all your packages and Docker containers in one place, scan for vulnerabilities, and control who can access different feeds. ProGet installs in minutes and has a powerful free version with a lot of great features that you can upgrade when ready. Learn more.

Security updates for Friday [LWN.net]

Security updates have been issued by AlmaLinux (freerdp, grafana, kernel, rsync, and thunderbird), Debian (chromium, inetutils, and libpng1.6), Fedora (bind9-next, nginx-mod-modsecurity, and openbao), Mageia (firefox, nss and thunderbird), Red Hat (container-tools:rhel8), SUSE (conftest, dnsdist, ignition, libsoup, libsoup2, LibVNCServer, libXvnc-devel, opensc, ovmf-202602, perl-Crypt-URandom, python-tornado, python311-ecdsa, python311-Pygments, python315, tar, and wireshark), and Ubuntu (cairo, jpeg-xl, linux, linux-aws, linux-aws-6.17, linux-gcp, linux-gcp-6.17, linux-hwe-6.17, linux-realtime, linux, linux-aws, linux-aws-hwe, linux-kvm, linux-oracle, linux, linux-aws, linux-gcp, linux-gke, linux-gkeop, linux-ibm, linux-lowlatency, linux-nvidia, linux-raspi, linux-fips, linux-fips, linux-aws-fips, linux-fips, linux-aws-fips, linux-gcp-fips, and linux-realtime, linux-realtime-6.8, linux-raspi-realtime).

14:28

Can AI bots write maintainable code? [Scripting News]

This is something we can and should research.

Let's give one of the AI apps a fairly good idea for an app we want to use, and help it -- not by coding, just by answering questions about how it will work, and iterating over the product until it works like we want it. Something simple, like perhaps a text editor for Mastodon. Something that isn't squished in a tiny little text box, and has icons for bold, underline, links, etc. It could be useful.

Then let's look at the code with an open mind. I think I've given it enough examples of good maintainable code that I could get it to produce maintainable code.

This was in reply to a Mastodon post by Aral Balkan.

The Cathedral, the Bazaar, and the Winchester Mystery House [Radar]

The following article originally appeared on Drew Breunig’s blog and is being republished here with the author’s permission.

In 1998, Eric S. Raymond published the founding text of open source software development, The Cathedral and the Bazaar. In it, he detailed two methods of building software:

  • The cathedral model is carefully planned, closed-source, and managed by an exclusive team of developers.
  • The bazaar model is open, transparent, and community-driven.

The bazaar model was enabled by the internet, which allowed for distributed coordination and distribution. More people could contribute code and share feedback, yielding better, more secure software. “Given enough eyeballs, all bugs are shallow,” Raymond wrote, coining Linus’s law.

The ideas crystallized in The Cathedral and the Bazaar helped kick off a quarter-century of open source innovation and dominance.

But just as the internet made communication cheap and birthed the bazaar, AI is making code cheap and kicking off a new era filled with idiosyncratic, sprawling, cobbled-together software.

Meet the third model: The Winchester Mystery House.

Winchester Mystery House (image by HarshLight on Flickr, used here under a Creative Commons license)

The Winchester Mystery House

Located less than 10 miles southeast of the Computer History Museum, the Winchester Mystery House is an architectural oddity.

Following the death of her husband and mother-in-law, Sarah Winchester controlled a fortune. Her shares in the Winchester Repeating Arms Company, and the dividends they threw off, made it so Sarah could not only live in comfort but pursue whatever passion she desired. That passion was architecture.

Sarah didn’t build her mansion to house ghosts1; she built her mansion because she liked architecture. With no license, no formal training, in an era when women (even very rich women) didn’t have a path to practicing architecture, Sarah focused on her own home. She made up for her lack of license with passion and effectively unlimited funds.

Sarah built what she wanted. “At its largest the house had ~500 rooms.” Today it has roughly 160 rooms, 2,000 doors, 10,000 windows, 47 stairways, 47 fireplaces, 13 bathrooms, and 6 kitchens. Carved wood drapes the walls and ceilings. Stained glass is everywhere. Projects were planned, completed, abandoned, torn down, and rebuilt.

It was anything but aimless. And practical innovations ran throughout, including push-button gas lighting, an early intercom system, steam heating, and indoor gardens. The oddities that amuse today’s visitors were mostly practical accommodations for Sarah’s health (stairways with very small steps), functional designs no longer used (trap doors in greenhouses to route excess water), or quick fixes to damage from the 1906 earthquake.

Winchester passed in 1922. Nine months later, the house became a tourist attraction.

Today, many programmers are Sarah Winchester.

Claude Code's public GitHub activity

What happens when code is cheap

We aren’t as rich as Sarah Winchester, but when code is this cheap, we don’t need to be.

Jodan Alberts illustrated this recently, collecting and visualizing data detailing public GitHub commits attributed to Claude Code. That’s his data in the chart above, with Claude seeming to only accelerate through March.2

It’s hard to get a handle on individual usage though, so I went searching for a proxy and landed on the chart below:

Average net lines added per commit in Claude Code: 7-day average

After Opus 4.5 and recent work enabling Agent Teams, the average net lines added by Claude per commit is now smooth and steady at 1,000 lines of code per commit.3

1,000 lines of code per commit is roughly two orders of magnitude higher than what a human programmer writes per day.

If you search for human benchmarks, you’ll find many citing Fred Brooks’s The Mythical Man Month while claiming a good engineer might write 10 cumulative lines of code per day.4 If you further explore, you’ll find numbers higher than 10 cited, but generally less than 100.

Here’s a good anecdote from antirez on a Hacker News thread discussing the Brooks “quote”:

I did some trivial math. Redis is composed of 100k lines of code, I wrote at least 70k of that in 10 years. I never work more than 5 days per week and I take 1 month of vacations every year, so assuming I work 22 days every month for 11 months:

70000/(22 x 11 x 10) = ~29 LOC / day

Which is not too far from 10. There are days where I write 300-500 LOC, but I guess that a lot of work went into rewriting stuff and fixing bugs, so I rewrote the same lines again and again over the course of years, but yet I think that this should be taken into account, so the Mythical Man Month book is indeed quite accurate.

Six years after this comment, Claude is pushing 1,000 lines of code per commit.

So what do we do with all this cheap code?

Unfortunately, everything else remains roughly the same cost and roughly the same speed. Feedback hasn’t gotten cheaper; the “eyeballs” that guided the software developed by the bazaar haven’t caught up to AI.

There is only one source of feedback that moves at the speed of AI-generated code: yourself. You’re there to prompt, you’re there to review. You don’t need to recruit testers, run surveys, or manage design partners. You just build what you want and use what you build.

And that’s what many developers are doing with cheap code: building idiosyncratic tools for ourselves, guided by our passions, taste, and needs.

Sound familiar?

Winchester Mystery House, San Jose, California (image by The wub and used here under a Creative Commons license)

Welcome to the mystery house

Steve Yegge’s Gas Town is a Winchester Mystery House. It’s incredibly idiosyncratic and sprawling, rich with metaphors and hacks. It’s the perfect tool for Steve.

Jeffrey Emanuel’s Agent Flywheel is a Winchester Mystery House. A significant subset of tokenmaxxers decide they need to rebuild their dependencies in Rust; Jeff is one such example. His “FrankenSuite” includes Rust rewrites of SQLite, Node.js, btrfs, Redis, pandas, NumPy, JAX, and Torch.

Philip Zeyliger noted the pattern last week, writing, “Everyone is building a software factory.” But it goes beyond software. Gary Tan’s personal AI committee gstack is a Winchester Mystery House constructed mostly from Markdown.

Everywhere you look, there are Winchester Mystery Houses.

Each Winchester Mystery House is idiosyncratic. They are highly personalized. The tightly coupled feedback loop between the coding agent and the user yields software that reflects the developer’s desires. They usually lack documentation. To outsiders, they’re inscrutable.

Winchester Mystery Houses are sprawling. Guided by the needs of the developer, these tools tend to spread out, constantly annexing territory in the form of new functions and new repositories. Work is almost always additive. Code is added when it’s needed, bugs are patched in place, and countless appendages remain. There’s little incentive to prune when code is free.

And building a Winchester Mystery House should be fun. Coding agents turn everything into a side quest, and we eagerly join in. Building the perfect workflow is a passion for many devs, so we keep pushing.

Winchester Mystery Houses are idiosyncratic, sprawling, and fun. But does this mean we’re abandoning the bazaar?

A Crowded Market in Dhaka, Bangladesh (image by International Food Policy Research Institute / 2010 and used here on a Creative Commons license)

What happens to the bazaar?

What happens when we all tend to our mystery houses? When our free time is spent building tools just for ourselves, will we stop working on shared projects? Will we abandon the bazaar?

Probably not. The bazaar is packed right now, but not in a good way.

Code is cheap, so people are slamming open source repositories with agent-written contributions, in an attempt to pad their résumés or manifest their pet features. Daniel Stenberg ended bug bounties for curl after a deluge of poor submissions sapped reviewer bandwidth. It’s gotten so bad, GitHub recently added a feature to disable pull request contributions.

Anecdotally, I’m seeing good contributions pick up as well. They’re just drowned out by the slop. For what it’s worth, curl commits are dramatically up in the agentic era. And people are sharing what they build. A recent analysis by Dumky shows packages and repos rising in the last quarter.

There’s plenty of budget for both mystery houses and the bazaar when code is this cheap. The new challenge is developing systems and processes for managing the deluge. We don’t need eyeballs to find bugs in the software; we need eyeballs to find bugs before they reach the software.

In many ways this is the inverse of the bazaar model era. The internet made feedback and communal coordination faster, easier, and cheaper. The bazaar model has a high throughput of feedback (many eyeballs) but relatively high latency for modifications (file an issue, discuss, submit a PR, wait for review, etc.).

Coding agents, on the other hand, make implementation faster while feedback and coordination are unchanged. The Winchester Mystery House model sidesteps this by collapsing the feedback loop into one person: Latency is near zero, but throughput is just you. The bazaar, defined by communal work, can’t adopt this hack. Coding agents in the bazaar create a mess: implementation at machine speed hitting coordination infrastructure built for human speed. Which is why maintainers feel like they’re drowning.

We need new tools, skills, and conventions.

Lessons from the mystery house

Coding agents have dropped the cost of code so dramatically we’re entering a new era of software development, the first change of this magnitude since the internet kicked off open source software. Change arrived quickly, and it’s not slowing down. But in reviewing the Winchester Mystery House framework, I think we can take away a few lessons.

Lesson 1: The bazaar and Winchester Mystery Houses can coexist.

When listing example Winchester Mystery Houses, I didn’t mention OpenClaw, even though it is the defining example. I saved it for here because it nicely illustrates how Winchester Mystery Houses and the bazaar can coexist.

OpenClaw is incredibly modular and places few limitations on the user. It integrates 25 different chat and notification systems, plugs into most inference end points, and is built on the exceptionally flexible pi agent toolkit. This eager flexibility was embraced early—security and data protections be damned—but since its exponential adoption Peter Steinberger and the community have been steadily pushing improvements and fixes.

And like other breakout open source projects of yore, the ecosystem is adopting the best ideas and mitigating the worst aspects of OpenClaw. Countless alternate “claw” projects have emerged. (There’s NanoClaw, NullClaw, ZeroClaw, and more!) Companies have launched services to make claws easy or safer. Cloudflare launched Moltworker to make deployment easy, Nvidia shipped NemoClaw with a security focus, and Claude keeps adding claw-like features to its desktop app.

Lesson 2: Don’t sell the fun stuff.

One reason OpenClaw works so well in the bazaar is that it is a foundation for personal tools. Out of the box, a claw just sits there. It’s up to the user to determine what it does and how it does it, leveraging the connections and infrastructure OpenClaw provides. OpenClaw lets less experienced developers spin up their own Winchester Mystery Houses, while experienced devs get to leverage much of the common integrations and systems OpenClaw provides. Peter and team have done a great job drawing a line between the common core (what the bazaar works on) and what they leave up to the user: The boring, critical stuff is the job of the commons.

Thinking back to Sarah Winchester and her idiosyncratic, sprawling mansion, we see the same pattern. Sarah hired vendors! She used off-the-shelf parts! Her bathtubs, toilets, faucets, and plumbing weren’t crafted on site.

The boring stuff, the hard bits, or the things that have disastrous failure modes are the things we should collaborate on or employ specialists to handle. (Come to think of it, plumbing checks all three boxes.) This is the opportunity for open source software, dev tools, and software companies.

Don’t try to sell developers the stuff that’s fun, the stuff they want to build. Sell them the stuff they avoid or don’t want to take responsibility for. Sarah Winchester didn’t hire metalworkers to craft the pipes for her plumbing, but she did hire craftspeople to create hundreds of stained-glass windows to her specs.

Lesson 3: The limits of code are communication.

OpenClaw shows the bazaar remains relevant but also highlights the problems facing open source in the agentic era. Right now, there are 1,173 open pull requests and 1,884 new issues on the OpenClaw repo.

There is more code and more projects than we could ever review. The challenge now, for open source maintainers and users, is sifting through it all. How do we find the novel ideas that everyone should adopt and borrow?

OpenClaw is one of the successes, something we all noticed. And for it, the problem is processing the feedback. For the projects we’ll never find, the ones lost in the deluge, their problem is lack of feedback. You either find attention and drown in contributions or drown in the ocean of repos and never hear a thing.

The internet made coordination cheap and gave us the bazaar. Coding agents made implementation cheap and gave us the Winchester Mystery House. What we’re missing are the tools and conventions that make attention cheap, that let maintainers absorb contributions at machine speed and let good ideas surface among the noise. Until we figure this out, the bazaar will keep getting louder without getting smarter, and the best ideas in our mystery houses will be forgotten once we stop maintaining them.


Footnotes

  1. The lore that Winchester built her mansion to house ghosts killed by Winchester rifles is likely just gossip and marketing. There’s little evidence to support these claims. (99% Invisible has a good episode exploring Winchester, her house, and this lore.) ↩
  2. While editing this piece, Dumky published another analysis illustrating the production of coding agents. In it he shows a 280% increase in “Show HN” posts, a 93% increase in new GitHub repos, and a dramatic uptick in packages published to Crates.io. ↩
  3. Anthropic’s ability to stabilize this line is rather impressive. Claude Code is getting better at planning and better at chunking out work, enabling more effective subagent delegation. ↩
  4. Though this is likely an updated tweak of Brooks’s statement that an “industrial team” might write 1,000 “statements” per year. ↩

Feeds

Feed  RSS  Last fetched  Next fetched after
@ASmartBear XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
a bag of four grapes XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
Ansible XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
Bad Science XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
Black Doggerel XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
Blog - Official site of Stephen Fry XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
Charlie Brooker | The Guardian XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
Charlie's Diary XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
Chasing the Sunset - Comics Only XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
Coding Horror XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
Cory Doctorow's craphound.com XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
Cory Doctorow, Author at Boing Boing XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
Ctrl+Alt+Del Comic XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
Cyberunions XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
David Mitchell | The Guardian XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
Deeplinks XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
Diesel Sweeties webcomic by rstevens XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
Dilbert XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
Dork Tower XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
Economics from the Top Down XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
Edmund Finney's Quest to Find the Meaning of Life XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
EFF Action Center XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
Enspiral Tales - Medium XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
Events XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
Falkvinge on Liberty XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
Flipside XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
Flipside XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
Free software jobs XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
Full Frontal Nerdity by Aaron Williams XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
General Protection Fault: Comic Updates XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
George Monbiot XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
Girl Genius XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
Groklaw XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
Grrl Power XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
Hackney Anarchist Group XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
Hackney Solidarity Network XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
http://blog.llvm.org/feeds/posts/default XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
http://calendar.google.com/calendar/feeds/q7s5o02sj8hcam52hutbcofoo4%40group.calendar.google.com/public/basic XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
http://dynamic.boingboing.net/cgi-bin/mt/mt-cp.cgi?__mode=feed&_type=posts&blog_id=1&id=1 XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
http://eng.anarchoblogs.org/feed/atom/ XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
http://feed43.com/3874015735218037.xml XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
http://flatearthnews.net/flatearthnews.net/blogfeed XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
http://fulltextrssfeed.com/ XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
http://london.indymedia.org/articles.rss XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
http://pipes.yahoo.com/pipes/pipe.run?_id=ad0530218c055aa302f7e0e84d5d6515&amp;_render=rss XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
http://planet.gridpp.ac.uk/atom.xml XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
http://shirky.com/weblog/feed/atom/ XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
http://thecommune.co.uk/feed/ XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
http://theness.com/roguesgallery/feed/ XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
http://www.airshipentertainment.com/buck/buckcomic/buck.rss XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
http://www.airshipentertainment.com/growf/growfcomic/growf.rss XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
http://www.airshipentertainment.com/myth/mythcomic/myth.rss XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
http://www.baen.com/baenebooks XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
http://www.feedsapi.com/makefulltextfeed.php?url=http%3A%2F%2Fwww.somethingpositive.net%2Fsp.xml&what=auto&key=&max=7&links=preserve&exc=&privacy=I+accept XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
http://www.godhatesastronauts.com/feed/ XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
http://www.tinycat.co.uk/feed/ XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
https://anarchism.pageabode.com/blogs/anarcho/feed/ XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
https://broodhollow.krisstraub.comfeed/ XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
https://debian-administration.org/atom.xml XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
https://elitetheatre.org/ XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
https://feeds.feedburner.com/Starslip XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
https://feeds2.feedburner.com/GeekEtiquette?format=xml XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
https://hackbloc.org/rss.xml XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
https://kajafoglio.livejournal.com/data/atom/ XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
https://philfoglio.livejournal.com/data/atom/ XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
https://pixietrixcomix.com/eerie-cutiescomic.rss XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
https://pixietrixcomix.com/menage-a-3/comic.rss XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
https://propertyistheft.wordpress.com/feed/ XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
https://requiem.seraph-inn.com/updates.rss XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
https://studiofoglio.livejournal.com/data/atom/ XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
https://thecommandline.net/feed/ XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
https://torrentfreak.com/subscriptions/ XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
https://web.randi.org/?format=feed&type=rss XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
https://www.dcscience.net/feed/medium.co XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
https://www.DropCatch.com/domain/steampunkmagazine.com XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
https://www.DropCatch.com/domain/ubuntuweblogs.org XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
https://www.DropCatch.com/redirect/?domain=DyingAlone.net XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
https://www.freedompress.org.uk:443/news/feed/ XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
https://www.goblinscomic.com/category/comics/feed/ XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
https://www.loomio.com/blog/feed/ XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
https://www.newstatesman.com/feeds/blogs/laurie-penny.rss XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
https://www.patreon.com/graveyardgreg/posts/comic.rss XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
https://www.rightmove.co.uk/rss/property-for-sale/find.html?locationIdentifier=REGION^876&maxPrice=240000&minBedrooms=2&displayPropertyType=houses&oldDisplayPropertyType=houses&primaryDisplayPropertyType=houses&oldPrimaryDisplayPropertyType=houses&numberOfPropertiesPerPage=24 XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
https://x.com/statuses/user_timeline/22724360.rss XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
Humble Bundle Blog XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
I, Cringely XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
Irregular Webcomic! XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
Joel on Software XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
Judith Proctor's Journal XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
Krebs on Security XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
Lambda the Ultimate - Programming Languages Weblog XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
Looking For Group XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
LWN.net XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
Mimi and Eunice XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
Neil Gaiman's Journal XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
Nina Paley XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
O Abnormal – Scifi/Fantasy Artist XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
Oglaf! -- Comics. Often dirty. XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
Oh Joy Sex Toy XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
Order of the Stick XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
Original Fiction Archives - Reactor XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
OSnews XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
Paul Graham: Unofficial RSS Feed XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
Penny Arcade XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
Penny Red XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
PHD Comics XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
Phil's blog XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
Planet Debian XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
Planet GNU XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
Planet Lisp XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
Pluralistic: Daily links from Cory Doctorow XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
PS238 by Aaron Williams XML 19:14, Thursday, 09 April 20:02, Thursday, 09 April
QC RSS XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
Radar XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
RevK®'s ramblings XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
Richard Stallman's Political Notes XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
Scenes From A Multiverse XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
Schneier on Security XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
SCHNEWS.ORG.UK XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
Scripting News XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
Seth's Blog XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
Skin Horse XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
Spinnerette XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
Tales From the Riverbank XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
The Adventures of Dr. McNinja XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
The Bumpycat sat on the mat XML 19:35, Thursday, 09 April 20:15, Thursday, 09 April
The Daily WTF XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
The Monochrome Mob XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
The Non-Adventures of Wonderella XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
The Old New Thing XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
The Open Source Grid Engine Blog XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
The Stranger XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
towerhamletsalarm XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
Twokinds XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
UK Indymedia Features XML 19:21, Thursday, 09 April 20:03, Thursday, 09 April
Uploads from ne11y XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
Uploads from piasladic XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April
Use Sword on Monster XML 19:56, Thursday, 09 April 20:43, Thursday, 09 April
Wayward Sons: Legends - Sci-Fi Full Page Webcomic - Updates Daily XML 19:21, Thursday, 09 April 20:07, Thursday, 09 April
what if? XML 19:35, Thursday, 09 April 20:16, Thursday, 09 April
Whatever XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
Whitechapel Anarchist Group XML 19:49, Thursday, 09 April 20:38, Thursday, 09 April
WIL WHEATON dot NET XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
wish XML 19:56, Thursday, 09 April 20:41, Thursday, 09 April
Writing the Bright Fantastic XML 19:56, Thursday, 09 April 20:40, Thursday, 09 April
xkcd.com XML 19:56, Thursday, 09 April 20:39, Thursday, 09 April