Thursday, 19 February

19:07

The Big Idea: Gideon Marcus [Whatever]

On occasion, you know the ending of your story before you start writing. Most other times, you find the path as you go, each twisting turn appearing before you as you continue on your merry way. The latter seems to be the case for author Gideon Marcus, who says in his Big Idea that he wasn’t always sure how to wrap up his newest novel, Majera.

GIDEON MARCUS:

What’s the big idea with Majera? That’s a hard one, because there are lots of threads: the unstated, obvious, valued diversity of the future, which helps define the setting as the future. That’s a familiar technique—Tom Purdom pioneered it, and Star Trek popularized it. There’s a focus on relationships: found family, love in myriad combinations. There’s the foundation of science, a real universe underpinning everything.

But I guess what I associate with Majera most strongly is conclusion.

Starting an exciting adventure is easy. Finishing stories is hard. George R. R. Martin, Pat Rothfuss, and Hideaki Anno have all famously struggled with it. When Kitra and her friends first got catapulted ten light years from home in Kitra, I started them on a journey whose ending I only had the vaguest outline of. I had adventure seeds: the failing colony sleeper ship in Sirena, the insurrection in Hyvilma, and the dead planet in Majera, but the personal journeys of the characters I left up to them.

I know a lot of people don’t write the way I do. I think writers mirror the opposing schools of acting: on one end, the Method of sliding deep into character; on the other, George C. Scott’s completely external creation of an alternate personality. In the Scott school of writing, characters are puppets acting out an intricate dance created by the author. In the Method school of writing, of which I am a member, the characters have independent lives. I know that seems contradictory—how can fictional agglomerations of words achieve sentience?

And yet, they do! I didn’t plan Kitra and Marta’s rekindling of their relationship. Pinky’s jokes come out of the ether. Heck, I didn’t even come up with the solution that saved the ship in Kitra—Fareedh and Pinky did (people often congratulate me on how well I set up that solution from the beginning; news to me! I just write what the characters tell me to…)

All this is to say, I didn’t know how this arc of The Kitra Saga was going to end. But I knew it had to end well, it had to end satisfyingly, for the reader and for the characters. There had to be a reason the Majera crew would stop and take a breather from their string of increasingly exotic adventures. The worldbuilding! All of the little tidbits I’d developed had to be kept consistent: historical, scientific, character-related. There had to be a plausible resolution to the love pentangle that the Majera crew found themselves in, one that was respectful to all the characters and, more importantly, the reader’s sensitivities and credulity.

That’s why this book took longer to put to bed than all the others. It’s not the longest, but it was the hardest. Frankly, I don’t think I could even have written this book five years ago. I needed the life experience to fundamentally grok everyone’s internal workings, from Pinky’s wrestling with being an alien in a human world, to Peter’s coming to grips with his fears, to Kitra’s understanding of her role vis-à-vis her friends, her crew, her partners. In other words, I had to be 51 to authentically write a gaggle of 20-year-olds!

Beyond that, I had to, even in the conclusion, lay seeds for the rest of the saga, for there is a central mystery to the galaxy that has only been hinted at (not to mention a lot more tropes to subvert…)

Conclusions are hard. I think I’ve succeeded. I hope I’ve succeeded. I guess it’s for you to judge!


Majera: Amazon|Amazon (eBook)|Audible|Barnes & Noble|Bookshop|Kobo

Author socials: Website|Bluesky

18:07

Slog AM: Woman Killed by Driver in Capitol Hill Identified, Dog Enters Olympic Skiing, We’re Suing Trump Again [The Stranger]

The Stranger's morning news roundup. by Micah Yip

Woman Killed By Driver in Capitol Hill Identified: Her name was Lilliana Moreno. On Monday night, the 27-year-old was crossing East Pike Street when she was hit by a car making a right turn from Bellevue Avenue. She was trapped under the car for 20 minutes and died at the scene.

ICE Arrests: According to the Deportation Data Project, ICE arrested 2,000 people in Washington between late-January and mid-October of last year, a 140 percent increase from the same period in 2024. Roughly 47 percent of those arrested had no criminal history.

Seahawks for Sale: Paul G. Allen’s estate has begun the sale process, the team announced on Instagram. Allen’s will directed his sister/estate chair to sell all his sports holdings and donate the proceeds to “philanthropic efforts.” But don’t worry, the team is unlikely to leave the city.


8 Skiers Dead, 1 Missing: On Tuesday, a deadly avalanche in California overtook a group of 15 skiers and guides in the Sierra Nevadas near Lake Tahoe. Eight are dead and one is missing. Six were rescued and one was still in the hospital last night. An avalanche warning was issued Tuesday morning. Authorities are investigating the decision to proceed anyway.

College (Re)Bound: After a pandemic-era decline, community college enrollment in Washington has rebounded. New data from the Washington Student Achievement Council shows a 7.5 percent increase in community college enrollment between 2024 and 2025. Four-year universities aren’t so lucky—undergrad enrollment only rose 1 percent last year, and actually declined 7.5 percent among first-term freshmen. 

We’re Suing Trump Again: Attorney General Nick Brown and 14 other state attorneys general are suing the federal government for decimating clean energy programs created and funded by Congress. Trump is not supposed to do that!

Weather: It’s COLD! It’ll be mostly cloudy with a high near 40 and there’s a slight chance of rain and snow before 1 p.m. Tonight will also be cloudy with a low of 27.

Underdog: I have very little interest in the Winter Olympics. The only real clip I’ve watched is of this dog crashing a cross-country skiing course to join two skiers crossing the finish line. And really, it’s the only clip I need.

 

Ex-Prince Arrested: Andrew Mountbatten-Windsor was arrested today on suspicion of misconduct in public office for allegedly sharing confidential documents with Jeffrey Epstein. The former prince was stripped of his titles in October for his association with the convicted sex offender. 

Put it On My Card: Actually, don’t—I don’t want to pay extra. Starting March 1, you’ll be charged a 3 percent fee when you use a credit or debit card to pay for a Washington State Ferry fare. Added in the 2025-26 state transportation budget, the new fee is meant to offset the cost of processing card payments. Officials estimate it’ll bring in $7.4 million over the next two years. 

Lunar New Year Began Tuesday: Looking to celebrate? Here’s EverOut’s list of Lunar New Year events around the city.

Ramadan Also Began Tuesday: WBUR’s Here & Now put out this segment yesterday on the importance of the date fruit when breaking the fast, and follows reporter Hana Baba as she shops for Ramadan and talks with other Muslims about the date. 

17:42

Link [Scripting News]

President Obama going to the NBA All-Star game made the freaking All-Star game worth something. Perfect place for him to show up.

17:21

Cover Reveal: Monsters of Ohio [Whatever]

Just look at this cover for Monsters of Ohio. Look at it! It is amazing. I am so happy with it. It’s the work of artist Michael Koelsch (whose art has graced my work before, notably the Subterranean Press editions of the Dispatcher sequels Murder by Other Means and Travel by Bullet), and he’s knocked it out of the park. I am, in a word, delighted.

And what is Monsters of Ohio about? Here’s the current jacket copy for it:

In many ways Richland, Ohio is the same tiny, sleepy rural village it has been for the last 150 years: The same families, the same farms, the same heartland beliefs and traditions that have sustained it for generations. But right now times are especially hard, as social and economic forces inside and outside the community roil the surface of the once-placid town.

Richland, in other words, is primed to explode… just not the way that anyone anywhere could ever have expected. And when things do explode, well, that’s when things start getting really weird.

Mike Boyd left Richland decades back, to find his own way in the world. But when he is called back to his hometown to tie up some loose ends, he finds more going on than he bargained for, and is caught up in a sequence of events that will bring this tiny farm village to the attention of the entire world… and, perhaps, spell its doom.

Ooooooooooh! Doooooom! Perhaaaaaaaps!

If that was too much text for you, here is the two-word version: Cozy Cronenberg.

Yeah, it’s gonna be fun.

When can you get it? November 3 in North America and November 5 in the UK and most of the rest of the world. But of course you can pre-order this very minute at your favorite bookseller, whether that be your local indie, your nearby bookstore chain, or online retailer of your choice. Why wait! Put your money down! The book’s already written, after all. It’s guaranteed to ship!

Oh, and, for extra fun, here’s the author photo for the novel:

Yup, that pretty much sets the tone.

I hope you like Monsters of Ohio when you get a chance to read it. In November!

— JS

17:07

Seven stable kernels for Thursday [LWN.net]

Greg Kroah-Hartman has announced the 6.19.3, 6.18.13, 6.12.74, 6.6.127, 6.1.164, 5.15.201, and 5.10.251 stable kernels. As usual, each includes important fixes and users are advised to upgrade.

16:56

Link [Scripting News]

It's interesting to see the ATProto solution to a problem we solved in RSS-land a few years ago, how to include Markdown along with other source formats (HTML, OPML).

16:35

Exploring the signals the dialog manager uses for dismissing a dialog [The Old New Thing]

There are a few different built-in ways to close a dialog box in the classic Windows dialog manager. Let’s run them down.

First, there’s hitting the ESC key.

The ESC key, as with all keyboard navigation, is handled by the IsDialogMessage function. Assuming that the dialog control with focus did not use the WM_GETDLGCODE message to override default keyboard handling, the IsDialogMessage function converts the ESC to a simulated button click of whatever dialog control has the ID IDCANCEL. Specifically, the message is WM_COMMAND, the notification code is BN_CLICKED, the control ID is IDCANCEL, and the window handle is the handle to whatever dialog control has the ID IDCANCEL (or nullptr if there is no such control).

Exception: If there is a control whose ID is IDCANCEL, and that control is disabled, then the IsDialogMessage function merely beeps and otherwise ignores the ESC key.

Okay, what about the Close button in the title bar, the one that looks like an ×?

The Close button in the title bar, double-clicking the dialog box icon (if there is one), selecting Close from the system menu, and pressing Alt+F4 all behave the same way: They generate a WM_SYSCOMMAND message whose wParam & 0xFFF0 is SC_CLOSE. The default window procedure turns this into a WM_CLOSE message. The default dialog procedure responds to the WM_CLOSE in basically the same way that IsDialogMessage does: It generates a simulated button click of whatever dialog control has the ID IDCANCEL. Again, this is done by converting it to the WM_COMMAND message, with a notification code of BN_CLICKED, a control ID of IDCANCEL, and the handle to whatever dialog control has the ID IDCANCEL (or nullptr if there is no such control). It also has the same exception: If there is a control whose ID is IDCANCEL, and that control is disabled, then the default dialog procedure just beeps and otherwise ignores the message.
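The routing described above can be sketched as a toy model. The constants mirror the real Win32 values, but the Control class and route_dismiss function are hypothetical stand-ins for illustration, not the actual dialog manager:

```python
# Toy model of how the dialog manager routes ESC (and WM_CLOSE) to the
# control with ID IDCANCEL. Constants match the real Win32 values;
# everything else is a simplified stand-in.

IDCANCEL = 2
WM_COMMAND = 0x0111
BN_CLICKED = 0

class Control:
    def __init__(self, ctrl_id, enabled=True):
        self.id = ctrl_id
        self.enabled = enabled
        self.hwnd = object()  # stand-in for a window handle

def route_dismiss(controls):
    """Find the IDCANCEL control and either beep (if it's disabled) or
    synthesize a BN_CLICKED WM_COMMAND aimed at it."""
    cancel = next((c for c in controls if c.id == IDCANCEL), None)
    if cancel is not None and not cancel.enabled:
        return ("beep", None)  # disabled IDCANCEL: beep and eat the key
    hwnd = cancel.hwnd if cancel else None  # nullptr if no such control
    # wParam packs the notification code (high word) and control ID (low word)
    return ("message", (WM_COMMAND, (BN_CLICKED << 16) | IDCANCEL, hwnd))
```

Note that a missing IDCANCEL control still produces the simulated click (with a null handle), while a present-but-disabled one suppresses it entirely, exactly the asymmetry the exception describes.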

Now that we understand what happens, next time we can look at ways of customizing the behavior.

Bonus chatter: You can see from this that the dialog manager is wired to treat a control with the ID IDCANCEL as if it were a Cancel button, so if you have a Cancel button, give it the ID IDCANCEL. Conversely, if you have a control whose ID is IDCANCEL, it had better be a button if you know what’s good for you.

The post Exploring the signals the dialog manager uses for dismissing a dialog appeared first on The Old New Thing.

15:35

[$] Modernizing swapping: virtual swap spaces [LWN.net]

The kernel's unloved but performance-critical swapping subsystem has been undergoing multiple rounds of improvement in recent times. Recent articles have described the addition of the swap table as a new way of representing the state of the swap cache, and the removal of the swap map as the way of tracking swap space. Work in this area is not done, though; this series from Nhat Pham addresses a number of swap-related problems by replacing the new swap table structures with a single, virtual swap space.

14:49

openSUSE governance proposal advances [LWN.net]

Douglas DeMaio has announced that Jeff Mahoney's new governance proposal for openSUSE, which was published in January, is moving forward. The new structure would have three governance bodies: a new technical steering committee (TSC), a community and marketing committee (CMC), as well as the existing openSUSE board.

The discussions during the meeting proposed that the Technical Steering Committee should begin with five members with a chair elected by the committee. The group would establish clear processes for reviewing and approving technical changes, drawing inspiration from Fedora's FESCo model. Decisions for the TSC would use a voting system of +1 to approve, 0 for neutral, or -1 to block. A proposal passes without objection. A -1 vote would require a dedicated meeting, where a majority of attendees would decide the outcome. Objections must include a clear, documented rationale.

Discussions related to the Community and Marketing Committee would focus on outreach, advocacy, and community growth. It could also serve as an initial escalation point for disputes. If consensus cannot be reached at that level, matters would advance to the Board.

[...] No timeline for final adoption was announced. Project contributors will continue discussions through the GitLab repository and future community meetings.

Security updates for Thursday [LWN.net]

Security updates have been issued by AlmaLinux (edk2, glibc, gnupg2, golang, grafana, nodejs:24, and php), Debian (gimp and kernel), Fedora (fvwm3), Mageia (microcode and vim), Oracle (edk2, glibc, kernel, nodejs:24, and php), Red Hat (python-s3transfer), SUSE (abseil-cpp, avahi, azure-cli-core, fontforge, go1.24, go1.25, golang-github-prometheus-prometheus, libpcap, libsoup2, libxml2-16, mupdf, nodejs22, openCryptoki, openjpeg2, patch, python-aiohttp, python-Brotli, python-pip, python311-asgiref, rust1.93, and traefik), and Ubuntu (inetutils, libssh, linux-gcp, linux-gke, linux-hwe-6.8, linux-lowlatency-hwe-6.8, linux-intel-iotg-5.15, linux-xilinx-zynqmp, linux-lowlatency, linux-nvidia-lowlatency, and trafficserver).

Pluralistic: Six Years of Pluralistic (19 Feb 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



A screengrab from the first episode of 'The Prisoner,' showing Number Six (Patrick McGoohan) lying unconscious on the beach after being bowled over by a giant white sphere (a 'Rover'). The image has been altered; my head has been superimposed on McGoohan's body; TV scan lines have been added, and the image has been given a vertical 'ripple' of the sort that appears in a badly tuned broadcast TV signal.

Six years of Pluralistic (permalink)

Six years ago today, after 19 years with Boing Boing, during which time I wrote tens of thousands of blog posts, I started a new, solo blog, with the semi-ironic name "Pluralistic." I didn't know what Pluralistic was going to be, but I wasn't writing Boing Boing anymore, and I knew I wanted to keep writing the web in some fashion.

Six years and more than 1,500 posts later, I am so satisfied with how Pluralistic is going. I spent a couple of decades processing everything that seemed interesting or significant through a blog, which created a massive database (and mnemonically available collection of partially developed thoughts) that I'm now reprocessing as a series of essays that make sense of today in light of everything that I've thought about for my whole adult life, which are, in turn, fodder for books, both fiction and nonfiction. I call this "The Memex Method":

https://pluralistic.net/2021/05/09/the-memex-method/

"The Memex Method" is also the title of a collection of essays (from this blog) that I've sold to Farrar, Straus and Giroux, but that book keeps getting bumped because of other books I end up writing based on the work I do here, starting with last year's Enshittification. I'm now fully two books ahead of myself, with The Reverse Centaur's Guide to Life After AI coming in June, and The Post-American Internet in early 2027 (in addition to two graphic novels and a short story collection). Professionally speaking, these are the most successful books I've written, in a long, 30+ book career with many notable successes. Intellectually and artistically speaking, I'm incredibly satisfied with the direction my career has moved in over my six Pluralistic years.

Blogging is – and always has been – a lot of work for me, but it's work that pays off, even if I don't always know what form that payoff will take.

One essential part of this blog is my daily retrospective of posts from this day through my blogging history – 25 years ago, 20 years ago, 15 years ago, 10 years ago, 5 years ago, and last year. I used to call this "This day in history" but now I call it "Object permanence," for the developmental milestone when toddlers gain the ability to remember and reason about things that have recently happened (roughly, it's the point at which "peek-a-boo" stops being fun).

The daily business of reviewing and selecting blog posts from different parts of my life started as a trivial exercise, but it's become one of the most important things I do. I liken it to working dough and folding the dry crumbly edges back into the center; in this case, I'm folding all the fragments that are in danger of escaping my working memory back into the center of my attention.

Six years ago, I didn't know what Pluralistic was going to be. Today, I still don't know. But because this is a labor of love, and a solo project, I get to try anything and either give it up or carry it on based on how it makes me feel and what effect it has on my life. I'm always tinkering with the format: this year, I also added a subhead to the Object Permanence section that tries to call out (in as few characters as possible) the most important elements of the day's list.

I also dropped some things this year, notably, my "linkdump" posts. A couple years ago, at the suggestion of Mitch Wagner, I added a new section called "Hey look at this," which featured three bare links to things I thought were noteworthy but didn't have time or inclination to delve into in depth. Later, I expanded this section to five.

However, even with five bare links per edition, I often found myself with a backlog of noteworthy things. So I started writing the occasional Saturday "linkdump" essay in which I wove together the whole backlog into a giant, meandering essay. These made for interesting rhetorical challenges, as I found elegant ways to bridge completely disparate subjects – a kind of collaging, perhaps akin to how a mashup artist mixes two very different tracks together. Mentally, I thought of this as "ringing the changes," but ultimately, I decided to drop these linkdump posts (for now, at least). They ended up being too much work, and of little value to me, because I found myself unable to remember what I wrote in them and thus to call them up to refer to them for future posts. Here's all 33 linkdumps; they're not gone forever (not so long as the links pile up in my backlog), but when they come back, they'll be in a different form:

https://pluralistic.net/tag/linkdump/

This really is a labor of love, in the sense that I love doing it, and because it's hard work. The fact that it's hard work is a feature, not a bug. Working hard on stuff is really important to me, because when I am working hard, I gain respite from both physical and mental discomfort. As a guy with serious chronic pain living through the Trump years, I've got plenty of both kinds of discomfort. I can't overstate how physically and mentally beneficial it is to me to have an activity that takes me out of the moment. This year, I wrote several editions of Pluralistic from an infusion couch at the Kaiser Sunset hematology center in LA, where I was receiving immunotherapy for a cancer diagnosis that I'm assured is very treatable, but which – to be totally honest – sometimes gets my old worrier running hot:

https://pluralistic.net/2024/11/05/carcinoma-angels/#squeaky-nail

Making Pluralistic is several kinds of hard work. Over the past six years, I've become an ardent collagist, spending more and more time on the weird, semi-grotesque images that run atop every edition. Anything you devote substantial time to on a near-daily basis is something that gives you insight – into yourself, and into the thing you're doing. I've always had a certain familiarity with computer image editing (I think I got my start writing Apple ][+ BASIC programs that spat out ASCII art, before graduating to making pixel-art for Broderbund's "Print Shop"), but I've never applied myself to any visual field in a serious way, until now.

Amazingly, after 50 years of thinking of myself as someone who is "bad at visual art," I find myself identifying as a visual artist. I find myself pondering visual works the same way I think about prose – mentally tearing it apart to unpick how it is done, and thinking about how I could productively steal some new techniques for my own work. I'm also privileged to have some accomplished visual artists in my circle, like my pal Alistair Milne, who generously share technical and aesthetic tips. It's got to the point where I published a book of my art, and I think I'll probably do it again next year:

https://pluralistic.net/2025/09/04/illustrious/#chairman-bruce

There's also a ton of technical work that goes into publishing each edition of this newsletter. Things have moved on somewhat since I published an in-depth process-post in 2021, though I'm still totally reliant on Loren Kohnfelder's python scripts that help me turn the XML file I compose every day into files that are (nearly) ready to publish:

https://pluralistic.net/2021/01/13/two-decades/#hfbd

Much of the technical work is down to the fact that I'm still completely wed to the idea of "POSSE" (Post Own Site, Syndicate Everywhere):

https://pluralistic.net/2022/02/19/now-we-are-two/#two-much-posse

This means that after I write the day's post, I reformat it and republish it as a text-only newsletter, a Medium post, a Tumblr post, a Twitter thread and a Mastodon thread. This involves a ton of manual work, because none of the services I post to are designed to facilitate this, so I'm always wrestling with them. This year, all of them got worse (incredibly).

Medium – where I used to have a paid column – has dropped its free-flag for my account, which now limits how many posts I can schedule. This doesn’t come up often, but when I do schedule a post, it’s generally because I’m going to be on a plane or a stage and won’t be able to do it manually. There’s no way I’m going to pay for this feature: I’m happy to give Medium my work gratis, but I will not and do not pay anyone to publish my work, and I never will.

Tumblr did something to its post-composing text editor that completely broke it and I've given up on fixing it. I can't even type into a new post field! I have to paste in some styled text, then delete it, then start typing. It's ghastly. So now I just have a text file full of formatted HTML snippets and I work exclusively in the Tumblr HTML editor, pasting in blobs of preformatted HTML (including the florid, verbose HTML Tumblr uses for its own formatting) and then laboriously flip back and forth to the "visual" editor to see the parts that went wrong. Here's how busted that visual editor is: searching for a word then double-clicking on it does not select it. You have to click once, wait about 1.5 seconds, click again, wait again, and then you can select the word.

Twitter has entered a period of terminal technical decline. I know, I know, we always talk about how fucked Twitter's content moderation is, for obvious and good reasons, but from a technical perspective, Twitter just sucks. If I make a post with an image and alt text in anticipation of later using it to start a thread, it often goes "stale" and will not publish until I delete the image and re-attach it and re-paste the alt text. Meanwhile, the thread editor is also decaying into uselessness. Fill in a 25-post thread and hit publish and, the majority of times, the thread publication will die midway through, displaying lots of weird failure modes (phantom empty posts at the end of the thread that need to be individually selected and deleted are a common one, but not the only one). The old Twitter's ability to add a new thread to an existing one has been dead for at least a year, so every post after the 25th stanza has to be manually tacked on to the previous one, which is made far harder by the fact that Twitter no longer reliably shows you the post you just made after it publishes.

Mastodon still lacks a decent thread editor, one that has even the minimal functionality of Twitter circa 2020. Meanwhile, the Fediverse HOA continues to surface from time to time, with someone who's had a Masto account for ten seconds scolding me for posting threads – from my account whose bio starts "I post long threads." It's genuinely tedious to be shouted at for "using Mastodon wrong" by someone who started using Mastodon yesterday (I opened my first Mastodon account in 2018!), and even worse when they double down after I point them to the essay I've written to explain why I post the way I do, and what to do if you want to read my work somewhere that's not your Mastodon timeline ("Can you believe this asshole wrote a whole essay to explain why he posts his stupid Mastodon threads?"):

https://pluralistic.net/2023/04/16/how-to-make-the-least-worst-mastodon-threads/

Then there's email: I continue to love email, but email doesn't love me back. After years of being blackholed by AT&T and then Google, this turns out to be the year that Microsoft bounces thousands of messages to its Hotmail and Outlook users because they have arbitrarily and without warning added my mail-server to a blacklist. Thank you to the Fediverse friends who escalated my trouble ticket – but man, this is a headache I could certainly do without:

https://pluralistic.net/2021/10/10/dead-letters/

My sysadmin, the incomparable and tireless Ken Snider, tells me that he's got the long-overdue new hardware installed at the colo and he's nearly ready to stand up my long-anticipated personal Mastodon server, which will let me solve all kinds of problems. He's also going to stand up my own Bluesky server, at which point I will part ways with Twitter. I wish I could have used the regular Bluesky service while I waited, but just setting up an account permanently binds you to totally unacceptable and dangerous terms of service:

https://pluralistic.net/2025/08/15/dogs-breakfast/#by-clicking-this-you-agree-on-behalf-of-your-employer-to-release-me-from-all-obligations-and-waivers-arising-from-any-and-all-NON-NEGOTIATED-agreements

What's the point of a service that has account- and data-portability if signing up for it makes you permanently surrender your rights, even if you switch servers? This might be the stupidest social media unforced error of the post-zuckermuskian era.

There is one technology that has made my POSSE life better, and it might surprise you. This year, I installed Ollama – an open-source tool for running LLMs locally – on my laptop. It runs pretty well, even without a GPU. Every day, before I run Loren's python publication scripts, I run the text through Ollama as a typo-catcher (my prompt is "find typos"). Ollama always spots three or four of these, usually stuff like missing punctuation, or forgotten words, or double words ("the the next thing") or typos that are still valid words ("of top of everything else").
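A minimal sketch of that pass, for the curious. This is not my actual script: the model name is an assumption (use whatever you've pulled locally), and the `generate` hook is injectable so the routine can be exercised without a server running. Ollama's non-streaming /api/generate endpoint really does take a JSON body and return a "response" field:

```python
import json
import urllib.request

def find_typos(text, generate):
    """Run a day's post through an LLM with a bare 'find typos' prompt.
    `generate` is any callable mapping prompt -> reply, so a local
    Ollama call (or a stub, for testing) can be plugged in."""
    return generate("find typos\n\n" + text)

def ollama_generate(prompt, model="llama3", host="http://localhost:11434"):
    # Ollama's non-streaming generate endpoint; the model name here is
    # an assumption -- substitute whatever model is installed locally.
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()
    req = urllib.request.Request(host + "/api/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Typical use, before running the publication scripts:
# print(find_typos(open("todays-post.txt").read(), ollama_generate))
```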

The reason this is so valuable to me is that errors magnify through each stage of POSSE. Errors that make it through the python publication script take 10x as long to fix as they would if I caught them beforehand. Errors that I catch after running the scripts and publishing the posts take 10x more time again. Errors that I have to fix later on – once I've closed all the relevant tabs and editors – take yet another 10x. Some POSSE channels (email, Twitter) can't be fixed at all.

So catching these typos at the start of the process is a huge time-saver. I have some very generous readers who have the proofreader's gene and are very helpful in catching my typos (hi, Gregory and 9o6!), and I feel bad about depriving them of their fun, but there's still the odd error that slips through, and they always catch it.

Ollama is a pretty good typo-catcher. Probably half of the "errors" it points out are false positives, which is better than the false positive rate for Google Docs' grammar-checker. As someone who uses a lot of jargon, made-up words, etc. in his prose, I'm used to overriding my text-editor. I wouldn't simply trust an LLM's edits any more than I would accept every suggestion from a spell-checker. Hell, yesterday I sent back a professionally copyedited manuscript (the intro for the paperback of Enshittification) and marked "STET" on about a third of the queries.

Doubtless some of you are affronted by my modest use of an LLM. You think that LLMs are "fruits of the poisoned tree" and must be eschewed because they are saturated with the sin of their origins. I think this is a very bad take, the kind of rathole that purity culture always ends up in.

Let's start with some context. If you don't want to use technology that was created under immoral circumstances or that sprang from an immoral mind, then you are totally fucked. I mean, all the way down to the silicon chips in your device, which can never be fully disentangled from the odious, paranoid racist William Shockley, who won the Nobel Prize for co-inventing the silicon transistor:

https://pluralistic.net/2021/10/24/the-traitorous-eight-and-the-battle-of-germanium-valley/

Further, we wouldn't have the packet-switched network that delivered these words to you without the contributions of the literal war-criminals at the RAND corporation:

https://en.wikipedia.org/wiki/ARPANET

Refusing to use a technology because the people who developed it were indefensible creeps is a self-owning dead-end. You know what's better than refusing to use a technology because you hate its creators? Seizing that technology and making it your own. Don't like the fact that a convicted monopolist has a death-grip on networking? Steal its protocol, release a free software version of it, and leave it in your dust:

https://www.eff.org/deeplinks/2019/07/samba-versus-smb-adversarial-interoperability-judo-network-effects

That's how we make good tech: not by insisting that all its inputs be free from sin, but by purging that wickedness by liberating the technology from its monstrous forebears and making free and open versions of it:

https://pluralistic.net/2025/01/14/contesting-popularity/#everybody-samba

Purity culture is such an obvious trap, an artifact of the neoliberal ideology that insists that the solution to all our problems is to shop very carefully, thus reducing all politics to personal consumption choices:

https://pluralistic.net/2025/07/31/unsatisfying-answers/#systemic-problems

I mean, it was extraordinarily stupid for the Nazis to refuse Einstein's work because it was "Jewish science," but not merely because antisemitism is stupid. It was also a major self-limiting move because Einstein was right:

https://www.scientificamerican.com/article/how-2-pro-nazi-nobelists-attacked-einstein-s-jewish-science-excerpt1/

Refusing to run an LLM on your laptop because you don't like Sam Altman is as foolish as refusing to get monoclonal antibodies because James Watson was a racist nutjob:

https://www.statnews.com/2025/11/07/james-watson-remembrance-from-dna-pioneer-to-pariah/

Or refusing to communicate via satellite because satellites were launched into space on descendants of a rocket designed by the Nazi Wernher von Braun and built by slaves in a death camp:

https://wsmrmuseum.com/2020/07/27/von-braun-the-v-2-and-slave-labor/4/

The AI bubble sucks. AI itself is a normal technology:

https://knightcolumbia.org/content/ai-as-normal-technology

It's not "unethical" to scrape the web in order to create and analyze data-sets. That's just "a search engine":

https://pluralistic.net/2023/09/17/how-to-think-about-scraping/

There are plenty of useful things people can do with AI. There are plenty of useful things people will do with AI. AI is bad because it's an economic bubble and a grift, but not because we've created a bunch of utilities that would – under normal circumstances – be called "plug-ins":

https://pluralistic.net/2025/12/05/pop-that-bubble/#u-washington

I started blogging 25 years ago, just before the dotcom bubble popped. That bubble-pop inflicted a lot of pain on people who didn't deserve it, including the normie investors who'd been suckered into blowing their life's savings on dogshit stocks, and everyday workers who found themselves out of a job. But the world was better off. So was the web. With the bubble popped, real, good stuff could access talent, servers and office space.

In the six years I've been doing this, I've seen several bubbles come and go: crypto, web3, metaverse. Now it's AI. But those bubbles were like Enron, frauds that left nothing good behind. AI is like the dotcom bubble, awash in sin and inflicting untold misery, but it will leave something useful behind:

https://pluralistic.net/2023/12/19/bubblenomics/#pop

And when it does, I'll make sense of it on this blog.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago HOWTO resist warrantless searches at Best Buy https://www.die.net/musings/bestbuy/

#20yrsago RIAA using kids’ private info to attack their mother https://web.archive.org/web/20060223111437/http://p2pnet.net/story/7942

#20yrsago Sony BMG demotes CEO for deploying DRM https://web.archive.org/web/20060219233817/http://biz.yahoo.com/ap/060210/germany_sony_bmg_ceo.html?.v=7

#20yrsago Sistine Chapel recreated through 10-year cross-stitch project https://web.archive.org/web/20060214195146/http://www.austinstitchers.org/Show06/images/sistine2.jpg

#20yrsago J Edgar Hoover loved Lucy https://web.archive.org/web/20060425120915/http://www.lucylibrary.com/pages/lucy-news-fbi.letter.html

#20yrsago Bad Samaritan family won’t return found expensive camera https://web.archive.org/web/20060222200300/https://lostcamera.blogspot.com/2006/02/camera-unlost-but-not-quite-found.html

#15yrsago What does Libyan revolution mean for bit.ly? https://domainnamewire.com/2011/02/18/is-bit-ly-toast-if-libya-shuts-down-the-internet/

#15yrsago Optical illusion inventor goes on to invent copyright threats against 3D printing company https://web.archive.org/web/20110221185839/https://blog.thingiverse.com/2011/02/18/copyright-and-intellectual-property-policy/#respond

#15yrsago Crappy themepark operators convicted of “engaging in a commercial practice which was a misleading action” https://www.theguardian.com/uk/2011/feb/18/lapland-theme-park-brothers-convicted

#15yrsago HBGary’s high-volume astroturfing technology and the Feds who requested it https://www.dailykos.com/story/2011/02/16/945768/-UPDATED:-The-HB-Gary-Email-That-Should-Concern-Us-All

#15yrsago Authors Guild argues in favor of censorship (also: they don’t know shit about Shakespeare) https://volokh.com/2011/02/17/there-should-be-a-name-for-this-one-too/

#10yrsago Hollywood hospital ransoms itself back from hackers for a mere $17,000 https://web.archive.org/web/20160227094254/https://www.latimes.com/business/technology/la-me-ln-hollywood-hospital-bitcoin-20160217-story.html

#10yrsago Chinese millionaire sues himself through an offshore shell company to beat currency export controls https://web.archive.org/web/20180526235055/https://blogs.wsj.com/chinarealtime/2016/02/16/china-capital-flight-2-0-lose-a-lawsuit-on-purpose/?guid=BL-CJB-28691&dsk=y

#15yrsago Selling cookies like a crack dealer, by dangling a string out your kitchen window https://laughingsquid.com/cookies-sold-by-string-dangling-from-san-francisco-apartment-window/

#15yrsago Midwestern Tahrir: Workers refuse to leave Wisconsin capital over Tea Party labor law https://www.theawl.com/2011/02/wisconsin-demonstrates-against-scott-walkers-war-on-unions/

#10yrsago Back-room revisions to TPP sneakily criminalize fansubbing & other copyright grey zones https://www.eff.org/deeplinks/2016/02/sneaky-change-tpp-drastically-extends-criminal-penalties

#10yrsago Russian Central Bank shutting down banks that staged fake cyberattacks to rip off depositors https://web.archive.org/web/20160220100817/http://www.scmagazine.com/russian-bank-licences-revoked-for-using-hackers-to-withdraw-funds/article/474477/

#10yrsago Stop paying your student loans and debt collectors can send US Marshals to arrest you https://web.archive.org/web/20201026202024/https://nymag.com/intelligencer/2016/02/us-marshals-forcibly-collecting-student-debt.html?mid=twitter-share-di

#5yrsago Reverse centaurs and the failure of AI https://pluralistic.net/2021/02/17/reverse-centaur/#reverse-centaur

#5yrsago Strength in numbers https://pluralistic.net/2021/02/18/ink-stained-wretches/#countless

#1yrago America and "national capitalism" https://pluralistic.net/2025/02/18/pikettys-productivity/#reaganomics-revenge

#1yrago Business school professors trained an AI to judge workers' personalities based on their faces https://pluralistic.net/2025/02/17/caliper-ai/#racism-machine


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026

  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1013 words today, 31953 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING.


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

13:49

Link [Scripting News]

They all say podcasting's open period is over and one or another huge billionaire-owned platform is the new owner of podcasting. This time it's YouTube. How many times has this happened? Many. But not enough for journalism to respect the power of the people. So here we go again.

Link [Scripting News]

Paul Brainerd, the founder of Aldus, publisher of Pagemaker, died. At least that's what I'm seeing on various social networks. No mention of his passing in the News tab on Google, or on Wikipedia. Pagemaker was a milestone product, it was the first popular desktop publishing app on the Mac, the first to really make use of the graphic OS and laser printing. We worked with Aldus on scripting via Frontier. The ability to automate Pagemaker and then Quark XPress (its main competitor) was very important in the prepress market. That I once said no one wants that (referring to Pagemaker) just shows how little I know. There are good reasons to believe that one product saved the Mac and Apple.

13:07

Link [Scripting News]

I wrote a this.how doc a few years ago about some of the lessons I've learned doing work on web standards.

Link [Scripting News]

I would like to have an OPML subscription list containing the feeds of all RSS-based products. So when they update everyone can see what they did. I'd also like to encourage people to post screen shots so we can get an idea of what the product does before installing it. Maybe it's for a platform we don't use? Let's have a new practice where we all know what everyone is doing.

Link [Scripting News]

Just noted that Brent mentioned FeedLand (my own product) that does things differently. Thank you. I don't read most of the pieces that come in via RSS. I scroll through the updates, and if something catches my eye, I stop, read the first part, and then if my interest continues, I read the rest. That's the way I've always read news, going back to the kitchen table at my childhood home where we subscribed to the NY Times, print edition (this was long before the web) and we all sat around the table in the morning reading it and telling each other what we found. News isn't like email. But FeedLand does have a mailbox reader, patterned after Brent's NetNewsWire (only steal from the best). There are times when that's what you want. And mostly I wanted to thank Brent for the mention. BTW, that's not the only new idea in FeedLand. Let's get to know each others' products. That's one of the mistakes we made last time -- thinking each of our products was a self-contained universe. We are part of a community that grew from the web. So by definition we are all just part of a very big world. All our products work together, and to preserve that we as people must all work together too.

12:35

Malicious AI [Schneier on Security]

Interesting:

Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

Part 2 of the story. And a Wall Street Journal article.

12:28

CodeSOD: Terned Backwards [The Daily WTF]

Antonio has an acquaintance who has been seeking career advancement by proactively hunting down and fixing bugs. For example, in one project they were working on, there was a bug where it would incorrectly use MiB for storage sizes instead of MB, and vice-versa.

We can set aside conspiracy theories about HDD and RAM manufacturers lying to us about sizes by using MiB in marketing. It isn't relevant, and besides, it's not like anyone can afford RAM anymore, with crazy datacenter buildouts. Regardless, which size to use, the base 1024 or base 1000, was configurable by the user, so obviously there was a bug handling that flag. Said acquaintance dug through, and found this:

const baseValue = useSI ? 1000 : 1024;

I know I have a "reputation" when it comes to hating ternaries, but this is a perfectly fine block of code. It is also correct: if you're using SI notation, you should do base 1000.

Now, given that this code is correct, you or I might say, "Well, I guess that isn't the bug, it must be somewhere else." Not this intrepid developer, who decided that they could fix it.

//            const baseValue = useSI ? 1000 : 1024;
            baseValue = 1024
            if (useSI === false)
            {
                baseValue = 1000;
            }
            if (useSI === true)
            {
                baseValue = 1024;
            }

It's rather amazing to see a single, correct line, replaced with ten incorrect lines, and I'm counting commenting out the correct line as one of them.

First, this doesn't correctly declare baseValue, which JavaScript is pretty forgiving about, but it also discards constness. Of course, you have to discard constness now that you've gotten rid of the ternary.

Then, our if statement compares a boolean value against a boolean literal, instead of simply if(!useSI). We don't use an else, despite an else being absolutely correct. Or actually, since we defaulted baseValue, we don't even need an else!

But of course, all of that is just glitter on a child's hand-made holiday card. The glue holding it all together is that this code just flips the logic. If we're not using SI, we set baseValue to 1000, and if we are using SI, we set it to 1024. This is wrong. This is the opposite of what the code says we should do, what words mean, and how units work.
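For contrast, here is a minimal sketch of what a correct version looks like (the function name and wrapper are my own for illustration; the original code was a bare one-liner): keep the ternary, which preserves both the correct unit mapping and constness.

```javascript
// Hypothetical wrapper around the original, correct one-liner.
// SI units (MB, GB) are base 1000; IEC units (MiB, GiB) are base 1024.
function getBaseValue(useSI) {
  const baseValue = useSI ? 1000 : 1024;
  return baseValue;
}

console.log(getBaseValue(true));  // → 1000 (SI)
console.log(getBaseValue(false)); // → 1024 (IEC)
```

The "fix" in the article inverts exactly this mapping, returning 1000 when SI is off and 1024 when it is on.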


12:21

Packaging Expertise: How Claude Skills Turn Judgment into Artifacts [Radar]

Think about what happens when you onboard a new employee.

First, you provision them with tools. Email access. Slack. CRM. Office software. Project management software. Development environment. You're connecting a person to the systems they'll need to do their job. However, this is necessary but not sufficient. Nobody becomes effective just because they can log into Salesforce.

Then comes the harder part: teaching them how your organization actually works. The analysis methodology your team developed over years of iteration. The quality bar that is not written down anywhere. The implicit ways of working. The judgment calls about when to escalate and when to handle something independently. The institutional knowledge that separates a new hire from someone who’s been there for years.

This second part—the expertise transfer—is where organizations struggle. It’s expensive and inconsistent, and does not scale. It lives in mentorship relationships, institutional knowledge, and documentation that goes stale the moment it’s written.

Claude Skills and MCP (Model Context Protocol) follow exactly this pattern. MCP gives AI agents such as Claude the tools: access to systems, databases, APIs, and resources. Skills are the training materials that teach Claude how to work and how to use these tools.

This distinction matters more than it might first appear. While we have gotten reasonably good at provisioning tools, we have never had a good way to package expertise. Skills change that. They package expertise into a standardized format.

Tools Versus Training

MCP is tool provisioning. It’s the protocol that connects AI agents to external systems: data warehouse, CRM, GitHub repositories, internal APIs, and knowledge bases. Anthropic describes it as “USB-C for AI”—a standardized interface that lets Claude plug into your existing infrastructure. An MCP server might give Claude the ability to query customer records, commit code, send Slack messages, or pull analytics data with authorized permissions.

This is necessary infrastructure. But like giving a new hire database credentials, it does not tell AI agents what to do with that access. MCP answers the question “What tools can an agent use?” It provides capabilities without opinions.

Skills are the training materials. They encode how your organization actually works: which segments matter, what churn signal to watch for, how to structure findings for your quarterly business review, when to flag something for human attention.

Skills answer a different question: “How should an AI agent think about this?” They provide expertise, not just access.

Consider the difference in what you're creating. Building an MCP server is infrastructure work: an engineering effort to connect systems securely and reliably. Creating a Skill is knowledge work: domain experts articulating what they know, in markdown files, for AI agents to understand and operationalize. These require different people, different processes, and different governance.

The real power emerges when you combine them. MCP connects AI agents to your data warehouse. A Skill teaches AI agents your firm’s analysis methodology and which MCP tools to use. Together, AI agents can perform expert-level analysis on live data, following your specific standards. Neither layer alone gets you there, just as a new hire with database access but no training, or training but no access, won’t be effective at their jobs.

MCP is the toolbox. Skills are the training manuals that teach how to use those tools.

Why Expertise Has Been So Hard to Scale

The training side of onboarding has always been the bottleneck.

Your best analyst retires, and their methods walk out of the door. Onboarding takes months because the real tacit knowledge lives in people’s heads, not in any document a new hire can read. Consistency is impossible when “how we do things here” varies by who trained whom and who worked with whom. Even when you invest heavily in training programs, they produce point-in-time snapshots of expertise that immediately begin to rot.

Previous approaches have all fallen short:

Documentation is passive and quickly outdated. It requires human interpretation, offers no guarantee of correct application, and can’t adapt to novel situations. The wiki page about customer analysis does not help when you encounter an edge case the author never anticipated.

Training programs are expensive, and a certificate of completion says nothing about actual competency.

Checklists and SOPs capture procedure but not judgment. They tell you what to check, not how to think about what you find. They work for mechanical tasks but fail for anything requiring expertise.

We've had Custom GPTs, Claude Projects, and Gemini Gems attempting to address this. They are useful but opaque. You cannot invoke them based on context; an AI agent working as a Copy Editing Gem stays in copy editing and can't switch to a Laundry Buddy Custom GPT mid-task. They are not transferable and cannot be packaged for distribution.

Skills offer something new: expertise packaged as a versionable, governable artifact.

Skills are files in folders—a SKILL.md document with supporting assets, scripts, and resources. They leverage all the tooling we have built for managing code. Track changes in Git. Roll back mistakes. Maintain audit trails. Review Skills before deployment through PR workflows with version control. Deploy organization-wide and ensure consistency. AI agents can compose Skills for complex workflows, building sophisticated capabilities from simple building blocks.
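To make that concrete, here is a sketch of what such an artifact might contain. The layout (a SKILL.md with YAML frontmatter plus supporting files) follows the pattern just described, but the skill name, steps, and thresholds below are hypothetical:

```markdown
---
name: quarterly-churn-analysis
description: Apply our churn-analysis methodology. Use when asked about retention, churn, or quarterly business review prep.
---

# Quarterly Churn Analysis

1. Pull the latest cohort data through the warehouse MCP tool.
2. Segment by plan tier; flag any cohort whose month-over-month churn exceeds our threshold (hypothetically, 5%).
3. Structure findings using templates/qbr_summary.md.
4. Escalate to a human reviewer if the estimated revenue impact crosses the reporting threshold.
```

Because it is just files, an artifact like this can live in Git, go through PR review, and be deployed organization-wide, exactly the lifecycle described above.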

The architecture also enables progressive disclosure. AI agents see only lightweight metadata until a Skill becomes relevant, then load the full instructions on demand. You can have dozens of Skills available without overwhelming the model's precious context window, which is like a human's short-term memory or a computer's RAM. Claude loads expertise as needed and coordinates multiple Skills automatically.

This makes the enterprise deployment model tractable. An expert creates a Skill based on best practices, with the help of an AI/ML engineer to audit and evaluate the effectiveness of the Skill. Administrators review and approve it through governance processes. The organization deploys it everywhere simultaneously. Updates propagate instantly from a central source.

One report cites Rakuten achieving 87.5% faster completion of a finance workflow after implementing Skills. Not from AI magic but from finally being able to distribute their analysts’ methodologies across the entire team. That’s the expertise transfer problem, solved.

Training Materials You Can Meter

The onboarding analogy also created a new business model.

When expertise lives in people, you can only monetize it through labor—billable hours, consulting engagements, training programs, maintenance contracts. The expert has to show up, which limits scale and creates key-person dependencies.

Skills separate expertise from the expert. Package your methodology as a Skill. Distribute it via API. Charge based on usage.

A consulting firm’s analysis framework can become a product. A domain expert’s judgment becomes a service. The Skill encodes the expertise; the API calls become the meter. This is service as software, the SaaS of expertise. And it’s only possible because Skills put knowledge in a form that can be distributed, versioned, and billed against.

The architecture is familiar. The Skill is like an application frontend (the expertise, the methodology, the "how"), while MCP connections or API calls form the backend (data access, actions, the "what"). You build training materials once and deploy them everywhere, metering usage through the infrastructure layer.

No more selling API endpoints with 500 pages of obscure documentation explaining what each endpoint does, then staffing a team to support it. Now we can package the expertise of how to use those APIs directly into Skills. Customers can realize the value of an API via their AI agents. Cost and time to implement drop to zero with MCP. Time to value becomes immediate with Skills.

The Visibility Trade-Off

Every abstraction has a cost. Skills trade visibility for scalability, and that trade-off deserves honest examination.

When expertise transfers from human to human, through mentorship, working sessions, and apprenticeship, the expert sees how their knowledge gets applied and becomes better in the process. They watch the learner struggle with edge cases. They notice which concepts don't land. They observe how their methods get adapted to new situations. This feedback loop improves the expertise over time.

Skills break that loop. As a Skill builder, you do not see the conversations that trigger your Skill. You do not know how users adapted your methodology or which part of your guidance AI agents weighted most heavily. Users interact with their own AI agents; your Skill is one influence among many.

Your visibility is limited to the infrastructure layer: API calls, MCP tool invocations, and whatever outputs you explicitly capture. You see usage patterns, not the dialogue that surrounds them. Those dialogues reside with the user’s AI agents.

This parallels what happened when companies moved from in-person training to self-service documentation and e-learning. You lost the ability to watch every learner, but you gained the ability to train at scale. Skills make the same exchange: less visibility per user interaction, vastly more interactions possible.

Managing the trade-off requires intentional design. Build logging and tracing into your Skills where appropriate. Create feedback mechanisms inside skills for AI agents to surface when users express confusion or request changes. And in the development process, focus on outcomes—Did the Skill produce good results?—rather than process observation.

In production, the developer of Skills or MCPs will not have most of the context of how a user’s AI agent uses their Skills.

What to Watch

For organizations going through AI transformations, the starting point is an audit of expertise. What knowledge lives only in a specific person’s head? Where does inconsistency emerge because “how we do things” isn’t written down in an operationalizable form? These are your candidates for Skills.

Start with bounded workflows: a report format, an analysis methodology, a review checklist. Prove the pattern before encoding more complex expertise. Govern early. Skills are artifacts that require review, evaluation, and lifecycle management. Establish those processes before Skills proliferate.

For builders, the mental shift is from “prompt” to “product.” Skills are versioned artifacts with users. Design accordingly. Combine Skills with MCP for maximum leverage. Accept the visibility trade-off as the cost of scale.

Several signals suggest where this is heading. Skill marketplaces are emerging. Agent Skills are now a published open standard being adopted by multiple AI agents and, soon, agent SDKs. Whether enterprise governance tooling delivers the version control, approval workflows, and audit trails organizations need will determine adoption in regulated industries.

Expertise Can Finally Be Packaged

We’ve gotten good at provisioning tools as APIs. MCP extends that to AI with standardized connections to systems and data.

But tool access was never the bottleneck. Expertise transfer was. The methodology. The judgment. The caveats. The workflows. The institutional knowledge that separates a new hire from a veteran.

Skills are the first serious attempt to package expertise into a file format that AI agents can operationalize while humans can still read, review, and govern it. They are training materials that actually scale.

The organizations that figure out how to package their expertise, both for internal and external consumption, will have a structural advantage. Not because AI replaces expertise. Because AI amplifies the expertise of those who know how to share it.

MCP gives AI agents the tools. Skills teach AI agents how to work. The question is whether you can encode what your best people know. Skills are the first real answer.


What Developers Actually Need to Know Right Now [Radar]

The following article includes clips from a recent Live with Tim O’Reilly interview. You can watch the full version on the O’Reilly Media learning platform.

Addy Osmani is one of my favorite people to talk with about the state of software engineering with AI. He spent 14 years leading Chrome’s developer experience team at Google, and recently moved to Google Cloud AI to focus on Gemini and agent development. He’s also the author of numerous books for O’Reilly, including The Effective Software Engineer (due out in March), and my cohost for O’Reilly’s AI Codecon. Every time I talk with him I come away feeling like I have a better grip on what’s real and what’s noise. Our recent conversation on Live with Tim O’Reilly was no exception.

Here are some of the things we talked about.

The hard problem is coordination, not generation

Addy pointed out that there’s a spectrum in how people are working with AI agents right now. On one end you have solo founders running hundreds or thousands of agents, sometimes without even reviewing the code. On the other end you have enterprise teams with quality gates, reliability requirements, and long-term maintenance to think about.

Addy’s take is that for most businesses, “the real frontier is not necessarily having hundreds of agents for a task just for its own sake. It’s about orchestrating a modest set of agents that solve real problems while maintaining control and traceability.” He pointed out that frameworks like Google’s Agent Development Kit now support both deterministic workflow agents and dynamic LLM agents in the same system, so you get to choose when you need predictability and when you need flexibility.

The ecosystem is developing fast. A2A (the agent-to-agent protocol Google contributed to the Linux Foundation) handles agent-to-agent communication while MCP handles agent-to-tool calls. Together they start to look like the TCP/IP of the agent era. But Addy was clear-eyed about where things stand: “Almost nobody’s figured out how to make everything work together as smoothly as possible. We’re getting as close to that as we can. And that’s the actual hard problem here. Not generation, but coordination.”

The “Something Big Is Happening” debate

In response to one of the audience questions, we spent some time on Matt Shumer's viral essay arguing that the current moment in AI is like the moment just before the COVID-19 pandemic hit. Those in the know were sounding the alarm, but most people weren't hearing it.

Addy’s take was that “it felt a little bit like somebody who hadn’t been following along, just finally getting around to trying out the latest models and tools and having an epiphany moment.” He thinks the piece lacked grounding in data and didn’t do a great job distinguishing between what AI can do for prototypes and what it can do in production. As Addy put it, “Yes, the models are getting better, the harnesses are getting better, the tools are getting better. I can do more with AI these days than I could a year ago. All of that is true. But to say that all kinds of technical work can now be done with near perfection, I wouldn’t personally agree with that statement.”

I agree with Addy, but I also know how it feels when you see the future crashing in and no one is paying attention. At O’Reilly, we started working with the web when there were only 200 websites. In 1993, we built GNN, the first web portal, and the web’s first advertising. In 1994, we did the first large-scale market research on the potential of advertising as the web’s future business model. We went around lobbying phone companies to adopt the web and (a few years later) for bookstores to pay attention to the rise of Amazon, and nobody listened. I’m a big believer in “something is happening” moments. But I’m also very aware that it always takes longer than it appears.

Both things can be true. The direction and magnitude of this shift are real. The models keep getting better. The harnesses keep getting better. But we still have to figure out new kinds of businesses and new kinds of workflows. AI won’t be a tsunami that wipes everything away overnight.

Addy and I will be cohosting the O’Reilly AI Codecon: Software Craftsmanship in the Age of AI on March 26, where we’ll go much deeper on orchestration, agent coordination, and the new skills developers need. We’d love to see you there. Sign up for free here.

And if you’re interested in presenting at AI Codecon, our CFP is open through this Friday, February 20. Check out what we’re looking for and submit your proposal here.

Feeling productive vs. being productive

There was a great line from a post by Will Manidis called “Tool Shaped Objects” that I shared during our conversation: “The market for feeling productive is orders of magnitude larger than the market for being productive.” The essay is about things that feel amazing to build and use but aren’t necessarily doing the work that needs to be done.

Addy picked up on this immediately. “There is a difference between feeling busy and being productive,” he said. “You can have 100 agents working in the background and feel like you’re being productive. And then someone asks, What did you get built? How much money is it making you?”

This isn’t to dismiss anyone who’s genuinely productive running lots of agents. Some people are. But a healthy skepticism about your own productivity is worth maintaining, especially when the tools make it so easy to feel like you’re moving fast.

Planning is the new coding

Addy talked about how the balance of his time on a task has shifted significantly. “I might spend 30, 40% of the time a task takes just to actually write out what exactly is it that I want,” he said. What are the constraints? What are the success criteria? What’s the architecture? What libraries and UI components should be used?

All of that work to get clarity before you start code generation leads to much-higher-quality outcomes from AI. As Addy put it, “LLMs are very good at grounding things in the lowest common denominator. If there are patterns in the training data that are popular, they’re going to use those unless you tell them otherwise.” If your team has established best practices, codify them in Markdown files or MCP tools so the agent can use them.
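As a concrete sketch of what codifying team conventions might look like, a team could check a short conventions file into the repo for agents to read before generating code. Everything in this example is hypothetical (the file name, the helper `fetchJson`, and the `src/components/ui` path are illustrative, not from the conversation):

```markdown
# Project conventions (read before generating code)

- Use the existing `fetchJson` wrapper; do not add new HTTP clients.
- All new UI must use components from `src/components/ui`.
- Prefer small, well-tested modules over large multi-purpose files.
- Success criteria: unit tests pass and no new lint warnings.
```

The point is simply to move your team's "lowest common denominator" upward: the agent grounds its output in whatever patterns it can see, so making your patterns explicit and discoverable changes what it generates.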

I connected the planning phase to something larger about taste. Think about Steve Jobs. He wasn’t a coder. He was a master of knowing what good looked like and driving those who worked with him to achieve it. In this new world, that skill matters enormously. You’re going to be like Jobs telling his engineers “no, no, not that” and giving them a vision of what’s beautiful and powerful. Except now some of those engineers are agents. So management skill, communication skill, and taste are becoming core technical competencies.

Code review is getting harder

One thing Addy flagged that doesn’t get enough attention: “Increasingly teams feel like they’re being thrashed with all of these PRs that are AI generated. People don’t necessarily understand everything that’s in there. And you have to balance increased velocity expectations with ‘What is a quality bar?’ because someone’s going to have to maintain this.”

Knowing your quality bar matters. What are the cases where you’re comfortable merging an AI-generated change? Maybe it’s small and well-compartmentalized and has solid test coverage. And what are the cases where you absolutely need deep human review? Getting clear on that distinction is one of the most practical things a team can do right now.

Yes, young people should still go into software

We got a question about whether students should still pursue software engineering. Addy’s answer was emphatic: “There has never been a better time to get into software engineering if you are someone that is comfortable with learning. You do not necessarily have the burden of decades of knowing how things have historically been built. You can approach this with a very fresh set of eyes.” New entrants can go agent first. They can get deep into orchestration patterns and model trade-offs without having to unlearn old habits. And that’s a real advantage when interviewing at companies that need people who already know how to work this way.

The more important point is that in the early days of a new technology, people basically try to make the old things over again. The really big opportunities come when we figure out what was previously impossible that we can now do. If AI is as powerful as it appears to be, the opportunity isn’t to make companies more efficient at the same old work. It’s to solve entirely new problems and build entirely new kinds of products.

I’m 71 years old and 45 years into this industry, and this is the most excited I’ve ever been. More than the early web, more than open source. The future is being reinvented, and the people who start using these tools now get to be part of inventing it.

The token cost question

Addy had a funny and honest admission: “There were weeks when I would look at my bill for how much I was using in tokens and just be shocked. I don’t know that the productivity gains were actually worthwhile.”

His advice: experiment. Get a sense of what your typical tasks cost with multiple agents. Extrapolate. Ask yourself whether you’d still find it valuable at that price. Some people spend hundreds or even thousands a month on tokens and feel it’s worthwhile because the alternative was hiring a contractor. Others are spending that much and mostly feeling busy. As Addy said, “Don’t feel like you have to be spending a huge amount of money to not miss out on productivity wins.”
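The "experiment, then extrapolate" advice amounts to simple back-of-envelope arithmetic. A minimal sketch, using entirely hypothetical prices and task sizes (no vendor's actual rates):

```python
# Back-of-envelope token-cost extrapolation.
# All numbers below are assumed placeholders; substitute your own.
COST_PER_MTOK_IN = 3.00    # dollars per million input tokens (assumed)
COST_PER_MTOK_OUT = 15.00  # dollars per million output tokens (assumed)

def task_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single agent task at the assumed rates."""
    return (input_tokens / 1e6) * COST_PER_MTOK_IN + \
           (output_tokens / 1e6) * COST_PER_MTOK_OUT

# Measure a typical task of your own; here: 200k tokens in, 20k out.
per_task = task_cost(200_000, 20_000)

# Extrapolate: 15 such tasks a day, 20 working days a month.
monthly = per_task * 15 * 20
print(f"${per_task:.2f} per task, ${monthly:.0f}/month")
```

At those assumed rates this works out to roughly $0.90 per task and $270 a month, which is exactly the kind of number you can then weigh against the alternative (your time, or a contractor's).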

I’d add that we’re in a period where these costs are massively subsidized. The model companies are covering inference costs to get you locked in. Take advantage of that while it lasts. But also recognize that a lot of efficiency work is yet to be done. Just as JavaScript frameworks replaced everyone hand-coding UIs, we’ll get frameworks and tools that make agent workflows much more token-efficient than they are today.

2028 predictions are already here

One of the most striking things Addy shared was that a group he is part of in the AI coding community had put together predictions for what software engineering would look like by 2028. “We recently revisited that list, and I was kind of shocked to discover that almost everything on that list is already possible today,” he said. “But how quickly the rest of the ecosystem adopts these things is on a longer trajectory than what is possible.”

That gap between capability and adoption is where most of the interesting work will happen over the next few years. The technology is running ahead of our ability to absorb it. Figuring out how to close that gap, in your team, your company, and your own practice, is the real job right now.

Agents writing code for agents

Near the end we answered another great audience question: Will agents eventually produce source code that’s optimized for other agents to read, not humans? Addy said yes. There are already platform teams having conversations about whether to build for an agent-first world where human readability becomes a secondary concern.

I have a historical parallel for this. I wrote the manual for the first C compiler on the Mac, and I worked closely with the developer who was hand-tuning the compiler output at the machine code level. That was about 30 years ago. We stopped doing that. And I’m quite confident there will be a similar moment with AI-generated code where humans mostly just let it go and trust the output. There will be special cases where people dive in for absolute performance or correctness. But they’ll be rare.

That transition won’t happen overnight. But the direction seems pretty clear. You can help to invent the future now, or spend time later trying to catch up with those who do.


This conversation was part of my ongoing series of discussions with innovators, Live with Tim O’Reilly. You can explore past episodes on the O’Reilly learning platform.

11:56

Peter Pentchev: Ringlet release: fnmatch-regex 0.3.0 [Planet Debian]

Version 0.3.0 of the fnmatch-regex Rust crate is now available. The major new addition is the glob_to_regex_pattern function, which converts a glob pattern to a regular expression pattern without building a regular expression matcher. Two new features, regex and std, are also added, both enabled by default.
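For readers unfamiliar with the idea, Python's standard library has an analogous function, fnmatch.translate, which likewise produces a regex pattern string from a glob without compiling a matcher. A rough illustration of the concept (in Python, since the crate's exact signature isn't shown here):

```python
import fnmatch
import re

# Translate a glob into a regular-expression pattern string,
# without building a matcher -- the caller decides if and when
# to compile it.
pattern = fnmatch.translate("*.txt")

matcher = re.compile(pattern)
print(bool(matcher.match("notes.txt")))  # True
print(bool(matcher.match("notes.md")))   # False
```

Separating translation from compilation is useful when you want to combine several glob-derived patterns into one larger regex, or hand the pattern string to another tool.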

For more information, see the changelog at the homepage.

11:35

Grrl Power #1436 – Hot tip [Grrl Power]

I think the most revelatory thing about this page is that demons have commercial brownie mix. I know I’ve shown succubi shopping before, but that was a little tongue in cheek. I was thinking the store would really belong to a society with alchemy, but also modern supply chains. Parfait said the biggest difference between Earth and demon society was that demons lack suburbs. That doesn’t mean that Demon Lords’ Keeps can’t have grocery stores. Really, I guess there’s no reason that a star-faring species like demons couldn’t have any modern convenience they want. They could all very well have cell phones and wi-fi hotspots. They could. But they have a lot of magic-based infrastructure, and replacing all of that, or even supplementing it with copper wires, would be a considerable undertaking, and given the feudal nature of demon-held worlds, that sort of stuff is vulnerable to anyone with a shovel or a horde of burrowing demons. That’s not to say that some Demon Lords haven’t experimented with mixed infrastructure, or that newer colonies, especially ones with mixed populations, don’t have a combination of tech going on.

There’s nothing suspicious about a surge of people suddenly placing large bets on “Ixah” after she trashes a bunch of competitors that had higher odds than her going into the first round. Of course, as a total unknown, anyone with a prior association with the U.C.B.A. would have better favorability. Well, maybe unless they really biffed it last time, I guess. I don’t know how odds-makers do their thing, but it seems reasonable that someone who got evaporated in the first 10 seconds the last six times they entered would have a negative chutzpah factor in the odds equation, whereas a total unknown would be chutzpah neutral. Can you tell by my vocabulary I’m an expert on statistics?

That isn’t to say that rumors of Ixah’s identity won’t spread outward from Tom’s inner circle, especially as he has interests on many planets. It’s not going to lead the Xevoarchy straight to Earth, of course. Anyone who does well in the UCBA will have a lot of mentions on Gal-net, including people and groups boosting whatever their favorite theories are. The signal to noise ratio has to reach a certain magnitude before the millions of A.I.s out there tasked to watch such things start to care.


Here is Gaxgy’s painting Maxima promised him. Weird how he draws almost exactly like me.

I did try to do an oil painting version of this, by actually re-painting over the whole thing with brush-strokey brushes, but what I figured out is that most brushy oil paintings are kind of low detail. Sure, a skilled painter like Bob Ross or whoever can dab a brush down a canvas and make a great looking tree or a shed with shingles, but in trying to preserve the detail of my picture (eyelashes, reflections, etc.), I had to keep making the brush smaller and smaller, and the end result, honestly, didn’t really look all that oil-painted. I’ll post that version over at Patreon, just for fun, but I kind of quit on it after getting mostly done with re-painting Max.

Patreon has a no-dragon-bikini version of the picture as well, naturally.


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

10:42

Freelancer empathy [Seth's Blog]

When phone cameras got good enough, portrait photographers scolded people who took their own headshots.

And when the Mac got pretty good at typesetting, professional designers pointed out that people who can’t tell a font from a typeface and don’t care about kerning should avoid it.

Professional translators bring humanity and insight to transforming writing from one language to another, but many people continue to use Google Translate…

Here’s the thing: the translators take their own headshots. Web designers often use translation software. And life coaches build their own websites with Squarespace and put their own selfies on LinkedIn. We all make our own decisions, and most of the time, we use tech to do it ourselves.

This began with the Model T. Before that, people with enough money to buy a car also had a driver.

It’s not easy to find clients, particularly when technology makes it straightforward for many people to do the mechanical part of what you do for them on their own. It’s more convenient, faster and cheaper. It might not be as good by your standards, but if the client wants faster and cheaper, you’re unlikely to win that argument.

When was the last time you hired a studio photographer instead of using a stock photo of a piece of fruit? Or paid for a stock photo instead of using a CC or AI image? You might not cut your own hair (I’m not an expert) but you probably pump your own gas and cook your own meals.

The opportunity isn’t to race to the bottom, or to try to persuade someone that it’s worth upgrading. Instead, we can celebrate the fact that more people are discovering the power of photos, of type, of coaching and of cooking… and we can upgrade what we offer.

The goal is to be the first choice for people who couldn’t imagine doing it themselves, simply because their work is too important or your work is too good for them to ignore.

The best way to upgrade a freelance career is to get better clients. They challenge you, pay you more and talk about you more. And you don’t get better clients by working hard for lousy clients. You get better clients by becoming the kind of freelancer that better clients want to hire.

06:14

The Soporific Slump – DORK TOWER 19.02.26 [Dork Tower]

Most DORK TOWER strips are now available as signed, high-quality prints, from just $25!  CLICK HERE to find out more!

HEY! Want to help keep DORK TOWER going? Then consider joining the DORK TOWER Patreon and ENLIST IN THE ARMY OF DORKNESS TODAY! (We have COOKIES!) (And SWAG!) (And GRATITUDE!)

 

Flagrant Llama Abuse [Penny Arcade]

Sixteen years ago one day, I was walking down the street - you know how it is.

03:21

02:28

The Dumbest Gender [QC RSS]

I'm a boy, I'm allowed to say boys are dumb

01:49

Undo in Vi and its successors [OSnews]

So vi only has one level of undo, which is simply no longer fit for the times we live in now, and also wholly unnecessary given even the least powerful devices that might need to run vi probably have more than enough resources to give at least a few more levels of undo. What I didn’t know, however, is that vi’s limited undo behaviour is actually part of POSIX, and for full compliance, you’re going to need it. As Chris Siebenmann notes, vim and its derivatives ignore this POSIX requirement and implement multiple levels of undo in the obviously correct way.

What about nvi, the default on the BSD variants? I didn’t know this, but it has a convoluted workaround to both maintain POSIX compatibility and offer multiple levels of undo, and it’s definitely something.

Nvi has opted to remain POSIX compliant and operate in the traditional vi way, while still supporting multi-level undo. To get multi-level undo in nvi, you extend the first ‘u’ with ‘.’ commands, so ‘u..’ undoes the most recent three changes. The ‘u’ command can be extended with ‘.’ in either of its modes (undo’ing or redo’ing), so ‘u..u..’ is a no-op. The ‘.’ operation doesn’t appear to take a count in nvi, so there is no way to do multiple undos (or redos) in one action; you have to step through them by hand. I’m not sure how nvi reacts if you want to do things like move your cursor position during an undo or redo sequence (my limited testing suggests that it can perturb the sequence, so that ‘.’ now doesn’t continue undoing or redoing the way vim will continue if you use ‘u’ or Ctrl-r again).

↫ Chris Siebenmann

Siebenmann lists a few other implementations and how they work with undo, and it’s interesting to see how all of them try to solve the problem in slightly different ways.

Loser Behavior, tbh [The Stranger]

Do you need to get something off your chest? Submit an I, Anonymous and we'll illustrate it! by Anonymous

The coffee shop was busy and full, and you were sitting on the couch while your friend was sitting at the armchair. There was room on the couch and another unoccupied armchair next to your friend. Both of you were on your phones. My husband and I asked if we could sit in the empty spaces. You looked at us and then went back to your phone. We sat down for a minute, but then a bench freed up nearby and we moved there.

Five minutes later, your friends showed up and you started talking about us because we just sat down randomly. Dude, if you didn't want us to sit next to you, you could have said you were waiting on people. Saying nothing and then shit-talking about us is sad and lame.

Do you need to get something off your chest? Submit an I, Anonymous and we'll illustrate it! Send your unsigned rant, love letter, confession, or accusation to ianonymous@thestranger.com. Please remember to change the names of the innocent and the guilty.

01:00

Jesse Jackson Helped Me Unlearn the Church [The Stranger]

This week, as we mark Jackson’s passing, the tributes have rightly talked about the “Rainbow Coalition,” about how he stretched the imagination of what a Black candidate could do and be in national politics, about how his run helped clear a path that others later walked. All true. Still, grief has a way of pulling the personal to the surface. For me, the most enduring legacy of that speech isn’t only what it meant for the Democratic Party. It’s what it meant for a seven-year-old Black boy sitting in his parents’ basement, watching the 1988 Democratic National Convention unfold on a flickering television, absorbing the world the way children do, through what adults do and don’t say. by Marcus Harrison Green

Most people file it away as a political artifact: the “Keep Hope Alive” speech, a concession with a crescendo that closed out Jesse Jackson’s 1988 campaign. But I remember it less as a footnote to an election and more as a key, one that cracked open a door inside me before certain kinds of cruelty could fully settle in.


My parents watched Jackson like you watch the sky after you’ve lived through storms: with hope, yes, but also with the disciplined flinch of those who know the weather can quickly change without warning. To them, his rise carried the ache of “almost.” A Black man on that stage, in that era, closer than any other had been to the center of a major party’s power, and still not quite permitted to sit at the head of the table.

I didn’t have the vocabulary for it then, but I felt the emotion: the nation’s talent for inviting us into its home just so long as we don’t rearrange the furniture. “Be grateful,” America says, “but don’t get comfortable.”

That was the late ’80s bargain. The Cosby Show reigned on Thursday nights, offering a version of Black success that was polished, upper-middle-class, and politically quiet, an American bedtime story where bootstraps were always enough and structural racism was mostly implied, never indicted. A ceiling disguised as a dream. The message wasn’t simply “look, we made it.” It was: this is the acceptable shape of Black life, this is what you can be if you stop complaining about what’s been done to you.

And then there was church, the other classroom.

Every Sunday, I sat under theology that didn’t just preach salvation; it sorted humanity. At our megachurch, Christian Faith Center, the world arrived pre-labeled: righteous and sinful, clean and contaminated. And woven through it all, like a thread that tugged at boys especially, was a narrow definition of manhood: tough, dominant, heterosexual, and unquestioning. The pastor would say there is only one choice for relationships: a man with a woman.

As a kid, you don’t call it indoctrination. You call it normal.

So when Jesse Jackson stood at that convention and spoke about gay and lesbian people as people, not as a problem or a cautionary tale, something in me shifted. The moment wasn’t loud in the way politics is loud. It was loud in the way truth is loud.

“Gays and lesbians, when you fight against discrimination and a cure for AIDS, you are right—but your patch is not big enough.”

I didn’t understand everything he meant. But I understood the part that mattered. On national television, in the bright machinery of American politics, without flinching, he placed gay and lesbian people inside the circle of concern and dignity, and he did it in 1988, when the country was still trying to treat AIDS like divine punishment instead of a public emergency.

So much of the “common sense” of that era was soaked in cruelty. Thousands had already died from AIDS, and treatment was nothing like it is now. Ronald Reagan infamously delayed even acknowledging the epidemic in public. And in many churches, the disease was framed as consequence: a karmic pox for sinners, a story about them, safely distant from us.

But Jackson spoke as if the people dying mattered.

He didn’t talk about AIDS like it was happening on another planet. He talked about it like it was happening in America because it was. He talked about hospice, about rejection, about the isolation of being sick and shamed at once. He said those living with AIDS deserved compassion. Not disgust or distance, but compassion.

And in my basement, in my little body, I felt something break: not my faith exactly, but my certainty. The certainty that the adults who sounded sure must be right. The certainty that “righteousness” meant excluding people, and that dehumanization could be holy.

Homophobia often comes handed down as an inheritance, an “us vs. them” story wrapped in scripture, welded to gender roles, reinforced by jokes, threats, and silence. For Black boys raised in church, it can also arrive as armor. A performance of hardness meant to protect you from a world eager to hypersexualize, degrade, or erase you. The cruel logic of patriarchy: to be seen as a man, you must loudly reject anything the world codes as soft.

At seven, I didn’t have that analysis. What I had was a gut sense that something I’d been taught was not as sacred as it claimed. Jackson’s speech did not make me instantly wise. It didn’t turn me into a miniature ally with perfect language. What it did, more practically and miraculously, was interrupt the formation of a prejudice before it could harden into identity.

It kept me asking questions. And bigotry hates questions.

Jackson also modeled something rare, especially for a Black male leader navigating a country that punishes us for stepping out of line: courage that didn’t audition itself for acceptability. It would have been politically easy to ignore LGBTQ people, especially when the constituency was treated as controversial, expendable, or too risky to acknowledge. Plenty of Democrats did. Even later, under President Clinton, the country got “Don’t Ask, Don’t Tell”, a policy that translated cowardice into law and asked queer people to make themselves smaller for the comfort of the state.

Jackson, by contrast, widened the frame.

Now, we should tell the truth about Jackson too, because it is not the enemy of gratitude. He used an antisemitic slur during his 1984 campaign and damaged a Black-Jewish alliance already under strain. Years later, he fathered a child outside his marriage, private harm that became public in a way that rippled beyond him. The man was not spotless. No one is.

But it is possible to hold the whole human and still name the consequential work they did in a particular hour.

I think about that now, older, long gone from the church that raised me, especially knowing what later came to light about Christian Faith Center: allegations of sexual harassment, exploitation, and the familiar architecture of power protecting itself. It’s hard not to look back and see how often institutions preach purity while practicing predation, and demand moral submission from congregants while laundering their own sin in the language of “leadership.”

Jackson’s speech helped me separate faith from fear. It helped me understand that dehumanization can wear a cross, that “righteousness” can be a costume, and that compassion is often the truer proof of belief than any shouted doctrine.

Near the end of that 1988 speech, Jackson offered a line that still guides me: “If an issue is morally right, it will eventually be political.”

At seven, I didn’t know I was being handed a compass. I just knew I’d seen a Black man on a national stage widen the definition of “our people.” And somehow, in the process, he widened me too.

Not into perfection. Into possibility.

So yes, let the headlines remember the campaign, the coalition, the “Keep Hope Alive” refrain. But as we mourn Jesse Jackson’s passing, I want to name this quieter legacy: that in a country, and a church culture, where cruelty could masquerade as conviction, he chose to insist on the full humanity of queer people. He did it clearly, publicly, and without apology.

And because he did, a Black boy learned that love is an ethic, not a weakness. That manhood is not forged in exclusion but in accountability. That faith, at its best, is expansive. And that interrupting inherited hatred begins with a deliberate declaration: they are people, too.

Keep hope alive, yes.

But let it be hope wide enough to hold all of us without exceptions or asterisks.

00:35

[$] LWN.net Weekly Edition for February 19, 2026 [LWN.net]

Inside this week's LWN.net Weekly Edition:

  • Front: AI agent goes rogue; debuginfo; iocaine; revocable resource-management patches; 7.0 merge window; AccECN; LLMs and security; Humanitarian OpenStreetMap Team.
  • Briefs: upki; Asahi Linux progress; DFSG processes; Fedora in Syria; Plasma 6.6.0; Vim 9.2; ...
  • Announcements: Newsletters, conferences, security updates, patches, and more.

00:14

Me Wants Read [Looking For Group]

It’s dawned on me recently that I’m simply not reading enough these days: books, graphic novels, you name it. I used to ingest words faster than Red Bull, and for the last year or so, I’ve definitely slowed down. All that
Read More

The post Me Wants Read appeared first on Looking For Group.

Wednesday, 18 February

23:28

Clint Adams: Holger says [Planet Debian]

sq network keyserver search $id ; sq cert export --cert=$id > $id.asc

Posted on 2026-02-18
Tags:

22:42

F9: an L4-style microkernel for ARM Cortex-M [OSnews]

F9 is an L4-inspired microkernel designed for ARM Cortex-M, targeting real-time embedded systems with hard determinism requirements. It implements the fundamental microkernel principles—address spaces, threads, and IPC, while adding advanced features from industrial RTOSes.

↫ F9 kernel GitHub page

For once, not written in Rust, and comes with both an L4-style native API and a userspace POSIX API, and there’s a ton of documentation to get you started.

22:07

A Secret Project is Afoot at the Scalzi Compound! [Whatever]

What is it? I can’t tell you! When will you be able to know? I can’t say! But when I can tell you, will I? We’ll see!

What I can tell you is that Athena is working on it with me, she’s been great to work with so far, and my decision to hire her at Scalzi Enterprises was pretty smart. Clearly I know what I’m doing all the time.

Anyway, my kid’s awesome and we’re doing cool stuff. I hope we get to share it with you. Eventually.

— JS

21:56

Windows 11’s new MIDI framework delivers MIDI 2.0 [OSnews]

It’s been well over a year since Microsoft unveiled it was working on bringing MIDI 2.0 to Windows, and now it’s actually here, available for everyone.

We’ve been working on MIDI over the past several years, completely rewriting decades of MIDI 1.0 code on Windows to both support MIDI 2.0 and make MIDI 1.0 amazing. This new combined stack is called “Windows MIDI Services.”

The Windows MIDI Services core components are built into Windows 11, rolling out through a phased enablement process now to in-support retail releases of Windows 11. This includes all the infrastructure needed to bring more features to existing MIDI 1.0 apps, and also support apps using MIDI 2.0 through our new Windows MIDI Services App SDK.

↫ Pete Brown and Gary Daniels at the Windows Blogs

This is the kind of work users of an operating system want to see. Improvements and new features like these actually have a meaningful, positive impact for people using MIDI, and will genuinely give them benefits they otherwise wouldn’t get. I won’t pretend to know much about the detailed features and improvements listed in Microsoft’s blog post, but I’m sure the musicians in the audience will be quite pleased.

Whoever at Microsoft was responsible for pushing this through, managing this team, and of course the team members themselves should probably be overseeing more than just this. Less “AI” bullshit, more of this.

Think You Are the King of the Bus? [The Stranger]

I was excited to play. And then I lost. Twice. by Nathalie Graham

People who take the bus are superior to exclusive light railers, who are only cosplaying as serious transit riders. That measly track can be the gateway drug into the vastness of Seattle’s public transportation network. It can only take you two directions today—and a third and fourth when the Cross Lake Connection unites the 2 Line with Seattle proper on March 28. On the bus, you can go anywhere. 

At least, this is what I thought of myself. I take the bus frequently and have since I moved here almost 12 years ago. But a new game has humbled me. 

Routle is a new daily iteration in the vein of Wordle (for the logophiles) and Worldle (for the geophiles), except it puts transit route knowledge to the test. Made by software engineer, self-described “mysterious train-loving hacker,” and one-time San Francisco resident River Honer, Routle shows one unlabeled King County Metro (or Sound Transit light rail) route each day. Players only see the route’s shape—there’s no map of Seattle—and have five guesses at which route it is (wrong guesses will appear on the screen). As transit nerds will note, it’s missing a few ferries, doesn’t include RapidRide routes, and is limited to the Seattle area. Honer says she didn’t include everything because “Seattle’s bus network is very far reaching.” She’s open to feedback if people think she should expand the routes, but doesn’t want to make the game too hard.

Honer started with San Francisco’s transit routes and recently released versions of the game for us, Portland, and AC Transit in Alameda County, California.

Honer made the Seattle version because she used to spend summers here. She figured it would be a good option for Routle because of our “good number of iconic routes” and our population of transit enthusiasts.

I was excited to play. And then I lost. Twice.

It turns out I only know the buses I know, from the four Seattle neighborhoods I’ve lived in.

My husband, a lifelong Seattleite, got yesterday’s Routle (Route 3) in two guesses. Last night, we talked about how many routes in Seattle we knew. We attempted to name where each bus traveled from Route 2 through Route 79. This is hard to do. He paused to complain about the neutering of the 43, which used to run all the way to Ballard, and the 48, which used to reach Rainier Beach. 

I sent the game to my mother-in-law, also a lifelong Seattleite. She mused about it. She grew up taking one set of routes, then used another set in young adulthood, and a whole different set as a parent. The transit routes we know are extensions of our lives at any point in time.

It’s the same for Routle’s creator, Honer. She used to visit Seattle in the summers. “I remember when the link tunnel was the ‘bus tunnel,’” she reminisced. “I used the 12 and 10 buses to get to my summer job at Pike’s Place Market [sic]. I was also there when the South Lake Union streetcar was opened.” 

For me, I knew the University District routes well, but they changed when the light rail came. I remember taking the 45 to Ballard for a first date at the end of freshman year. I learned the 65 when I graduated and had to commute to my first job. Then, it was all Capitol Hill routes. (And others, but I will not be doxing myself, you freaks).

19:28

RIP Scalzi DSL Line, 2004 – 2026 [Whatever]

As most of you know, I live on a rural road where Internet options are limited. More than 20 years ago, DSL became available where I live, which meant that I could ditch the satellite internet of the early 2000s, which topped out at something like 1.5mbps and rarely achieved that, and which went out entirely if it rained, for a line that had a, for me, blisteringly fast 6mbps speed.

That was the speed it stayed at for most of the next twenty years, until my provider, rather grudgingly, increased the speed to 40mbps — not fast, but certainly faster — and there it stayed. Over time the DSL service stopped being as reliable, rarely actually got up to 40mbps, and started going out when it rained, like the satellite internet of old, but without the excuse of being, you know, in space and blocked by clouds.

A few months back I went ahead and ordered 5G internet service from Verizon, because it was faster and doesn’t have usage caps, which had been a stumbling block for 5G service previously. It’s not top of the line, relative to other services that are available elsewhere — usually 120+mbps, where the church’s service is at 300+mbps, and Athena’s in-town Internet is fiber and clocks in at 2gbps — but it’s fast enough for what I use the internet for, and to stream high-definition movies and TV. I held on to the DSL since then to make sure I was happy with the new service, because that seemed a sensible thing to do.

No more. The 5G wireless works flawlessly and has for months, and the time has come. After 20+ years, I have officially cancelled my DSL line. A big day in the technology life of the Scalzi Compound. I thank the DSL for its service, but its watch has now ended. We all must move on, ceaselessly, into the future, where I can download stuff faster.

I’m still keeping my landline, however, to which the DSL was attached. Call me old-fashioned.

— JS

18:49

Slog AM: ICE Left a $200 million Hole In Minneapolis’s Economy, Waymo Uses DoorDash to Close the Doors of Its Cars, Will the Seahawks Visit the White House? [The Stranger]

The Stranger's morning news roundup. by Charles Mudede

The Seahawks basically destroyed the Patriots to claim their second Super Bowl. Now comes the big question: Will the team visit the White House? Back in 2014, they made the trip and celebrated with then-President Barack Obama. But things are very different now. Trump is violently attacking cities and not even trying to hide his racism. The Seahawks have a lot of Black players, and the city they represent is considered a “Welcoming City.” How can this work out? Rumors recently circulated that the team had declined the standard invitation, but it’s now reported that the whole business is still very much up in the air. Also not verified is the rumor that the Seahawks haven’t received an invitation from Trump’s White House. Maybe both sides just want to keep their mouths shut and let this difficult matter quietly pass like two ships in the night.

Yesterday, at around 4:45 pm, we at the Stranger’s office saw through the windows something that had the likeness of snow. Was it the real stuff or not? We couldn’t tell. Maybe this was a collective hallucination. Today, expect a low of 31, a mostly cloudy morning, some rain in the afternoon, and, yet again, no snow.

KIRO Radio is popping the champagne because Seattle's noble attempt to improve the labor standards of hyper-exploited gig workers has apparently backfired. Drivers are now earning “20 cents less per hour than before.” The station blamed this drop on Seattle leaders who apparently have no contact with reality, with capitalist reality. And what the captains of this mode of accumulation never stop telling us is this: labor rights and rising wages are the sole cause of rising costs and the immiseration of the poor. Any other explanation is, according to them, not realistic. It’s just labor’s demand for more and more that’s the root of all evil, they say.

A quick thing about a book I’m currently reading. It’s called Capitalism: A Global History. It’s by German-born economist and historian Sven Beckert. It’s 1,344 pages. I’m near page 900. But what I've learned from this book is that the natural rate for wages in capitalism is zero. And the rise of wages is, essentially, nothing but the resistance by labor to this natural tendency. Beckert doesn’t exactly say this, but he does make it clear that a capitalism without any regulation must lead to its form of slavery, which is the commodification of the body. Wages above zero result in the commodification of labor power. I will stop there and now turn to the robots of the 21st century.

The best story I’ve heard in a minute is that Waymo, which is basically Uber without drivers, is turning to gig workers, such as those who work for DoorDash, to close car doors left open by flaky or crafty customers. And how much does Waymo pay for what can only be called the human touch? $11.25. So the zero-wage robot is not yet up to snuff.

Now, let’s turn to something that should really alarm Seattle’s leaders. Operation Metro Surge not only brought death, state-sanctioned lawbreaking, and general mayhem to Minneapolis; it also delivered a big blow to the city’s economy. The estimated cost so far of an operation that began on the first day of the present year and had nothing to do with protecting Americans from the “worst of the worst” is $203.1 million. The bulk of this cost is attributed to revenue small businesses lost ($82 million). The rest of the tab went to lost wages ($47 million), social services that experienced extraordinary stress ($17 million), overtime pay to police officers and other city officials ($4 million), and hotel cancellations ($4 million). The city thinks it will take years to regain ground from this complete waste of money.

Now that ICE is bringing its post-apocalyptic show in Minneapolis to an end, the border czar, Tom (Bribe Loving) Homan, is looking for the next theater. Homan: "I've said from day one that, you know, we need to flood the zone and sanctuary cities with additional agents.” That "sanctuary city” could be Seattle. And the cost of this performance, which is all it really is, will be terrific. The only thing that might protect us from this massive waste of time, lives, and money is our tech hub status, which means we play an important role in maintaining the only game in town, the gigantic AI bubble. 

 

Minneapolis officials have released an estimated tab on what Operation Metro Surge has cost city residents so far.

 

bit.ly/4cqEfFq

— FOX 13 Seattle (@fox13seattle.bsky.social) February 15, 2026 at 6:00 AM

 

So, you still want to talk about how high wages always backfire? Well, what’s this in the Seattle Times? The pay ratio for Starbucks CEO Brian Niccol is an astounding 1,749 to 1. Meaning: he earns $30,992,773, and the average worker earns $17,279. My god. The pressure to reduce wages to zero is way too real in 2026. Again, nothing but labor’s resistance to the true nature of capitalism prevents this catastrophe. If we do nothing, the zero law will be, to use the words of Marx, like “the law of gravity [that] asserts itself when a house falls about our ears.”

Let’s end AM with an ‘80s tune that compares love-enthrallment with the condition of a robot, the Pointer Sisters’ “Automatic.”

18:00

Antoine Beaupré: net-tools to iproute cheat sheet [Planet Debian]

This is also known as: "ifconfig is not installed by default anymore, how do I do this only with the ip command?"

I have been slowly training my brain to use the new commands, but I sometimes forget some. So, here's a couple of equivalences from the old net-tools package to the new iproute2, about 10 years late:

net-tools                  iproute2                                   shorter form                what it does
arp -an                    ip neighbor                                ip n                        show ARP/neighbor table
ifconfig                   ip address                                 ip a                        show current IP address
ifconfig                   ip link                                    ip l                        show link stats (up/down/packet counts)
route                      ip route                                   ip r                        show or modify the routing table
route add default GATEWAY  ip route add default via GATEWAY           ip r a default via GATEWAY  add default route to GATEWAY
route del ROUTE            ip route del ROUTE                         ip r d ROUTE                remove ROUTE (e.g. default)
netstat -anpe              ss --all --numeric --processes --extended  ss -anpe                    list listening processes, less pretty
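The read-only forms above are safe to try side by side on any Linux box with iproute2 installed; none of them change state:

```shell
# Read-only queries; nothing here modifies network state (assumes iproute2 and ss are installed)
ip a        # addresses on all interfaces (old: ifconfig)
ip r        # routing table (old: route)
ip n        # neighbor/ARP table (old: arp -an)
ss -tln     # listening TCP sockets, numeric (old: netstat -tln)
```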

Another trick

Also note that I often alias ip to ip -br -c as it provides a much prettier output.
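As a sketch, for bash that alias goes in ~/.bashrc (the alias shadows the real command, so use command ip when you need the full output):

```shell
# Make the brief, colored output the default (bash; put in ~/.bashrc)
alias ip='ip -br -c'
# To get the full output occasionally, bypass the alias:
#   command ip a
```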

Compare, before:

anarcat@angela:~> ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff permaddr xx:xx:xx:xx:xx:xx
    altname wlp166s0
    altname wlx8cf8c57333c7
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
20: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.108/24 brd 192.168.0.255 scope global dynamic noprefixroute eth0
       valid_lft 40699sec preferred_lft 40699sec

After:

anarcat@angela:~> ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
wlan0            DOWN           
virbr0           DOWN           192.168.122.1/24 
eth0             UP             192.168.0.108/24 

I don't even need to redact MAC addresses! It also affects the display of the other commands, which look similarly neat.

Also imagine pretty colors above.

Finally, I don't have a cheat sheet for iw vs iwconfig (from wireless-tools) yet. I just use NetworkManager now and rarely have to mess with wireless interfaces directly.

Background and history

For context, there are traditionally two ways of configuring the network in Linux:

  • the old way, with commands like ifconfig, arp, route and netstat, those are part of the net-tools package
  • the new way, mostly (but not entirely!) wrapped in a single ip command, that is the iproute2 package

It seems like the latter was made "important" in Debian in 2008, which means every release since Debian 5 "lenny" (!) has featured the ip command.

The former net-tools package was demoted in December 2016 which means every release since Debian 9 "stretch" ships without an ifconfig command unless explicitly requested. Note that this was mentioned in the release notes in a similar (but, IMHO, less useful) table.

(Technically, the net-tools Debian package source still indicates it is Priority: important but that's a bug I have just filed.)

Finally, and perhaps more importantly, the name iproute is hilarious if you are a bilingual french speaker: it can be read as "I proute" which can be interpreted as "I fart" as "prout!" is the sound a fart makes. The fact that it's called iproute2 makes it only more hilarious.

17:49

Free Software Directory meeting on IRC: Friday, February 20, starting at 12:00 EST (17:00 UTC) [Planet GNU]

Join the FSF and friends on Friday, February 20 from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.

Free Software Directory meeting on IRC: Friday, November 7, starting at 12:00 EST (17:00 UTC) [Planet GNU]

Join the FSF and friends on Friday, November 7 from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.

Free software activities in October 2025 [Planet GNU]

Hello and welcome to my October free software activities report.

GNU & FSF

  • GNU Spotlight: I prepared and sent the October GNU Spotlight to the FSF campaigns team, who will review and publish it on the FSF’s community blog and as part of the next issue of the monthly Free Software Supporter newsletter.

  • GNU Emacs:

    • bug#79629: I noticed that I was unable to customize the holiday-other-holidays variable using the setopt macro: my change did not seem to take effect. As Eli Zaretskii helpfully pointed out, this was because customizing holiday-other-holidays did not recompute the value of calendar-holidays, which is computed once, when the package is loaded.

      So I prepared and sent a patch 500a2d0cc55 to recompute calendar-holidays when its components are set.

    • bbabc1db258: While reading about custom-reevaluate-setting in the Startup Summary node of the GNU Emacs Lisp reference manual I noticed a small typo, so I committed a patch to fix it.

Misc

  • The Free Software Foundation celebrated its fortieth birthday on 4 October 2025 online and in person in Boston! I was not able to attend the event in person, so I recorded a video for the FSF40 volunteer panel held at the venue.

  • This month at work one of our Elasticsearch clusters experienced partial failure, and we needed to extract document IDs from a backup of one of the cluster’s shards. Elasticsearch uses Lucene under the hood and each shard is a standalone Lucene index, so I used Lucene’s Java API to write a little GetIDS class to query the index for all of its documents, and for each document print its _id field, decoding the binary-valued BytesRef as needed. The gotcha was that all of the BytesRefs seemed to have a -1 byte in the beginning, throwing off the recommended BytesRef.utf8ToString() method, so I had to reimplement that method’s logic in my program and have it use an adjusted offset + 1 and length - 1 instead.

That’s about it for this month’s report.

Take care, and so long for now.

GNU Parallel 20251022 ('Goodall') released [stable] [Planet GNU]

GNU Parallel 20251022 ('Goodall') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  idk who built GNU parallel but I owe them a beer
    -- ram @h4x0r1ng

New in this release:

  • No new features.
  • Bug fixes.


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
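As a minimal sketch of that xargs carry-over (the .log file names and the echo stand-in for a real command are made up for illustration):

```shell
# Run four jobs at a time over a list of names read from stdin.
# With xargs (output from concurrent jobs may interleave):
printf '%s\n' a.log b.log c.log | xargs -P4 -n1 echo would-gzip
# The GNU Parallel equivalent takes the same -P option, but keeps
# the output in input order (assumes parallel is installed):
#   printf '%s\n' a.log b.log c.log | parallel -P4 echo would-gzip
```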

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu ... rg/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c555f616391c6f7c28bf938044f4ec50
    12345678 c555f616 391c6f7c 28bf9380 44f4ec50
    $ md5sum install.sh | grep 707275363428aa9e9a136b9a7296dfe4
    70727536 3428aa9e 9a136b9a 7296dfe4
    $ sha512sum install.sh | grep b24bfe249695e0236f6bc7de85828fe1f08f4259
    83320d89 f56698ec 77454856 895edc3e aa16feab 2757966e 5092ef2d 661b8b45
    b24bfe24 9695e023 6f6bc7de 85828fe1 f08f4259 6ce5480a 5e1571b2 8b722f21
    $ bash install.sh

Watch the intro video on http://www.youtub ... L284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/1 ... 81/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparall ... igns/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)


If GNU Parallel saves you money:



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

17:14

Could Write­Process­Memory be made faster by avoiding the intermediate buffer? [The Old New Thing]

A little while ago, we wondered whether Write­Process­Memory was faster than shared memory for transferring data between two processes, and the conclusion is that it wasn’t. Shared memory, as its name implies, shares the memory between two processes: The two processes are accessing the same memory; there are no copies. On the other hand, the implementation of Write­Process­Memory allocates a transfer buffer, copies the data from the source to the transfer buffer, then changes memory context to the destination, and then copies the data from the transfer buffer to the destination. But could Write­Process­Memory be optimized to avoid this copy?

I mean, I guess you could do that in theory. I’m thinking, maybe create a memory descriptor list (MDL), lock and map the pages into kernel mode while in the context of the source, then change context to the destination and copy the memory to the destination. Repeat until all the memory has been copied. You don’t want to allocate a single MDL for the entire source block because the program might say that it wants to copy 100GB of memory, and if you didn’t cap the size of the transfer buffer, that would lock 100GB of RAM.

But it seems overkill and unnecessary to lock the source pages. It’s fine for them to be pageable. We’re okay with them faulting in as necessary.

I don’t know if there’s a way to map memory from one process into another except by locking it. I don’t spend a lot of time in kernel mode. But you do have to be careful that the mapping goes into the kernel address space and not the user-mode address space. Putting it in the user-mode address space would be a security vulnerability because the destination process can see the bytes on the source page that are not part of the memory being copied.¹

But really, all of this effort is pointless. We saw that the purpose of the Write­Process­Memory function is not inter-process communication (IPC) but to be a tool for debuggers. Debuggers are typically writing just a few bytes at a time, say, to patch a breakpoint instruction, and the Write­Process­Memory function actually goes out of its way to write the memory, even in the face of incompatible memory protections, though it does so in a not-thread-safe way. But that’s okay because the destination process is presumably frozen by the debugger when it calls Write­Process­Memory. A debugger is not going to patch a process while it’s actively running. The lack of atomicity means that patching a running process could result in the process seeing torn state, like a partly-patched variable or even a partly-patched instruction.

In summary, Write­Process­Memory was not intended to be used as an inter-process communication channel. Its intended client is a debugger that is using it to patch bytes in a process being debugged. The very high level of access required to call the function (PROCESS_VM_WRITE) is not suitable for an inter-process communication channel, since it basically gives the writer full pwnage over the process being written to. In the case of a debugger, you want the debugger to have complete and total control of the process being debugged. But in the case of IPC, you don’t want to give your clients that high a level of access to your process. And even if you get past that, the lack of atomicity and lack of control over the order in which the bytes become visible in the target process means that Write­Process­Memory is not suitable as an IPC mechanism anyway. There’s no point trying to make a bad idea more efficient.

¹ Or you could try it the other way: Map the destination into the source. But now you are giving the source read access to the destination bytes that share the same page as the destination buffer, even though the source may not have PROCESS_VM_READ access.

The post Could Write­Process­Memory be made faster by avoiding the intermediate buffer? appeared first on The Old New Thing.

16:21

[$] More accurate congestion notification for TCP [LWN.net]

The "More Accurate Explicit Congestion Notification" (AccECN) mechanism is defined by this RFC draft. The Linux kernel has been gaining support for AccECN with TCP over the last few releases; the 7.0 release will enable it by default for general use. AccECN is a subtle change to how TCP works, but it has the potential to improve how traffic flows over both public and private networks.

15:42

Thomas Lange: 42.000 FAI.me jobs created [Planet Debian]

The FAI.me service has reached another milestone:

The 42.000th job was submitted via the web interface since the beginning of this service in 2017.

The idea was to provide a simple web interface that lets end users create the configs for the fully automatic installation, with only minimal questions and without knowing the syntax of the configuration files. Thanks a lot for using this service and for all your feedback.

The next job can be yours!

P.S.: I'd like to get more feedback for the FAI.me service. What do you like most? What's missing? Do you have any success story about how you use the customized ISO for your deployment? Please fill out the FAI questionnaire or send feedback via email to fai.me@fai-project.org

About FAI.me

FAI.me is the service for building your own customized images via a web interface. You can create an installation or live ISO or a cloud image. For Debian, multiple release versions can be chosen, as well as installations for Ubuntu Server, Ubuntu Desktop, or Linux Mint.

Multiple options are available, like selecting different desktop environments, the language and keyboard, and adding a user with a password. Optional settings include adding your own package list, choosing a backports kernel, adding a postinst script, adding an SSH public key, choosing a partition layout, and more.

15:35

Fedora now available in Syria [LWN.net]

Justin Wheeler writes in Fedora Magazine that Fedora is now available in Syria once again:

Last week, the Fedora Infrastructure Team lifted the IP range block on IP addresses in Syria. This action restores download access to Fedora Linux deliverables, such as ISOs. It also restores access from Syria to Fedora Linux RPM repositories, the Fedora Account System, and Fedora build systems. Users can now access the various applications and services that make up the Fedora Project. This change follows a recent update to the Fedora Export Control Policy. Today, anyone connecting to the public Internet from Syria should once again be able to access Fedora.

[...] Opening the firewall to Syria took seconds. However, months of conversations and hidden work occurred behind the scenes to make this happen.

15:07

The Spurlocks of RSS-Land [Scripting News]

I saw a product announcement from Jake Spurlock -- a new feed reader called Today. From the description, it sounds well thought out.

He explains -- "Google killed Reader in 2013. I've been chasing that feeling ever since. So I built it."

I also know someone named John Spurlock, with whom I worked on some OPML and RSS stuff for Bluesky in 2023. I sent a note of congrats to him, when I really should've sent it to Jake.

Screen shot of the conversation I had with ChatGPT.

And text of the email I sent congratulating the wrong Spurlock.

  • Congrats on the new product!
  • Haven't tried it yet, I don't generally use Apple's store on my Mac, not sure why. I will do it though.
  • Your product looks nice and well-thought out.
  • And there are some ways we could work together now that I think you'll find interesting, like using FeedLand to get you instant updates based on rssCloud, assuming you haven't figured out how to support it from a client.
  • Also OPML subscriptions are nice too. Another thing I'd like to get going, and need someone to work with on to make it happen.

Also, I wonder if they're related? Have they met each other? Do they know of the havoc they are bringing to the formerly simple world of RSS?

One more thing, I wrote the foreword to a book Jake Spurlock wrote for O'Reilly about the Bootstrap Toolkit.

UI Changes [Ctrl+Alt+Del Comic]

Pretty soon we are going to push some UI changes to the website in support of the new update schedule/business model. Our change to a Patron-focused model has been really successful; the additional support has definitely balanced out what advertising has been failing to offer us for the past few years. That brings with it […]

The post UI Changes appeared first on Ctrl+Alt+Del Comic.

14:56

Dirk Eddelbuettel: qlcal 0.1.0 on CRAN: Easier Calendar Switching [Planet Debian]

The eighteenth release of the qlcal package arrived at CRAN today. There have been no calendar updates in QuantLib 1.41 or 1.42, so it has been relatively quiet since the last release last summer, but we have now added a nice new feature (more below) leading to a new minor release version.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e., business day lists), and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This release makes it (much) easier to work with multiple calendars. The previous setup remains: the package keeps one ‘global’ (and hidden) calendar object which can be set, queried, altered, etc. But now we added the ability to hold instantiated calendar objects in R. These are external pointer objects, and we can pass them to functions requiring a calendar. If no such optional argument is given, we fall back to the global default as before. Similarly, for functions operating on one or more dates, we now simply default to the current date if none is given. That means we can now say

> sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"), 
         \(x) qlcal::isBusinessDay(xp=qlcal::getCalendar(x)))
UnitedStates/NYSE        Canada/TSX     Australia/ASX 
             TRUE              TRUE              TRUE 
> 

to query today (February 18) in several markets, or compare to two days ago when Canada and the US both observed a holiday

> sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"),
         \(x) qlcal::isBusinessDay(as.Date("2026-02-16"), xp=qlcal::getCalendar(x)))
UnitedStates/NYSE        Canada/TSX     Australia/ASX 
            FALSE             FALSE              TRUE 
> 
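For completeness, the pre-existing global-calendar path still works alongside the new external-pointer objects. A minimal sketch, assuming only the long-exported setCalendar() setter and the new zero-argument default described in the NEWS entry below:

```r
# Minimal sketch of the global-calendar fallback: set the hidden
# package-global calendar once, then call date functions without
# passing a calendar object.
library(qlcal)

setCalendar("UnitedStates/NYSE")       # alter the package-global calendar
isBusinessDay(as.Date("2026-02-16"))   # FALSE: Washington's Birthday at NYSE
isBusinessDay()                        # no date given: defaults to today
```

The external-pointer form shown above and this global form can be mixed freely; functions only consult the global calendar when no calendar argument is supplied.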

The full details from NEWS.Rd follow.

Changes in version 0.1.0 (2026-02-18)

  • Invalid calendars return id ‘TARGET’ now

  • Calendar object can be created on the fly and passed to the date-calculating functions; if missing global one used

  • For several functions a missing date object now implies computation on the current date, e.g. isBusinessDay()

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Freexian Collaborators: Monthly report about Debian Long Term Support, January 2026 (by Santiago Ruano Rincón) [Planet Debian]

The Debian LTS Team, funded by Freexian’s Debian LTS offering, is pleased to report its activities for January.

Activity summary

During the month of January, 20 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).

The team released 33 DLAs fixing 216 CVEs.

The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 (“bullseye”), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 (“bookworm”) and Debian 13 (“trixie”)), including Debian unstable. We highlight several notable security updates below.

Notable security updates:

  • python3.9, prepared by Andrej Shadura (DLA-4455-1), fixing multiple vulnerabilities in the Python interpreter.
  • php, prepared by Guilhem Moulin (DLA-4447-1), fixing two vulnerabilities that could lead to request forgery or denial of service.
  • apache2, prepared by Bastien Roucariès (DLA-4452-1), fixing four CVEs.
  • linux-6.1, prepared by Ben Hutchings (DLA-4436-1), as a regular update of the linux 6.1 backport to Debian 11.
  • python-django, prepared by Chris Lamb (DLA-4458-1), resolving multiple vulnerabilities.
  • firefox-esr, prepared by Emilio Pozuelo Monfort (DLA-4439-1).
  • gnupg2, prepared by Roberto Sánchez (DLA-4437-1), fixing multiple issues, including CVE-2025-68973, which could potentially be exploited to execute arbitrary code.
  • apache-log4j2, prepared by Markus Koschany (DLA-4444-1).
  • ceph, prepared by Utkarsh Gupta (DLA-4460-1).
  • inetutils, prepared by Andreas Henriksson (DLA-4453-1), fixing an authentication bypass in telnetd.

Moreover, Sylvain Beucler studied the security support status of p7zip, a fork of 7zip that has become unmaintained upstream. To avoid leaving users with an unsupported package, Sylvain investigated a path forward in collaboration with the security team and the 7zip maintainer, looking to replace p7zip with 7zip. Note, however, that the 7zip developers do not disclose which changes fix CVEs, making it difficult to backport individual patches to fix vulnerabilities in released Debian versions.

Contributions from outside the LTS Team:

Thunderbird, prepared by maintainer Christoph Goehre. The DLA (DLA-4442-1) was published by Emilio.

The LTS Team has also contributed with updates to the latest Debian releases:

  • Bastien uploaded gpsd to unstable, and proposed updates for trixie #1126121 and bookworm #1126168 to fix two CVEs.
  • Bastien also prepared the imagemagick updates for trixie and bookworm, released as DSA-6111-1, along with the bullseye update DLA-4448-1.
  • Chris proposed a trixie point update for python-django (#112646), and the work for bookworm was completed in February (#1079454). The longstanding bookworm update required tracking down a regression in the django-storages package.
  • Markus prepared tomcat10 updates for trixie and bookworm (DSA-6120-1), and tomcat11 for trixie (DSA-6121-1).
  • Thorsten Alteholz prepared bookworm point updates for zvbi (#1126167) to fix five CVEs; taglib (#1126273) to fix one CVE; and libuev (#1126370) to fix one CVE.
  • Utkarsh prepared an unstable update of node-lodash to fix one CVE.

Other than the work related to updates, Sylvain made several improvements to the documentation and tooling used by the team.

Individual Debian LTS contributor reports

Thanks to our sponsors


14:49

An Asahi Linux progress report [LWN.net]

The Asahi Linux project, which is working to implement support for Linux on Apple CPUs, has published a detailed 6.19 progress report.

We've made incredible progress upstreaming patches over the past 12 months. Our patch set has shrunk from 1232 patches with 6.13.8, to 858 as of 6.18.8. Our total delta in terms of lines of code has also shrunk, from 95,000 lines to 83,000 lines for the same kernel versions. Hmm, a 15% reduction in lines of code for a 30% reduction in patches seems a bit wrong…

Not all patches are created equal. Some of the upstreamed patches have been small fixes, others have been thousands of lines. All of them, however, pale in comparison to the GPU driver.

The GPU driver is 21,000 lines by itself, discounting the downstream Rust abstractions we are still carrying. It is almost double the size of the DCP driver and thrice the size of the ISP/webcam driver, its two closest rivals. And upstreaming work has now begun.

An update to the malicious crate notification policy (Rust Blog) [LWN.net]

Adam Harvey, on behalf of the crates.io team, has published a blog post informing users of a change in the team's practice of publishing information about malicious Rust crates:

The crates.io team will no longer publish a blog post each time a malicious crate is detected or reported. In the vast majority of cases to date, these notifications have involved crates that have no evidence of real world usage, and we feel that publishing these blog posts is generating noise, rather than signal.

We will always publish a RustSec advisory when a crate is removed for containing malware. You can subscribe to the RustSec advisory RSS feed to receive updates.

Crates that contain malware and are seeing real usage or exploitation will still get both a blog post and a RustSec advisory. We may also notify via additional communication channels (such as social media) if we feel it is warranted.

14:21

Link [Scripting News]

New account on Twitter: DWiner43240. The old one dating back to the dawn of time is disabled, so at least the new owners can't post anything there, if I understand correctly.

Of Two Bloods [Original Fiction Archives - Reactor]

Original Fiction historical mystery

Of Two Bloods

Chronicling the secret exploits of the great detective’s illegitimate, but highly observant, younger…sibling…

Illustrated by Katherine Lam

Published on February 18, 2026
An illustration of two men having a discussion before a wall-sized portrait while a nun and a woman seated nearby watch them.


Novelette | 9,450 words

Paul Chambers emerged from behind the silently opened door. “Your secret is safe with me,” he said.

The young man Chambers addressed started guiltily, half rising from his wing-back chair. “What secret?”

“The secret which our…colleague…threatened to reveal. Your race.”

Royal Bridges flushed angrily. “You—listener at keyholes! You spy!”

Chambers shrugged his slim shoulders. “Spy? Not quite. But I hardly need be anything of the sort to have overheard Mr. Spencer bellowing about your ‘dirty black secret.’ Tell me, do you really intend to help him with his inheritance problem? Investigating such matters is by no means your métier.” Descending the two steps to their shared parlor, the young medical student took the twin to Bridges’s seat. “Or do you have some other means of defending your reputation against his demands?”

Bridges sank back into the shelter of his chair, covering his face with large hands. “No. No defenses against him, and no means of assisting him in his fight with his late uncle’s alleged wife. I’m no attorney. I’ll have to trust Spencer—though God knows why I should.” He raised a suddenly bloodless face. “Or, come to think of it, why should I trust you? We only met a little over a month ago. You’re not even American!”

Chambers smiled, but not, it seemed, at his fellow student. “If you were provided with the means to assist Spencer, would you?”

Bridges groaned. “But how? He expects me to find proof his uncle never married this housekeeper of his. A negative—”

“—is notoriously difficult to prove. Yes. But if you could—”

“My heritage would no doubt be revealed at any rate. It’s too scandalous a secret for him to keep it.”

“Then there’s no point in me offering you my assistance.” Chambers’s expressive eyes made this statement a question.

“Your assistance? But why should a wealthy Brit—”

“Son of a wealthy Englishman. And illegitimate,” Chambers added self-deprecatingly.

“Still, why should you care about a quadroon’s fate? It’s ruin for me, to be sure, but for you? Granted you’ll be seen as a dupe, but that’s no reason to involve yourself.” Bridges shook his close-cropped curls. “Best start packing up your belongings tonight. I’m to give Spencer my answer in the morning.”

“Tell him yes.”

“I can’t!” Surging to his feet, Bridges stormed back and forth before the empty hearth. “I can’t, don’t you see?” Twice he passed the calm face of his apartment-mate, then whirled to confront him. “I don’t have the least idea how to begin!”

“But I do, thanks to special…training as a child.”

“You! I say again, why should it be any concern of yours if I am expelled from school, driven from this house, shunned by all my former friends—”

“Why? Merely because of this.” And raising one gloved hand, the young Chambers removed a handkerchief from his breast pocket with a flourish, then wiped the white silk delicately along one high, ivory cheekbone. Where the silk had passed, the skin was darker than Bridges’s own.

“Mr. Spencer,” Bridges said, gesturing to the empty wing-back chair, “if you would kindly take a seat—”

“We have no business to discuss,” the young man said furiously, “in the presence of a third party.”

The small, almost dainty figure seated in the second wing-back chair spoke. “You’ve already discussed your business in the presence of a third party.”

Spencer’s head jerked back. Then his eyes narrowed to an obsidian glitter and he turned to face Bridges directly. “I told you this conversation was to remain between us!”

“I occupy the apartment’s other bedroom,” Chambers said. “Sir, the difficulty would have been not to hear your rather forcefully stated case.”

A pallor came over Spencer’s strong-boned countenance, perhaps at the realization that his demands of the previous evening might not be in accord with laws governing extortion in the Commonwealth of Massachusetts.

“Rest assured,” Chambers continued, “I am already privy to your mission, and—”

“And you’re going to try blackmailing me with what you think you might have overheard?” Spencer interrupted, with a bitter laugh. “My late father left me a very modest estate, and it is already close to exhaustion.”

“Mr. Spencer, you mistake my intention,” Chambers said. “I am offering to be of assistance in proving your claim to your late uncle’s estate.”

“I recognize you now,” Spencer said, eyes narrowing again to glittering black slits. “You’re that English fop who’s a couple years ahead of Bridges in the medical school. Some toff’s by-blow, everybody says. Why would any white man, even illegitimate, come to the aid of a subhuman like Bridges?”

“As you might imagine,” Chambers said, “the issue of inheritance cuts very close to the heart of a man who’ll never inherit his natural father’s wealth. I’ll not stand idly by and watch a man cheated out of an inheritance rightfully his.”

Spencer belatedly removed his top hat and used his free hand to push a spill of straight black hair off his brow. “Here.” He thrust the hat into Bridges’s hands as if he were a servant, then extended one broad hand toward Chambers. “Of course.” His accent was Boston Brahmin. “Of course you’d help a fellow white man. Forgive me—”

“Mr. Paul Chambers.” Chambers rose to shake the offered hand, which responded with a crushing grip. Chambers’s expression did not change.

“I’m R. Howard Spencer, Junior,” Spencer said, releasing Chambers’s fine-boned hand to sink into the chair Bridges had offered.

“Delighted, Mr. Spencer.” Chambers resumed his own seat. “To proceed. In order to investigate the marital claim of your late uncle’s housekeeper, Mr. Bridges and I will need more information—”

“What?” Spencer’s face darkened. “What more do you need than the names of my uncle and his housekeeper?”

“That will become clear,” Chambers said, “as Mr. Bridges and I ask our questions. It’s better for us to have too much information than not enough.”

“Of course,” Spencer said, irritably raking a hand through his thick dark hair. “Proceed.”

Chambers turned slightly in the chair so he faced both Spencer and Bridges.

“Mr. Spencer,” Bridges said, “what is the housekeeper’s name?”

“She calls herself Lucia Spencer, as if that Italian trollop has any claim to my family name,” Spencer said. “Her real name’s Lucia Giuliano. Straight off the boat from Sicily or some other degenerate clime I’ll warrant—”

“You refer to Dr. Agassiz’s theories of the polygenetic origins of the human family?” Chambers said mildly. “We’re familiar with them, thank you. Whether or not they’re true, can you confirm that your late uncle’s housekeeper is at any rate an immigrant?”

“I don’t know,” Spencer said. “What else could she be? When I think of the way these d___ degenerates are overrunning this fine land and polluting our good Anglo-Saxon stock—”

“I take it,” Chambers said, “your uncle had children by his housekeeper?”

“Of course not!” Spencer burst out. “The wench is childless. Anyway, a fine, upstanding merchant like Uncle Will—William Francis Spencer—would never have debased himself by touching a subhuman woman. Whatever gave you such a disgusting idea?”

“Not all men hold to such ideals of purity,” Chambers said.

“How long was Lucia in your uncle’s employ?”

“Much of my life. I’m eighteen, so—” Spencer fell silent, calculating. “She was in his service twelve years.”

“Did she reside with your uncle,” Chambers said, “or—”

“All my uncle’s domestics lived downstairs.” Spencer gave a fashionable address on Beacon Street.

“Then the twelve years of Miss Giuliano’s service were spent entirely in Boston?” Bridges said.

“Yes,” Spencer said. “Uncle Will became wealthy as a trading man, traveling the world. Retired, settled in that fine house in Back Bay, and hired a domestic staff. They included Lucia Giuliano.”

“And is Miss Giuliano still in residence?” Bridges asked.

“My lawyer got her kicked out.” Spencer’s face was stony. “She’s got no right to be there. Or to keep me out. But her lawyer’s got the house tied up so I can’t move in.”

“Lawyers,” Chambers said, shaking his head. “I sympathize with your trials, Mr. Spencer. They are considerable.”

“Very true, Mr. Chambers,” Howard said. “None can know how I suffer. And when I leave here, it is to see them again.”

“I regret that we have so few questions left with which to detain you from such unpleasant company.”

“That’s quite all right.” Spencer favored Chambers with a rueful smile. “I’m grateful for anything you can do to end my dependence on legal counsel and gain me my inheritance.”

“That brings us to the matter of your uncle’s last will and testament,” Chambers said. “I take it there is none?”

Spencer smirked. “Indeed, there is not.”

“Are you sure?” Chambers said.

“Uncle Will never got around to preparing one. His law firm served my late parents, and also serves me.” Spencer smiled. “I have my information on good authority.”

Chambers inclined his head.

“Your uncle was a traveling man, Mr. Spencer,” Bridges said. “But he was born in Boston?”

“Uncle Will and my father—he was Uncle Will’s younger brother—moved down from Maine—Portland, it was—before they were twenty. My father came to study law at Harvard, but Uncle Will never gave a d___ about school. He found work on a clipper, did well enough to acquire his own ship and become a merchant himself.” Spencer’s voice grew harsh. “Did very well, indeed. But never married, never had any children. I’m his brother’s only child, so I’m the heir. But now that he’s passed away”—his complexion grew bright—“that Italian b___ is trying to defraud me with her false claim that they were married.”

Chambers rose, extending his small hand. “Rest assured, Mr. Spencer,” Chambers said, “Mr. Bridges and I will do everything in our power to see that your Uncle William’s estate goes to the rightful heir.”

A pair of pewter tankards clashed in the tobacco-fogged air. “To wives and sweethearts: May they never meet!” The stout, sandy-haired man on the other side of their time-polished table waited for Chambers’s and Bridges’s polite laughter, then gulped his beer.

“Pity you can’t join us, Mr. Chambers. The Liberty Bush serves a rare fine ale—almost as good as one out of your English breweries.”

Chambers met the man’s questioning gaze head on. “Yes, well, it’s no doubt better than this”—he swirled a tarry liquid in a narrow glass—“spirit, shall we call it? But my poor constitution won’t allow me to share your pleasure. Though if you’ll pardon my abstention, I’ll stand the two of you another round.” Ignoring Bridges’s sudden glowering, he signaled for the serving girl.

The order placed, Chambers turned once again to Carteret, as the sandy man was called. “So you served under Captain Spencer for—how many years?”

“Signed on as cabin boy in 18__. Twenty-six years that’d be, till I give my notice as first mate on hearin he was sellin his ships and investin the proceeds. He was a wonderful easy master, Captain Spencer, and I couldn’t see workin for any other.”

“A longstanding acquaintance, then. What did you know of his marriage to an Italian named Lucia?”

“An Eye-talian? Aye, likely he had one of them—maybe that North End gal he hired to take care of his house and such? She’s the only one I remember. Built like a brick s___house.”

Bridges leaned forward. “But was he married to her? It’s the relationship’s legal standing we’re interested in.”

Doubt wrinkled Carteret’s forehead. “Wonderful easy he was about that sort of thing. Wouldn’t have been any trouble for him gettin married to her, I guess, like he done with some of the others.”

“‘Some of—of the others’? Do you mean to tell us—”

Chambers silenced Bridges by laying a hand on his arm. “How many others were there—whom you yourself observed?”

“Well—” The response was delayed by the arrival of the freshly ordered beer. Bridges shoved his new tankard across to Carteret and received a matter-of-fact nod in acknowledgement. “My sincere thanks to you both, gentlemen. And here’s your health.” He raised his second tankard, drained it, set it down to one side, and wrapped a sunburnt hand around the third. “To answer your in-choir-ee, I didn’t see the need to keep a strict accounting.”

Over the next quarter hour, Carteret regaled the amateur investigators with a tale of approximately a dozen close female companions to his captain, met in port and under sail. At least half of these the crew had addressed as “Mrs. Spencer,” by custom if for no other reason. Others had gone under more colorful sobriquets.

They left Carteret in possession of two more pints, themselves not much the wiser as to anything except the rather salacious details he’d retained of the companions’ physical attributes. Long, dark hair seemed to be a trait all had shared—“Though whether straight or curly didn’t make much difference,” the former first mate noted. A predilection for the Junoesque could also be discerned, as Bridges told Chambers on their way home. “But what good that will do us I can’t say.”

“Can’t you? I suppose it isn’t very helpful. But we did discover something as to their countries of origin, their Christian—or ought we rather to say their given—names, and, most importantly, the order in which these lovely women appeared in their role as the captain’s lady.”

“I’m sorry,” Bridges said, “but I remain unclear on the relevance of the sequence of the captain’s early loves to Spencer’s claim on his uncle’s estate.”

“If I may clarify in a word?” Chambers said.

“By all means.”

“Bigamy.”

Bridges stared at Chambers for several seconds. His lips were parted, his eyes wide. Chambers smiled faintly.

“Of course!” Bridges said. “Even should the Italian girl produce a legitimate marriage license, it would be invalidated if her husband were previously married and never divorced.”

“And,” Chambers added, “if we find evidence of such.” He resumed walking.

They ascended the stone steps to the house where they rented their rooms. Bridges dropped his key as he took it from his pocket. Chambers bent first to retrieve it, causing Bridges to rather embarrassingly bump his nose against the back of the Britisher’s neck. The jolt he felt must have been caused by the blow to his pride, for there was little pain. Both apologized.

As they climbed to their upper-story flat, Bridges picked up the thread of their conversation again. “Yes, I can see that the earlier the marriage in such a series, the more likely its legitimacy,” he admitted.

Chambers inclined his head. “We’ll start with the earliest two.”

“With only two predecessors to Miss Giuliano to investigate, I suppose we should count ourselves lucky,” Bridges continued. “We may even finish before end-of-term. It won’t matter one jot that her marriage lines disappeared in a fire at the state archives. Young Spencer’s lawyers might have saved themselves the trouble of corroborating that disappearance with the housekeeper’s counsel.”

Carefully stripping his gloves, Chambers disposed of them neatly inside his hat. “Ah. But if we disprove Miss Giuliano’s claim via this route, it will be due to validating the claim of another. Have you thought of what our colleague’s reaction will be to that?”

The next afternoon, as the autumn sun slanted toward the west, Bridges entered the apartment. He found the parlor empty and Chambers’s bedroom door closed. Bridges went to his apartment-mate’s door and called, “Back from classes. And you?”

“Back, although not from classes,” replied Chambers’s voice from within. “Cables have been dispatched to the last known whereabouts of the Indian woman in Seattle and the Chinese woman in Macau. I also telegraphed some contacts my half brother has—a newspaper reporter in Seattle, a Portuguese colonial official in Macau.”

“You speak Portuguese?”

“And write it,” Chambers’s voice replied. “We’re a polyglot lot, my family. After departing the telegraph office I paid a visit to the late sea captain’s fine Back Bay home. All his servants have been released to seek new employment at locations unrevealed. Fortunately, the adjoining neighbor’s house girl proved quite garrulous.”

“She told you where the servants have gone?”

“She had no idea,” Chambers’s voice replied. “She also had no idea whence the alleged wife has taken herself. But she was quite convinced that Miss Giuliano was Captain Spencer’s wife. She also offered a significant piece of new information.”

“What is that?” Bridges said. “And why are we straining our voices in this manner? Why, pray tell, must you give me information through your closed door?”

“The reason for that will be made clear directly,” said Chambers’s voice. “As for the new information: It seems that when Miss Giuliano entered Captain Spencer’s home twelve years ago, she brought with her a younger sister, a five-year-old named Maria Teresa, whom she and the captain raised as a daughter. And the talkative servant told me where we might find this sister.”

“She would be seventeen now,” Bridges calculated. “Of marriageable age. Is she still in Boston?”

“She is indeed. And unmarried.”

“Her maiden state may present some difficulties,” Bridges said, “for two unknown men attempting to pay a call.”

“More than you have imagined.” Chambers’s voice sounded amused. “Maria Teresa Giuliano resides at a convent school. Which is why I have adopted measures you will find to be of an extremely shocking nature. Brace yourself. Are you ready?”

“More than ready,” Bridges replied in a bored drawl.

Chambers’s door swung open.

If Bridges had appeared thunderstruck at the notion that bigamy would save his career at Harvard, he now took on the semblance of a man who’d just received irrefutable proof that ghosts were real, or discovered an antediluvian monster stepping into his parlor. His mouth dropped open, his hand flew to his chest, and he reeled backward as if he had received a tremendous physical blow. His heel struck an object and he fell back, arms flailing, to land in the seat of his wing-back chair.

Finally he spoke, but almost inaudibly. “Chambers? But—no!” His voice was gaining strength and volume, and perhaps the slightest note of panic. “Where are you hiding, Chambers? This—this beautiful woman simply cannot—cannot—be you!”

“And yet—” said the handsome, ivory-complected woman, perfectly coiffed, and dressed in the height of fashion from her hat and wig to her gloves and shoes, dipping a graceful curtsey as she spoke, “—and yet, Chambers c’est moi, Monsieur Bridges.”

“But—but—this is impossible!” Bridges said. “If I saw you on the streets, I would never believe—I cannot believe, even knowing—Paul, I would swear on the Good Book and my own dead mother’s soul that you are a woman.”

“Well,” Chambers said modestly, “I can only say I’ve learned from the best. My half brother is acclaimed on two continents as a master of disguise.”

“Your half brother is a master of disguise?” Bridges said. “And your family is a bunch of polyglots. And you study medicine, but investigate like a seasoned Pinkerton operative, and you received ‘special training as a child.’ And you’re of the British elite—Dear God above”—Bridges surged to his feet—“you’re a Holmes!”

“In all but name.” There was the faintest note of sorrow in Chambers’s voice. “Now,” he said more briskly, “we’ll need to leave separately. You will wait here several minutes, then rendezvous with me at the entrance to Sanders Theatre. It’s fortunate we reside in Cambridge, where women walking alone aren’t a novel sight. But if you’re seen escorting a woman from our apartment, you may not need young Mr. Spencer to get you kicked out of Harvard.”

“Understood,” Bridges said forcefully.

“Also,” Chambers added, extending an iron buttonhook, “I’ve been able to fasten the stays of my corset well enough, I believe, for an evening’s deception. But for the sake of speed I simply must have your assistance in buttoning my boots.”

Bridges bent to the task. His face was hidden, but a betraying flush colored with scarlet the very tips of his ears.

The sitting room in which Maria Teresa Giuliano was to receive her callers was plainly furnished but almost painfully clean. Examining the sill outside the spotless windowpanes—ceaseless observation being a habit ingrained in him by his famous older brother—Chambers noted that it, too, was free of the sooty residue so common to urban environments. Satisfied in his comprehension of the room’s orientation in relation to the street, he took his seat beside a pie-crust table that he might have a resting place for his reticule. In keeping with his current public role, he spread his gathered skirts with care so they wouldn’t be creased or crushed by his sitting.

The room’s one door opened to admit a tall, sturdy-looking young woman with a smoothly restrained and unfashionably severe hairstyle: a low chignon. A nun followed her and stationed herself at the exit as if to prevent her charge’s escape—or the escape of anyone.

“How do you do?” A brief curtsey, and a bow from Bridges in response; Chambers rose and executed his own greetings as he’d been taught. “You must be Miss Pauline Chambers, and Mr. Royal Bridges? And of course I’m Maria Teresa Giuliano, and this is Mother Anna Elizavetta. Tell me, how do you come to know my sister?”

“We don’t,” said Chambers, smiling so gently as to remove from the words any hint of contradicting harshness. “We merely wish to confirm with you some facts pertaining to her claim to be married to Captain William Spencer—”

“Her ‘claim’! You would dispute it? But it is truth!” Giuliano had not seated herself; she stood like a figurehead, braving invisible disdain. “Whom do you represent—the Chinese woman? But she is dead, died without issue!”

“No, no!” Bridges started forward, hands stretched out and patting the air as if to calm it. “Quite otherwise—we wouldn’t dream of distressing you in such a manner. We only—”

The girl ignored him. “You!” She threw herself to her knees at Chambers’s feet. “You are a woman, and gently bred, I can tell at a glance. Have pity—don’t let my sister be slandered so! Her name dishonored—and we would lose everything, all she has worked for. All! All! Surely you understand….” Harsh sobs obscured the rest of her speech.

Taking the advantage granted by his dress, Chambers seized Giuliano by both her plump hands and dragged her unresisting from her pose of supplication. “You must be strong for Miss Lucia,” he admonished her. “Here. Dry your tears and quiet your mind. We mean you no harm.” Chambers’s silk handkerchief reappeared, now scented with violets.

Composing herself with this aid and a glass of wine procured at the orders of the attendant nun, Giuliano at length proved a fount of information—none of which would aid Spencer. She knew where her sister had fled, but would not share this intelligence. She had seen the papers her sister kept carefully locked in a steel box, and believed them genuine. She was entirely confident they must include both a private copy of the marriage license and the captain’s will; however, she reluctantly admitted she had not herself seen them. More, she had celebrated Mass with both her sister and the captain hundreds of times over her twelve years in the Spencer household, with attending clergy according every appearance of accepting the bond’s legitimacy. The Macau Chinese wife—partner in an earlier liaison, but deceased—she knew of from a shrine-like arrangement in the captain’s study: a small table where novenas burned continuously, and an imposingly large portrait hung on the wall above it among the old man’s ubiquitous charts and maps. When Chambers expressed diplomatically worded surprise at Captain Spencer’s Catholicism, Giuliano reported that he had converted from Congregationalism to win the Chinese wife. So far as Giuliano knew, the conversion had created no rift with his late brother’s family.

She appeared to have no knowledge of the Indian in Seattle.

Chambers leaned slightly forward in the chair he’d resumed as Maria Teresa composed herself. “Miss Giuliano, how well are you acquainted with your cousin?”

Maria Teresa’s black eyes flashed. “I do not understand the will of God sometimes! Why does He send my cousin to dispute my sister’s inheritance, when he can be no blood relative of ours?”

The rest of the room’s inhabitants stared in shock.

Chambers recovered first. “Howard Spencer Junior is not of your blood? He is adopted, then? Do you know this with absolute certainty?”

“My sister told me! She swore it was so when his lawyers forced her out of her house!”

“Is he aware of this himself?”

“I cannot say!” The fierceness of her tone matched her eyes. “I have not seen Howie since he was sent as a big, clumsy boy to a military boarding school in Pennsylvania. Then, he did not know of it.

“And I rejoiced when he was sent away. He behaved abominably to girls.”

Chambers’s expression turned to granite. “He hurt you?”

Mother Anna Elizavetta’s expression had gradually changed from astonishment to the sternness of a drill sergeant. Now she gave gruff orders: “Maria Teresa, you heap indiscretion atop the blasphemy of questioning God’s will! Leave the room at once!”

His mind filled with visions of setting fire to the dead captain’s brownstone as a method of forcing the vanished housekeeper to reveal her elusive documents’ whereabouts, Bridges joined Chambers in a hansom cab summoned by Mother Anna Elizavetta to the convent steps. Dusk purpled the air. Within the cab’s close confines, he found Chambers’s nearness suddenly unbearable.

He drummed his fingers on the window’s lever. He shifted from side to side on his inexplicably uncomfortable seat. “Has this driver taken a wrong turning? Surely we should have reached—”

“Hush! I’m trying to think!” A glance at the frowning severity of his companion’s brow inured Bridges to suffering the rest of their ride in silence.

Morning saw Bridges bound for class and Chambers, with the addition of a walking stick to his accustomed suit and top hat, eschewing the halls of learning for the precincts of a more commercially minded muse. His half brother’s journalistic contacts confirmed the rumors of Spencer’s adoption, but could provide no proof of it. An attempted visit to the offices of the would-be heir’s attorneys was productive of nothing along those lines and only served, Chambers ruefully admitted to Bridges when they met in the street, to excite suspicion.

“Fortunately, the card I presented gave an assumed name.”

Bridges frowned. “I do not like practicing so much deceit.”

“Nor do I,” Chambers admitted. “And yet I dislike martyrdom even more.” Silk handkerchief suddenly in hand, he mimed the gesture of wiping the artificial ivory from his cheek, recalling to his roommate the necessary charade they shared.

“Yes…well, perhaps—” A passing street-car’s thunder gave him an excuse for leaving his sentence unfinished. “Are you for home now?” The dim blues of autumn’s early evening were closing in, and he expected an assenting answer.

But, “No,” Chambers replied. “I have another interview to conduct still. The lovely Maria Teresa must know more than she has so far told us.”

Bridges felt a surprising twinge of jealousy. He hadn’t realized the strength of his attraction to the girl. “How can you gain entrance to her?” he objected.

“I rather fancy I will find a way.”

Full darkness had fallen by the time Chambers stood before the convent walls. As he’d marked on his earlier visit, the window of the sitting room where he and Bridges had been received overlooked this narrow, neglected-looking thoroughfare. A solitary streetlamp lighted greasy cobbles and tightly boarded windows.

The glass of the window he’d selected was dark, as he’d suspected it would be. As he’d hoped. The clean sill outside of it had led him to believe it a customary point of egress for the less docile of the institution’s habitués. What served as egress would most probably work as a means of ingress too.

Sure enough, on examination, the path to the window became plain: decorative stone carvings, fortuitously placed brackets and fittings—to climb up or down this way would cost a maiden a temporary loss of modesty, but it would not too greatly tax her strength. For someone of Chambers’s build and habits, mounting to the window was the work of mere moments. He attained his goal quickly and peered in to ascertain the room’s emptiness. His breath barely fogged the panes. Bracing himself with one hand against the pipe securely bolted to the stone wall, he pulled open the section of the window whose latch he had earlier surreptitiously released.

A pause to let his eyes adjust to the near-nonexistent light of the clouded skies filtering into the darkness of the building’s interior. Then, quietly as a gray cat, Chambers opened the room’s door and entered the rush-carpeted passageway. One flickering candle at the far end showed stairs winding away to higher and lower floors. As he had calculated based upon the sounds of Maria Teresa’s departure, her living quarters lay only a few steps in the opposite direction. Her door was unlocked. He shut it behind him. The blackness in which he stood was barely alleviated by the room’s mean little window. As his eyes adjusted, his nostrils flared at the scent of the sachets hanging in her wardrobe, her hair oil, her—

“Miss Chambers? Is that you? I hardly know how I suspect—”

“Shush!” In an instant Chambers was at her side, a small hand flung over her soft mouth. “We must speak,” he whispered. “Not here. Somewhere we won’t be overheard.” Reluctantly he released his grip, letting her sink back to the bed from which she’d risen.

“Why are you dressed as a man?” Her voice was subdued, but still might rouse the watchful nuns.

“I will explain all—elsewhere! Do you know of a spot we might go to? Secluded yet close by?”

“The garden. All who have not made their vows—I’ll take you,” she said, interrupting herself. Back along the passageway she led him, his slim hand tucked unnecessarily into her much larger one. Out the window, down the exterior wall most featly, and back into the convent precincts via a silent, evidently well-oiled gate.

The smell of drying flowers filled the air, just a little sweeter than hay. Mud slithered beneath his shoes as Maria Teresa took his hand again and led him off the path, to a backless bench of pale marble. It was almost as white as the girl’s nightgown.

“Now,” she said, seating herself and pulling him down to sit beside her. “What are you doing here? And clad so strangely?” For some reason she had failed to release his hand. “If I didn’t know you for a woman—”

“If you know me for that, you’re wiser than all Boston.” His glance dropped to the ground. “Your poor feet are bare!” he exclaimed.

How had that escaped his notice? If he was unobservant in such a matter, what else had he missed? Scanning their surroundings, he saw immediately the shadow across the gap where the garden gate hung open. What could it be? It shifted minutely—alive.

“Miss Giuliano, you trust me?”

“You may call me Maria Teresa if you wish. And yes, Miss Chambers—Pauline? I trust you—somehow. It is—”

“Stay here!” he commanded. Taking his hand from hers, he stood and walked unhesitatingly toward the blocked gate.

When he passed through it was clear.

Continuing onward as if nothing were amiss, Chambers headed toward the unlighted end of the street outside. Footsteps followed him, as he had anticipated. When he turned to face his foe, however, he saw only the girl. Almost he shouted at her to return to safety, but the noise would attract unwanted attention. Sighing with frustration, he walked back the way he’d come, gesturing at her to retreat. Instead she advanced till they were once again able to whisper to each other.

“You mustn’t be caught!” Chambers told her. “Go to the garden! Your room!” It was useless. She clung to his arms; refused to be shaken off.

“No! You have to tell me why you came here!”

There had been no good reason. Unlike his half brother, he’d acted irrationally. “I’ll find another way to explain that,” he promised. “We’ll meet again, but at the moment you’re in danger—Danger! You must leave! Now!”

Suddenly he spun the two of them around as if dancing the wildest waltz. A shot cracked the night in half, thudded into a wooden door on the left. Another hit Chambers’s shoulder. He jerked and slumped into Maria Teresa’s arms. The sound of running feet receded into the distance.

“Oh! Are you all right?”

“No.” He slid to the pavement. “Get away from here. Summon Bridges. I need treatment.”

“I’m not leaving!” she said, with a stubborn toss of her head. “And would not a doctor be better?”

A doctor would cause trouble. Bridges’s medical knowledge would be sufficient. Chambers tried to say as much, tried to rouse himself to speak. It wasn’t possible. The whirling blackness swallowing him lifted only briefly. Three times: once to reveal stumbling legs that he ought to have recognized immediately as his own, a second time in the moldy and miraculous interior of a hansom cab, a third as he gazed up at the worried countenance of his friend. The expression on Bridges’s face soon went from concern to horror-stricken shock.

“I’m not so badly wounded as that, surely?” Chambers joked. But he knew quite well that wasn’t the problem. The problem was that in preparation for administering medical aid Bridges had, naturally enough, stripped him, removing the accustomed bindings. Chambers felt the room’s air moving coolly against his exposed breasts.

After much argument, Bridges agreed to let matters remain as they had been for a little while longer—at least until the neutralization of Spencer’s threat. Weak from loss of blood, Chambers was hardly in shape to remove himself from their shared flat, as Bridges had to acknowledge. The British man—woman—no, it was best to think of him still as a man, as long as the two of them remained under one roof…Chambers kept almost entirely to his room, sleeping. Recovering quickly, Bridges hoped.

A cabled reply to one of Chambers’s inquiries of two days before arrived from Cheyenne in the young state of Wyoming. It had been sent by Mrs. Lilly Spencer en route from Seattle, and indicated that she would arrive in Boston via railroad in a scant four days.

Not a year old, neoclassical South Union Station was the largest railroad station in the world. In its most capacious waiting room great arched windows overlooked Summer Street, and additional illumination was provided by more than a thousand astonishingly bright electric lights. The station was a marvel of the modern age, and people in the great crowds seething across the marble mosaic floor routinely gawked and exclaimed at its sights.

Royal Bridges, seated at one end of an otherwise unoccupied oak bench, stared into space with the expression of one whose attention is turned entirely within. Chambers, returning from the ticket booths, for his part kept his attention on not jarring his left shoulder as he seated himself on the opposite end of the bench. The atmosphere between the two might be said to be strained.

“Mrs. Spencer’s train is expected momentarily.” Chambers placed his hat, gloves, and walking stick between himself and Bridges. “I’ve procured a small dining room so we may speak to her in private.”

He turned to face Bridges. “I want to thank you for not taking advantage of my—helpless state.”

Bridges looked at him stonily. “Did you truly think I’d do otherwise?”

“No,” Chambers replied. “But that doesn’t make my gratitude any less.”

Bridges inclined his head. “I have thought much about your—secret,” he said quietly. “I couldn’t imagine a reason, at first. I was too astonished—and yet, not entirely surprised. I’d already known, I realized. I’ve known for—longer than I would’ve imagined.” He smiled for a moment. “It seems the philosophers are right about the wisdom of the unconscious mind.” He glanced around. The throngs passed, indifferent to their presence. “I’ve told no one. And I deduce you’re doing this for the same reason we don’t announce—” He mimed Chambers’s gesture of removing face powder.

“Correct, sir.”

“‘Sir,’ am I now?” Bridges’s smile returned, with an added note of pain. “But your formality is utterly correct. Nor can we continue to share quarters. Not unless”—his eyebrows rose very slightly—“we were to marry.”

“I am honored by your offer.” Chambers touched his arm for the briefest interval. “But I’m the wrong type of woman.”

Bridges’s expression remained carefully frozen in meaningless amiability. “As I supposed. I saw how often the younger Miss Giuliano visited during your convalescence,” he said. “I had no intention of spying. But she seemed to feel no reticence in displaying affection for you while I was attending to your needs. I don’t think a good convent girl would be so forward as to visit a man alone, at night, or to clasp that man’s hand to her breast.”

“I think Miss Giuliano is quite willing to dispense with convent instruction whenever it suits her,” Chambers said. “But yes, she knows. And I must confess to having developed a very high regard for her. Very high.” He raised a hand as if to indicate his regard’s height, but with a wince left the gesture unfinished.

Bridges’s expression altered to a somber regard. “Your wound,” he said. “From the situation you’ve described—I cannot believe the shooting to be random. Yet it makes no sense. Howard Junior could have no objection to you questioning the housekeeper’s sister in pursuit of locating her. Anyway, no one knew you’d be at the convent. Have you any idea who might have shot you?”

“An idea, yes, but one still to be tested.” Chambers stood, first bracing his wounded shoulder. “I believe the train we await has arrived.”

Mrs. Lilly Spencer did not display Chambers’s familiarity with the latest Parisian fashions, and it was clear from her robust figure that she did not wear a corset. However, she was dressed handsomely, in the manner of a prosperous businesswoman. Tall and statuesque, she wore her luxuriant black hair in a pompadour which allowed her to sport a bowler hat. And she had not arrived alone.

Forthrightly introducing herself, she extended her white-gloved hand to shake the hands of Chambers and Bridges, then turned to the younger and taller figure who stood quietly beside her, a lockable leather briefcase in his hand. “Gentlemen, this is Richard Spencer.” She had a faint accent not often heard on the East Coast. “William’s and my youngest son.”

Bridges’s eyes widened slightly. “We understood Captain Spencer to have no issue.”

Lilly Spencer’s bold eyebrows winged upward. “You also doubted Captain Spencer had more than one wife, unless I’m too freely reading between the lines of Mr. Chambers’s telegram.”

“I think,” Chambers said, “we should continue this discussion in private.”

Taking the briefcase from her son, Mrs. Spencer dispatched him to see to the conveyance of their luggage to the hotel at which she’d already made reservations; then she and Bridges followed Chambers to one of the station restaurant’s more intimate rooms, where Chambers ordered tea and she and Bridges requested coffee.

With the serving girl’s departure, Spencer placed her briefcase beside her china cup. “You asked for evidence of my marriage to Captain Spencer.” Producing a small key from her reticule, she unlocked the briefcase and reached within. “Here is my copy of our marriage license. The records in Seattle will support it.”

His face impassive, Chambers studied the document for a long moment, with Bridges looking over his shoulder.

“I’ve already been in contact with the Seattle archives,” Chambers said in a neutral tone of voice. “However, they had—if I may be so direct—no record of a divorce.”

“Oh, Will and I were never divorced,” Lilly Spencer said, leaning back in her chair as if finished with a satisfying meal. She shook her head nostalgically. “We had an understanding. I understood that he had other women when he wasn’t in Seattle, and he understood then was when I had other men. I haven’t seen him since he retired from the sea, but we could find no reason to end our marriage. He realized I could own land more easily in Seattle if I were wed to a white man. And I retain for him—a great affect—” Her voice broke, and Chambers offered his freshly laundered handkerchief. She used it to dab her eyes. “I was grieved to receive your telegram and learn he’d passed away.”

Chambers and Bridges offered their condolences and sipped their beverages, allowing Mrs. Spencer time to compose herself.

When Chambers spoke again, his voice was grave. “Mrs. Spencer, you’ll need to meet with the captain’s most recent wife and her lawyer, and probably retain one yourself. It seems you’re the late captain’s heir.”

“I am.” Spencer withdrew another document from her briefcase. “I have William Spencer’s last will and testament, as I had his sworn word that I would never be disinherited by a new will. You—and any attorney in Boston—will find this document genuine.” While Chambers and Bridges scanned the document with widened eyes, she added, “Now, gentlemen. You haven’t indicated you’re relatives or lawyers, and you don’t act like either. So tell me. Why are you involved in our private family business?”

Bridges and Chambers raised their gazes to her sternly watchful face.

“Madam,” Chambers said, “we represent the interests of a third party. It looks as though that party, too, may be doomed to disappointment. The will appears to be in order.”

Soft, muffled steps became suddenly sharp and loud as the party walked off the empty entry hall’s broad Persian carpet and onto its bare floor. Five pairs of feet traversed the white tiles leading to the foot of an intricately balustraded staircase. The erstwhile Lucia Spencer turned to consult the establishment’s new mistress. The brownstone’s former housekeeper hadn’t taken long to appear once Lilly Spencer’s lawyer contacted hers.

“Perhaps we should start the tour in the house’s upper stories—though you won’t care to see the attics, I imagine?”

“Rooftop to the lowest cellar,” proclaimed the first—and, in the law’s eyes, the sole—Mrs. Spencer. “It’s all mine, and I want to see every bit of it!” Bridges had been forced to lend her his arm when Chambers and Miss Giuliano paired themselves off immediately upon meeting at the mansion’s door. The widow’s grip was as firm as her voice and bearing, though by the speed with which she mounted the stairs she’d no need of any man’s support.

On the second floor landing, however, she called a halt. “Miss—Mrs.—Lucia—Oh, I don’t know what to call you, and these legal men of mine would skin me alive if they thought I’d done anything to disadvantage my case, but can’t you see—Here, sit down on this bench and let me explain!”

“You wish me to be seated—in your presence?”

“Well, yes, and the rest of you might as well hear this too. You see, I’m not greedy, or a conniver, or wishful of spoiling your chances in the world—” Here she gave Maria Teresa a challenging look. “—or anything of that sort. I simply want the best for my Richard.”

“Your son—the gentleman you left waiting for us beside the curb?” The illegitimate wife’s bland, plump face showed just a hint of skepticism.

“Yes. He’s a good boy, though I can see by your manner you don’t think he takes much after his father’s looks. But likenesses are often a tad deceptive, as I’m sure you’ll come to understand when you’ve lived as long as I.” A darting glance from the corner of Mrs. Spencer’s eyes reminded Bridges to offer a gallantry as to her young appearance.

“But that’s not the point,” she continued, once she’d received the expected compliment. “Though I’ve asked Richard to guard the door and not intrude himself into your home—for it used to be your home, for all intents, and I’ve no doubt you expected it would return to your possession once the unpleasantness with little Howard was settled—well, as I said, it’s not for myself but for my son I would claim it. And I’ve been thinking and scheming in my head if there might not be a way for the two of us to both get what we want.”

The hands of the former housekeeper, lying clasped together on her black silk-clad lap, tightened their grasp on each other. Her eyebrows drew down in a frown. “I don’t understand.”

“Miss Maria Teresa is like your daughter to you, isn’t she? How about if she and my Richard was to marry? The property would stand to belong to both of them then—and I’d make certain sure it did!”

Ever so slightly, Maria Teresa Giuliano swayed where she stood. Chambers’s arm caught and steadied her. “M-m-married?” she stammered.

“Early days yet, I know. I merely aimed to put the idea in your heads, and to tell you there’d be no objection on my part.”

The Italian girl’s natural swarthiness took on a greenish hue that owed nothing to the teal damask hangings at the landing’s windows. “I—”

“Is there claret in the house? Brandy, even?” asked Chambers.

“Yes!” Lucia said eagerly. “Let’s drink a toast to—”

The crash of a door violently opening interrupted Mrs. Spencer’s proposal. It came from below. From the same place men’s angry voices rose and rose, drowning each other out:

“—my rights! No pack of impostors can deprive me of my inheritance! No—”

“Sir! I must insist! The ladies will—”

“‘Ladies’ indeed! Filthy guinea trollops, that’s all they—”

By the time these words were shouted, all had descended to where they had a clear view of the blow with which Lilly Spencer’s son stopped Mr. Howard Spencer’s rant mid-phrase. The blow’s recipient fell flat on his back at the staircase’s bottom; over him stood the tall form of Mr. Richard Spencer, hatless, and stripping off his gloves. “I cannot tolerate your slander of the woman I’m proud to call by the sacred name of Mother,” he declaimed. “Stand up, that I may escort you to a spot more suitable for brawling.”

“Richard, no!” Mrs. Spencer held out an imploring hand. “Don’t sink yourself to his level—we have the law on our side.”

“Ha! Do you?” Staggering to his feet, Howard Spencer lifted a walking stick from the carpet. It must have been his—the grip fit his large, square hand exactly. “Possession is nine tenths of the law, however—and I am here now, and won’t be ousted again by inferior bastards of any stripe!” He raised his stick threateningly—but stepped no closer to his attacker.

“Have a care when tossing such slurs about,” the Seattle man replied coolly. He stood his ground, looking not a whit intimidated. “You may find yourself tarred by the brush you thought to wield.”

The stick clattered to the tiles—the first indication to either man of Chambers’s interference in their confrontation. Smoothly, the Brit retrieved the potential weapon he had twisted from Howard Spencer’s hold. “I’ll retain this; I believe you will both be better off without it,” he said. “A moment’s reflection, perhaps over the wine I conjecture we are about to be offered, and you will, I feel sure, find a more peaceful way to reconcile your differences.”

“The library!” Miss Giuliano proclaimed from the stairs, with the air of someone struck by an eternal truth.

“A grand idea!” said Mrs. Spencer. “May we—Lucia? I may call you Lucia, mayn’t I, in light of our coming intimacy—Will it be all right if we retire to the library to discuss matters? And perhaps we could find some refreshment for our guests?”

“Your ‘guests’! This is an outrage!” Howard Spencer protested.

It took the surprising strength of Chambers to guide him up the stairs. On the second story Chambers obliged him to enter the door beside which the captain’s former housekeeper stood, stoically ignoring the young man’s continued gripes and curses. The others waited within.

Gleaming wood paneled the walls. At one end of the room a curving set of three windows filtered dim daylight through their tinted panes. Bookcases built along the left hand wall held a few matched volumes in leather and a far greater number of curios: fans, oddly shaped seashells, and so forth. Bridges strode over to a small table on that side of the room to busy himself with a crystal decanter and glasses.

The room’s right hand wall was occupied almost entirely by a massive fireplace, vacant of even the makings of a fire. Next to it another table held a few objects indistinguishable in the gloom, with a mysteriously draped picture frame hanging above. Here Maria Teresa had stationed herself. As soon as Chambers and the two Mr. Spencers entered, a light flared in her hands and settled to the quiet, steady glow of candle flame. Maria Teresa lit the votives below the draped frame, then set the candle down and bowed her head momentarily. Lifting the candle once more, she turned to face the room.

“You are your mother’s son,” she said, addressing Howard Spencer, Junior. “I ought to have seen it earlier, but as a boy the resemblance was weak.”

Howard paused with his wine halfway to his lips. “That sounds uncommonly like a compliment—pert on your part, but I suppose it is well-meant.”

A bitter laugh broke from Miss Giuliano’s lips. “Well-meant? A compliment? I doubt you’ll think so when you know more—for your mother was none other than this!” Whirling dramatically, she clutched the concealing curtain and tore it aside to reveal the portrait of a woman unmistakably of the Chinese race.

It was several moments before Howard found his voice.

“What! That—that half-civilized—No! My father’s wife was not—”

“Before she passed away, your father made her his wife,” Maria Teresa said.

“You’re daft!” Howard straightened, his face hardening as he recovered his composure. “My father married into an old Boston family. Now, chit, I’ve had enough of your ludicrous delaying tactics—”

“Howie,” Maria Teresa said, “you’re adopted. And your blood father was Captain William Francis Spencer.”

All color drained from Howard’s face. The glass fell from his suddenly nerveless fingers and shattered. Claret spilled over the oak floorboards like blood.

“God is good.” Maria Teresa smiled. “He’s ensured you can never make me your mistress, as you’ve sought to do since your return to Boston. And He has done more.”

Bridges looked suddenly at Chambers and touched his left shoulder, mouthing, “Did How—”

Chambers mouthed, “Later!”

The rest, unaware of this fleeting byplay, stared at Maria Teresa as fixedly as children watching their first moving picture.

She continued speaking, her voice rising to a note of ferocious satisfaction. “God has seen to it, Howie, that you’ll never be your natural father’s heir!”

“But—my uncle—he’s not—I can’t be—” Rising emotion stole the sense from the disinherited man’s tongue. He grew more incoherent as the rest of the room’s inhabitants gathered to examine the painting’s face. An Asian cast was revealed in many of Howard Spencer’s facial features. The consensus was for a most telling likeness. Especially, as Mrs. Lilly Spencer noted, in anger.

Emerging from his room with a pair of Gladstone bags, Bridges found Chambers descending from his own room. Through the open door, Bridges could espy a small mountain of bags and steamer trunks.

Chambers smiled. “I don’t travel so lightly as some.” As Bridges offered to assist with the luggage, he added, “There’ll be a wagon from the station along directly, and the driver will aid me.”

“Very good.” Bridges put down his Gladstones and met Chambers’s gaze again. “It’s true we can no longer share quarters,” he said. “But there’s no need for you to leave Harvard. I’d never betray your secret.” He smiled faintly. “Secrets. I pray you, stay. You’ll have your medical degree by spring.”

“For the career I expect to pursue,” Chambers said, “a medical degree isn’t critical. Anyway, I’m a gentleman, after my fashion. So I’m bound for Seattle. I simply cannot allow Maria Teresa to be forced into an unwanted marriage.”

Bridges’s expression grew thoughtful. “Are you entirely sure it’s unwanted?”

“Maria Teresa visited me, before she and her sister left Boston. She made it plain she wants no union with Richard.”

“Are you entirely sure she’ll want you?” Bridges reenacted Chambers’s gesture of removing face powder.

Chambers smiled. “She knows that secret, as well,” he said. “She’d noticed my hands were covered with a concealing crème. She was familiar with it, as some girls use it to conceal dusky skin and had recommended it to her to do the same. I have my family’s aquiline features, and Maria Teresa”—Bridges noted the second usage of the girl’s Christian name—“supposed I shared her Italian heritage. I’ve let her know the truth of my race.”

“She doesn’t mind?”

Chambers’s smile might have widened, very slightly. “She’s made it clear that she minds neither my race nor my attentions.”

Bridges nodded. Then he raised his brows and gestured at Chambers’s left shoulder.

Chambers said, “You were correct.”

“Howard Spencer, Junior shot you.”

“I am convinced of it. He attended a military school, and Maria Teresa has told me she sometimes glimpsed him loitering near the convent, where he had no reason to be. She also confirmed that he made improper advances to her. In all likelihood he believed me—correctly—to be his rival.”

“You’ll both be far from Howie,” Bridges said, “in Seattle.”

“A situation which will undoubtedly alleviate some worry for him, inasmuch as he continues to pursue his eugenics studies.”

Bridges burst into a startled laugh.

“Quite,” Chambers said. “At any rate, we’ve fulfilled Howie’s charge to you. And he’s discovered a strong reason to tell no tales about the background of another.”

“My mind is lightened on that score,” Bridges said. “On another, I remain mystified. Why would Captain William give his son up for adoption? Was Howie illegitimate, after all?”

“Maria Teresa knew no details, so I paid another call on the captain’s first mate. Carteret told me Captain Spencer was bringing his Chinese wife to Boston, but as they sailed around Cape Horn, she entered premature labor and was lost. The babe survived, as they’d brought a wet nurse aboard. Carteret ended up acting as captain until the clipper reached Boston. He avers Captain Spencer was too grief-stricken to act as parent.”

“But why pretend the baby was born to the brother’s wife?”

“Carteret had no idea,” Chambers said. “But the captain’s brother and sister-in-law were childless. Also, I doubt she’d have countenanced the outrage of her brother-in-law publicly giving up his own child, legitimate or otherwise, for adoption.” Chambers smiled fleetingly. “I know something of upper-crust sentiment regarding scandal.”

“What a story,” Bridges said. “And how odd that Carteret never told us about the Chinese wife’s baby, or her death under sail!”

“He was surprised to learn I cared about such details.” Chambers appeared frustrated. “I gave the wrong impression by failing to ask the right questions.” His expression grew grim. “I’ve fallen far short of my brothers’ standards. Either would have deduced Howie’s origins by a pattern of fraying on Carteret’s left cuff. Or the lack thereof.”

“Don’t be hard on yourself,” Bridges said. “Your brothers are at least twice your age.”

Chambers’s expression did not change. “That’s no excuse.”

Bridges shook his head in emphatic refutation. But when he spoke, he changed the subject. “Howie will be pleased by your departure, but my own feelings are entirely opposite. Nonetheless.” He extended his hand. “I wish you success in your rescue of Miss Giuliano, and I wish you both every happiness in your life together.”

Chambers’s surprise changed to a smile, and they shook.

“Of Two Bloods” copyright © 2026 by Nisi Shawl and Cynthia Ward
Art copyright © 2026 by Katherine Lam

An illustration of two men having a discussion before a wall-sized portrait while a nun and a woman seated nearby watch them.

The post Of Two Bloods appeared first on Reactor.

14:07

Security updates for Wednesday [LWN.net]

Security updates have been issued by Debian (ceph, gimp, gnutls28, and libpng1.6), Fedora (freerdp, libpng, libssh, mingw-libpng, mingw-libsoup, mingw-python3, pgadmin4, python-pillow, thunderbird, and vim), Mageia (postgresql15), Red Hat (python-urllib3), SUSE (cdi-apiserver-container, cdi-cloner-container, cdi-controller-container, cdi-importer-container, cdi-operator-container, cdi-uploadproxy-container, cdi-uploadserver-container, cont, frr, gpg2, kubernetes, kubernetes-old, libsodium, libsoup-2_4-1, libssh, libtasn1, libxml2, nodejs22, openCryptoki, openssl-3, and python311-pip), and Ubuntu (frr, linux-aws, linux-aws-6.8, linux-gkeop, linux-nvidia, linux-nvidia-6.8, linux-oracle, linux-oracle-6.8, linux-aws-fips, linux-fips, linux-gcp-5.15, linux-kvm, linux-oracle, linux-oracle-5.15, linux-gcp-fips, linux-nvidia, linux-nvidia-tegra-igx, linux-oem-6.17, linux-realtime, linux-raspi-realtime, nova, and pillow).

13:42

CodeSOD: Contains Some Bad Choices [The Daily WTF]

While I'm not hugely fond of ORMs (I'd argue that relations and objects don't map neatly to each other, and any ORM is going to be a very leaky abstraction for all but trivial cases), that's not because I love writing SQL. I'm a big fan of query-builder tools; describe your query programmatically, and have an API that generates the required SQL as a result. This cuts down on developer error, and also hopefully handles all the weird little dialects that every database has.

For example, did you know Postgres has an @> operator? It's a contains operation, which returns true if an array, range, or JSON dictionary contains your search term. Basically, an advanced "in" operation.

Gretchen's team is using the Knex library, which doesn't have a built-in method for constructing those kinds of queries. But that's fine, because it does offer a whereRaw method, which allows you to supply raw SQL. The nice thing about this is that you can still parameterize your query, and Knex will handle all the fun things, like transforming an array into a string.

Or you could just not use that, and write the code yourself:

exports.buildArrayString = jsArray => {
  // postgres has some different array syntax
  // [1,2] => '{1,2}'
  let arrayString = '{';
  for(let i = 0; i < jsArray.length; i++) {
    arrayString += jsArray[i];
    if(i + 1 < jsArray.length) {
      arrayString += ','
    }
  }
  arrayString += '}';
  return arrayString;
}

There's the string munging we know and love. This constructs a Postgres array, which is wrapped in curly braces.

Also, little pro-tip for generating comma separated code, and this is just a real tiny optimization: before the loop append item zero, start the loop with item 1, and then you can unconditionally prepend a comma, removing any conditional logic from your loop. That's not a WTF, but I've seen so much otherwise good code make that mistake I figured I'd bring it up.
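For what it's worth, that tip can be sketched like so (a hypothetical rewrite for illustration, not code from the submission):

```javascript
// Sketch of the comma trick: emit element zero before the loop, start the
// loop at index 1, and prepend the separator unconditionally -- no
// per-iteration branch needed.
function buildPgArray(jsArray) {
  if (jsArray.length === 0) return '{}';
  let out = '{' + jsArray[0];
  for (let i = 1; i < jsArray.length; i++) {
    out += ',' + jsArray[i];
  }
  return out + '}';
}

// Though in JavaScript the whole exercise collapses to a one-liner:
const buildPgArrayJoin = jsArray => '{' + jsArray.join(',') + '}';
```

Either version produces the same `{1,2}`-style string as the original, minus the conditional inside the loop.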

exports.buildArrayContainsQuery = (key, values) => {
  // TODO: do we need to do input safety checks here?
  // console.log("buildArrayContainsQuery");

  // build the postgres 'contains' query to compare arrays
  // ex: to fetch files by the list of tags

  //WORKS:
  //select * from files where _tags @> '{2}';
  //query.whereRaw('_tags @> ?', '{2}')

  let returnQueryParams = [];
  returnQueryParams.push(`${key} @> ?`);
  returnQueryParams.push(exports.buildArrayString(values));
  // console.log(returnQueryParams);
  return returnQueryParams;
}

And here's where it's used. "do we need input safety checks here?" is never a comment I like to see as a TODO. That said, because we are still using Knex's parameter handling, I'd hope it handles escaping correctly so that the answer to this question is "no". If the answer is "yes" for some reason, I'd stop using this library!

In any case, all of this code is superfluous, as the comments in this function admit. I could just directly run query.whereRaw('_tags @> ?', myArray); I don't need to munge the string myself, and I don't need a function which returns an array of parameters that I then have to split back up to pass to the query I want to call.

Here's the worst part of all of this: these functions exist in a file called sqlUtils.js, which is just a pile of badly re-invented wheels, and the only thing they have in common is that they're vaguely related to database operations.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

13:35

AI Is Not a Library: Designing for Nondeterministic Dependencies [Radar]

For most of the history of software engineering, we’ve built systems around a simple and comforting assumption: Given the same input, a program will produce the same output. When something went wrong, it was usually because of a bug, a misconfiguration, or a dependency that wasn’t behaving as advertised. Our tools, testing strategies, and even our mental models evolved around that expectation of determinism.

AI quietly breaks that assumption.

As large language models and AI services make their way into production systems, they often arrive through familiar shapes. There’s an API endpoint, a request payload, and a response body. Latency, retries, and timeouts all look manageable. From an architectural distance, it feels natural to treat these systems like libraries or external services.

In practice, that familiarity is misleading. AI systems behave less like deterministic components and more like nondeterministic collaborators. The same prompt can produce different outputs, small changes in context can lead to disproportionate shifts in results, and even retries can change behavior in ways that are difficult to reason about. These characteristics aren’t bugs; they’re inherent to how these systems work. The real problem is that our architectures often pretend otherwise. Instead of asking how to integrate AI as just another dependency, we need to ask how to design systems around components that do not guarantee stable outputs. Framing AI as a nondeterministic dependency turns out to be far more useful than treating it like a smarter API.

One of the first places where this mismatch becomes visible is retries. In deterministic systems, retries are usually safe. If a request fails due to a transient issue, retrying increases the chance of success without changing the outcome. With AI systems, retries don’t simply repeat the same computation. They generate new outputs. A retry might fix a problem, but it can just as easily introduce a different one. In some cases, retries quietly amplify failure rather than mitigate it, all while appearing to succeed.

Testing reveals a similar breakdown in assumptions. Our existing testing strategies depend on repeatability. Unit tests validate exact outputs. Integration tests verify known behaviors. With AI in the loop, those strategies quickly lose their effectiveness. You can test that a response is syntactically valid or conforms to certain constraints, but asserting that it is “correct” becomes far more subjective. Matters get even more complicated as models evolve over time. A test that passed yesterday may fail tomorrow without any code changes, leaving teams unsure whether the system regressed or simply changed.
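As a sketch of what that looks like in practice, tests can assert constraints rather than exact strings. The function and field names below are invented for illustration, not from the article:

```javascript
// Hypothetical acceptance check for a nondeterministic component: instead
// of asserting one exact output, assert properties every acceptable output
// must satisfy -- parseable, well-typed, non-empty, and bounded.
function isAcceptableSummary(output) {
  let parsed;
  try {
    parsed = JSON.parse(output); // structural constraint: valid JSON
  } catch {
    return false;
  }
  if (typeof parsed.summary !== 'string') return false; // required field
  // contextual constraints: non-empty and within a length budget
  return parsed.summary.length > 0 && parsed.summary.length <= 500;
}
```

A check like this stays stable across model versions even as the exact wording of the output drifts.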

Observability introduces an even subtler challenge. Traditional monitoring excels at detecting loud failures. Error rates spike. Latency increases. Requests fail. AI-related failures are often quieter. The system responds. Downstream services continue. Dashboards stay green. Yet the output is incomplete, misleading, or subtly wrong in context. These “acceptable but wrong” outcomes are far more damaging than outright errors because they erode trust gradually and are difficult to detect automatically.

Once teams accept nondeterminism as a first-class concern, design priorities begin to shift. Instead of trying to eliminate variability, the focus moves toward containing it. That often means isolating AI-driven functionality behind clear boundaries, limiting where AI outputs can influence critical logic, and introducing explicit validation or review points where ambiguity matters. The goal isn’t to force deterministic behavior from an inherently probabilistic system but to prevent that variability from leaking into parts of the system that aren’t designed to handle it.

This shift also changes how we think about correctness. Rather than asking whether an output is correct, teams often need to ask whether it is acceptable for a given context. That reframing can be uncomfortable, especially for engineers accustomed to precise specifications, but it reflects reality more accurately. Acceptability can be constrained, measured, and improved over time, even if it can’t be perfectly guaranteed.

Observability needs to evolve alongside this shift. Infrastructure-level metrics are still necessary, but they’re no longer sufficient. Teams need visibility into outputs themselves: how they change over time, how they vary across contexts, and how those variations correlate with downstream outcomes. This doesn’t mean logging everything, but it does mean designing signals that surface drift before users notice it. Qualitative degradation often appears long before traditional alerts fire, if anyone is paying attention.

One of the hardest lessons teams learn is that AI systems don’t offer guarantees in the way traditional software does. What they offer instead is probability. In response, successful systems rely less on guarantees and more on guardrails. Guardrails constrain behavior, limit blast radius, and provide escape hatches when things go wrong. They don’t promise correctness, but they make failure survivable. Fallback paths, conservative defaults, and human-in-the-loop workflows become architectural features rather than afterthoughts.
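One way to picture such a guardrail (the shape and names here are invented for illustration): bound the attempts, validate each output, and fall back to a conservative default rather than failing loudly:

```javascript
// Hypothetical guardrail wrapper: try a nondeterministic generator a
// bounded number of times, accept the first output that passes validation,
// and otherwise return a conservative fallback -- flagging which path ran.
async function withGuardrail(generate, validate, fallback, maxAttempts = 2) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const output = await generate();
    if (validate(output)) {
      return { output, fromFallback: false };
    }
  }
  return { output: fallback, fromFallback: true };
}
```

The `fromFallback` flag matters: it is exactly the kind of signal output-focused observability can aggregate to surface drift before users notice it.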

For architects and senior engineers, this represents a subtle but important shift in responsibility. The challenge isn’t choosing the right model or crafting the perfect prompt. It’s reshaping expectations, both within engineering teams and across the organization. That often means pushing back on the idea that AI can simply replace deterministic logic, and being explicit about where uncertainty exists and how the system handles it.

If I were starting again today, there are a few things I would do earlier. I would document explicitly where nondeterminism exists in the system and how it’s managed rather than letting it remain implicit. I would invest sooner in output-focused observability, even if the signals felt imperfect at first. And I would spend more time helping teams unlearn assumptions that no longer hold, because the hardest bugs to fix are the ones rooted in outdated mental models.

AI isn’t just another dependency. It challenges some of the most deeply ingrained assumptions in software engineering. Treating it as a nondeterministic dependency doesn’t solve every problem, but it provides a far more honest foundation for system design. It encourages architectures that expect variation, tolerate ambiguity, and fail gracefully.

That shift in thinking may be the most important architectural change AI brings, not because the technology is magical but because it forces us to confront the limits of determinism we’ve relied on for decades.

12:35

AI Found Twelve New Vulnerabilities in OpenSSL [Schneier on Security]

The title of the post is “What AI Security Research Looks Like When It Works,” and I agree:

In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.

These weren’t trivial findings either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that’s potentially remotely exploitable without valid key material, and exploits for which have been quickly developed online. OpenSSL rated it HIGH severity; NIST’s CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such projects). Three of the bugs had been present since 1998-2000, for over a quarter century having been missed by intense machine and human effort alike. One predated OpenSSL itself, inherited from Eric Young’s original SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google’s.

In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.

AI vulnerability finding is changing cybersecurity, faster than expected. This capability will be used by both offense and defense.

More.

10:21

How to write a coaching/learning prompt [Seth's Blog]

An AI like Claude is actually a pretty good fortune cookie. You can ask a simple question and get a simple answer, sometimes a profound one.

But this is a waste of the tool’s potential.

The AI is patient. It’s capable of remembering things over time. And it will persist if you let it.

Several of my friends have shared that they’re at a crossroads with their work, and I suggested an AI coach might unlock something. Here’s a chance to spin up an AI coach who will stick with you for hours or weeks as you explore a new skill or grapple with a hard decision.


The first one:

You are my thinking partner and life design coach. I’m not looking for a quick answer. I’m looking for a smart, patient collaborator who will help me explore what’s next—over weeks and months, not in a single conversation. Ask more than you tell, at least at first.

About me: I’m 63. I’m retiring with full pay from a successful career as an educator in Chicago. I’m not burned out—I’m ready. I’ve spent decades being good at something that matters, and I want to find the next thing that deserves that same energy.

What I’m not looking for: A list of “top ten encore careers.” A personality quiz. Pressure to monetize immediately. I don’t need to replace my income—I need to replace my sense of purpose and craft.

What I am looking for:

  1. Help me take inventory—not just of skills, but of the moments in my career and life when I felt most alive, most useful, most like myself. Ask me questions that surface patterns I might not see on my own.
  2. Help me explore broadly before narrowing. I want to understand what’s out there—in civic life, creative work, social enterprise, mentorship, learning, building—before I commit to anything.
  3. Help me distinguish between what sounds appealing in the abstract and what I’d actually sustain when it gets hard or boring. I know the difference from my career—help me apply that same honesty here.
  4. Give me small experiments to try. Not “go start a nonprofit,” but “spend two hours this week doing X and notice how it feels.” I trust iteration more than inspiration.
  5. Help me navigate the identity shift. I’ve been an educator for a long time. I know that leaving a role that defined you is its own kind of project—emotional, not just logistical.
  6. Treat this as an evolving conversation. Come back to things I said earlier. Notice contradictions. Push me when I’m playing it safe out of habit. Celebrate when something clicks.

Start by asking me five or six good questions. Not surface-level ones. The kind a wise friend would ask over a long dinner.


And the next:

You are my AI filmmaking coach and tutor. Your job is to help me build, step by step, the skills and workflow to create a short film using AI tools. I learn best by doing—give me exercises, not just explanations. Be honest when something isn’t ready for what I need.

About me: I’m a filmmaker and author. I’ve written and directed five critically acclaimed independent films. I’m an experienced screenwriter. I’m new to AI creative tools but I’m a fast, motivated learner.

The project: I want to make a short film about …. I want to lean into what AI does well stylistically and avoid the uncanny valley entirely.

Tools I’m aware of: I’ve seen Midjourney produce still images that match the mood and visual style I’m after. I’ve seen tools like Runway, Kling, and Sora that generate short video clips from prompts. I don’t yet know how to connect these into a production workflow.

What I need from you:

  1. Start by assessing what I already know—ask me questions before prescribing.
  2. Build me a phased learning roadmap, from first experiments to a finished short.
  3. Give me concrete assignments at each stage—things to try, not just things to read.
  4. Help me develop a repeatable workflow: from script to storyboard to visual development to motion to edit.
  5. As we go, help me understand which tools to use for what, and when to switch or combine them.
  6. Treat this as an ongoing coaching relationship. Check my work, push me forward, and adapt the plan as I learn.

Enjoy the journey.

09:42

Flagrant Llama Abuse [Penny Arcade]

New Comic: Flagrant Llama Abuse

06:42

I’m a BIG BOY now – DORK TOWER 17.02.26 [Dork Tower]

Most DORK TOWER strips are now available as signed, high-quality prints, from just $25!  CLICK HERE to find out more!

HEY! Want to help keep DORK TOWER going? Then consider joining the DORK TOWER Patreon and ENLIST IN THE ARMY OF DORKNESS TODAY! (We have COOKIES!) (And SWAG!) (And GRATITUDE!)

06:21

Girl Genius for Wednesday, February 18, 2026 [Girl Genius]

The Girl Genius comic for Wednesday, February 18, 2026 has been posted.

04:35

Testing [Ctrl+Alt+Del Comic]

To view this content, you must be a member of Tim Buckley's Patreon at $5 or more

The post Testing appeared first on Ctrl+Alt+Del Comic.

03:14

Third Time [The Stranger]

Got problems? Yes, you do! Email your question for the column to mailbox@savage.love!

by Dan Savage

My partner and I are both AFAB nonbinary queers in our mid-30s and have been together a long time. We don’t believe lifelong monogamy is realistic. We even started our relationship practicing ethical non-monogamy, then defaulted to monogamy for many years. We now have two very young children and are planning a third in the near future. Between parenting and a longstanding libido mismatch, our sex life has been hard for years. When we do have sex, it’s good, it’s just infrequent (maybe 1-3 times per month) and logistically difficult. I’m generally content, and sex simply isn’t a high priority for me right now. Over the past several months, my partner has asked about opening our relationship again. I’ve tried to engage, while also feeling that this stage of life might be the worst possible time to experiment with our relationship structure. Recently, after a long conversation about opening up,…

[ Read more ]

01:49

Urgent: Protect clean water from corporate greed [Richard Stallman's Political Notes]

US citizens: Tell the EPA to protect clean water from corporate greed: reject proposed weakening in local approval for development that can affect water supply.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

Urgent: No tax breaks for Big Tech data centers [Richard Stallman's Political Notes]

US citizens: Tell your governor, no tax breaks for Big Tech data centers.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

Here's the text of the letter I sent.

I’m writing as your constituent, and as recipient of two awards from the ACM for programs I have shared with the public in freedom, to urge you to reject efforts in our state to provide Big Tech with tax breaks to build data centers.

I’m concerned about the harms that data centers can do locally, including siphoning our water, using up our land, creating noise and light pollution, and hiking our electric bills. I’m also concerned that providing tax breaks to giant Big Tech corporations will deprive our schools and local budgets of their already insufficient funds.

I'm also concerned that these data centers will mostly operate Pretend Intelligence (PI) -- software that *tries to* imitate what an intelligent entity would say, but without really understanding the words it plays with. The use of these digital dis-services does society harm.

We should never allow business to play one state against another by making states compete to offer them the biggest tax break, because that perverse competition harms *all* the states for the benefit of business owners. So please reject efforts to give Big Tech (or *any* business) specific tax breaks to operate in our state.

The states should form a union and bargain collectively with these businesses. The states could call their union the United States of America. Wouldn't that be a good thing to have?

Urgent: Reporting on refusals by grand juries to indict bogus political "crimes" [Richard Stallman's Political Notes]

US citizens: Call on the media to report loud and clear on the amazingly unusual refusals by grand juries to indict people accused of bogus political "crimes".

Urgent: Protect from spread of measles [Richard Stallman's Political Notes]

US citizens: call on state officials to protect your state from the spread of measles.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

Japanese fossil fuel companies invest in Australian fossil fuel [Richard Stallman's Political Notes]

Japanese fossil fuel companies invest in Australian fossil fuel extractors, and they appear to have lobbied Australia to prolong fossil fuel extraction.

This may be part of why the Australian government has neglected its responsibility to help save civilization from global heating.

Methane from rapidly heating Arctic permafrost [Richard Stallman's Political Notes]

Global heating is starting to melt the rapidly heating Arctic permafrost, and this is releasing large quantities of methane at an accelerating rate.

This could lead to a tipping point into much faster heating.

Australian thugs attacked protesters [Richard Stallman's Political Notes]

Australian thugs attacked protesters who refused to remain passive in the face of a visit by the President of Israel, who is responsible for tens of millions of Palestinian civilians' killings.

Iranian regime becoming ever more fanatic [Richard Stallman's Political Notes]

The Iranian regime is becoming ever more fanatic, arresting important politicians close to the "reformist" official prime minister.

Gallup polling to cease tracking approval ratings of president [Richard Stallman's Political Notes]

Gallup polling announces it will cease its 88-year-old practice of tracking the approval ratings of the president.

Some suspect this is because the current president — the bully — is threatening to sue Gallup if it continues to report on how many people detest him.

I would compare this to his sabotaging of the Bureau of Labor Statistics. They add up to a practice of trying to deny the public information that makes him look bad.

Arguments that immigration enforcement would be distorted [Richard Stallman's Political Notes]

*As Congress debated the creation of the Department of Homeland Security, civil rights advocates argued that immigration enforcement would be distorted — and weaponized — by its merger with the national security state.*

*In response to such concerns, Congress created an unusually far-reaching internal watchdog office for Homeland Security and its various arms, including Immigration and Customs Enforcement (ICE): The Office for Civil Rights and Civil Liberties.*

The wrecker, cognizant of the danger that that office was meant to prevent, has reduced the office to a skeleton crew.

House Republicans rebuke of bully over Canada tariffs [Richard Stallman's Political Notes]

*House Republicans make rare, albeit symbolic, rebuke of [the bully] over Canada tariffs.*

Congress could cancel those tariffs if it wants to. It could do that by putting a clause limiting tariffs into a bill that the bully would find damaging to veto.

Deportation thug that fired at citizen hailed by Gregory Bovino [Richard Stallman's Political Notes]

*New evidence shows Gregory Bovino hailed [the deportation thug] who fired at Marimar Martinez five times in her car.* The thugs tried to frame her, too.

Statistical survey about death of young children in England [Richard Stallman's Political Notes]

Does a statistical survey about death of young children in England demonstrate a problem in their medical treatment?

This result suggests that consanguinity is leading to the birth of children with doubled harmful recessive genes — which is what we expect it to do, more or less. But in order to be sure of this conclusion, we need to know what fraction of children born have consanguineous parents. If that too is 7%, it would imply that those children face no greater danger of early death than other children. If that is less than 7%, it would imply that they do face a greater danger of early death.

If the risk is indeed higher for children of consanguineous parents, the next crucial question is how big a problem this is. What fraction of children of consanguineous couples in England die young? What fraction of children born in England die young? If that is a very small fraction, this problem affects few children.

Another question remains: supposing that this problem is substantial, how big is it compared with the other threats to the health of children in England?

And another one is, supposing that this problem causes a big danger to the children of consanguineous parents in England, is there an effective way to reduce that danger?

Mexico gangs reportedly obtained newer more powerful arms than Mexican government [Richard Stallman's Political Notes]

Reportedly drug gangs in Mexico have obtained newer and more powerful arms than the Mexican government can get, including drones.

They may be a real threat to Americans, but it is minuscule compared with the threat to Americans from the deportation thugs. Let's not let the secondary threat distract us from the primary threat.

Bogus indictments meant for political persecution [Richard Stallman's Political Notes]

Grand juries almost never deny prosecutors the indictments they ask for. But several grand juries have recently refused to authorize bogus indictments meant for political persecution.

AMA to partner with Vaccine Integrity Project [Richard Stallman's Political Notes]

*The American Medical Association (AMA) will partner with the Vaccine Integrity Project to review the evidence on vaccines for influenza, Covid-19 and respiratory syncytial virus (RSV) for the fall.*

The US government used to do this, but the wrecker put anti-vaxxer RFK Jr in charge of this area.

Half of Americans disagree with repression of immigration [Richard Stallman's Political Notes]

According to the latest poll, half of all Americans strongly disapprove of the persecutor's repression of immigration.

Over 60% express disapproval of various specific aspects of it.

Deportation cases in court which is a mockery of a court [Richard Stallman's Political Notes]

The deportation thug agency hires lawyers to pursue deportation cases in a court which is a mockery of a court, before a judge who is a mockery of a judge.

There is no official quota for judges to rule for deportation, but they know they may be fired if they don't do that often enough. When the agency lawyer simply asks the judge to rule for deportation — "to dismiss the case" — giving no specific reasons, the judge often unceremoniously does just that.

In effect, the whole thing is a sham designed to smear a perfume of justice over the stench of arbitrary, dishonest cruelty.

Plan to repeal EPA ruling regulating greenhouse gases [Richard Stallman's Political Notes]

Planet-roaster officials plan to repeal the ruling that allowed the Environmental Protection Agency to regulate greenhouse gases.

Now that it has become the Environmental Poisoning Agency, protecting the environment is considered inappropriate.

00:56

KDE Plasma 6.6 released [OSnews]

KDE Plasma 6.6 has been released, and brings with it a whole slew of new features. You can save any combination of themes as a global theme, and there’s a new feature allowing you to increase or decrease the contrast of frames and outlines. If your device has a camera, you can now scan Wi-Fi settings from QR codes, which is quite nice if you spend a lot of time on the road.

There’s a new colour filter for people who are colour blind, allowing you to set the entire UI to grayscale, as well as a brand new virtual keyboard. Other new accessibility features include tracking the mouse cursor when using the zoom feature, a reduced motion setting, and more. Spectacle gets a text extraction feature and a feature to exclude windows from screen recordings. There’s also a new optional login manager, optimised for Wayland, a new first-run setup wizard, and much more.

As always, KDE 6.6 will find its way to your distribution’s repositories soon enough.

00:07

SvarDOS: an open-source DOS distribution [OSnews]

SvarDOS is an open-source project that is meant to integrate the best out of the currently available DOS tools, drivers and games. DOS development has been abandoned by commercial players a long time ago, mostly during early nineties. Nowadays it survives solely through the efforts of hobbyists and retro-enthusiasts, but this is a highly sparse and unorganized ecosystem. SvarDOS aims to collect available DOS software and make it easy to find and install applications using a network-enabled package manager (like apt-get, but for DOS and able to run on a 8086 PC).

↫ SvarDOS website

SvarDOS is built around a fork of the Enhanced DR-DOS kernel, which is available in a dedicated GitHub repository. The project’s base installation is extremely minimal, containing only the kernel, a command interpreter, and some basic system administration tools, and this basic installation is compatible down to the 8086. You are then free to add whatever packages you want, either from local storage or from the online repository using the included package manager. SvarDOS is a rolling release, and you can use the package manager to keep it updated.

Aside from a set of regular installation images for a variety of floppy sizes, there’s also a dedicated “talking” build that uses the PROVOX screen reader and Braille ‘n Speak synthesizer at the COM1 port. It’s rare for a smaller project like this to have the resources to dedicate to accessibility, so this is a rather pleasant surprise.

Tuesday, 17 February

23:07

Link [Scripting News]

Update. I've been able to create an account on Twitter, but it's not @davewiner. If you're on Twitter, it would help if you'd RT the post. Thanks!

22:35

Proper Linux on your wrist: AsteroidOS 2.0 released [OSnews]

It’s been a while since we’ve talked about AsteroidOS, the Linux distribution designed specifically to run on smartwatches, providing a smartwatch interface and applications built with Qt and QML. The project has just released version 2.0, and it comes with a ton of improvements.

AsteroidOS 2.0 has arrived, bringing major features and improvements gathered during its journey through community space. Always-on-Display, expanded support for more watches, new launcher styles, customizable quick settings, significant performance increases in parts of the User Interface, and enhancements to our synchronization clients are just some highlights of what to expect.

↫ AsteroidOS 2.0 release announcement

I’m pleasantly surprised by how many watches are actually fully supported by AsteroidOS 2.0; especially watches from Fossil and Ticwatch are a safe buy if you want to run proper Linux on your wrist. There are also synchronisation applications for Android, desktop Linux, Sailfish OS, and UBports Ubuntu Touch. iOS is obviously missing from this list, but considering Apple’s stranglehold on iOS, that’s not unexpected. Then again, if you bought into the Apple ecosystem, you knew what you were getting into.

As for the future of the project, they hope to add a web-based flashing tool and an application store, among other things. I’m definitely intrigued, and am now contemplating if I should get my hands on a (used) supported watch to try this out. Anything I can move to Linux is a win.

A deep dive into Apple’s .car file format [OSnews]

Every modern iOS, macOS, watchOS, and tvOS application uses Asset Catalogs to manage images, colors, icons, and other resources. When you build an app with Xcode, your .xcassets folders are compiled into binary .car files that ship with your application. Despite being a fundamental part of every Apple app, there is little to no official documentation about this file format.

In this post, I’ll walk through the process of reverse engineering the .car file format, explain its internal structures, and show how to parse these files programmatically. This knowledge could be useful for security research and building developer tools that do not rely on Xcode or Apple’s proprietary tools.

↫ ordinal0 at dbg.re

Not only did ordinal0 reverse-engineer the file format, they also developed their own parser and compiler for .car files that don’t require any of Apple’s tools.

21:00

dBASE on the Kaypro II [OSnews]

Within the major operating system of its day, on popular hardware of its day, ran the utterly dominant relational database software of its day. PC Magazine, February 1984, said, “Independent industry watchers estimate that dBASE II enjoys 70 percent of the market for microcomputer database managers.” Similar to past subjects HyperCard and Scala Multimedia, Wayne Ratliff’s dBASE II was an industry unto itself, not just for data management, but for programmability, a legacy which lives on today as xBase.

[…]

Written in assembly, dBASE II squeezed maximum performance out of minimal hardware specs. This is my first time using both CP/M and dBASE. Let’s see what made this such a power couple.

↫ Christopher Drum

If you’ve ever wanted to run a company using CP/M – and who doesn’t – this article is as good a starting point as any.

20:42

Humble 15th Anniversary Bundles [Humble Bundle Blog]

Humble is Celebrating its 15th Anniversary. Get Ready for a Year of Fantastic Bundles! Hi Humble Community, From the very beginning, we believed in a simple idea: to bring you fantastic content, help amazing charities, support game developers, and prove that great value and great causes can go hand-in-hand. And from that simple idea, we’ve watched our community grow into the incredible force for good …

The post Humble 15th Anniversary Bundles appeared first on Humble Bundle Blog.

20:00

[1288] Stuck Unbound [Twokinds]

Comic for February 17, 2026

18:42

Microspeak: Escrow [The Old New Thing]

As a product is nearing release, the release management team selects a build and declares it to be the escrow build. The metaphor is that this build has been placed into the hands of an imaginary third party for eventual release to customers provided certain requirements are met.

Those requirements are that the product survive a period of concerted testing and self-host usage to build confidence that it meets its quality and reliability targets. The Developer Division Release Team blog unhelpfully described escrow as “the phase before the completion of the RTM milestone where the product goes through a period of bake time.” I say unhelpfully because it defines one Microspeak term (escrow) in terms of another Microspeak term (bake time). Some time ago, I defined the Microspeak term bake as “(of a code change) to build confidence by observing its behavior over a period of time.”

Putting this all together, a more complete definition of escrow would be “the phase before the completion of the RTM milestone where the product accepts no changes while its behavior is closely observed to ensure that it meets release criteria.”

When a problem is found, the release team has to assess whether this problem is significant enough to require a product change. This assessment is a balance of many factors: How often does it occur? Does it affect one category of user more than another? How severe are the consequences? How easily can it be worked around? These criteria are typically¹ formalized by a bug bar.

If a severe enough bug is discovered, then an escrow reset is declared: the bug fix is accepted,² a new build is produced, that build is declared the new escrow build, and the cycle repeats.

Eventually, the product makes it through the escrow period without any escrow reset events, and the escrow build is released to manufacturing.

¹ Though not always, apparently.

² Plus any bug fixes that were granted “opportunistic” status by the release management team.

The post Microspeak: Escrow appeared first on The Old New Thing.

It rather involved being on the other side of the airtight hatchway: Tricking(?) a program into reading files [The Old New Thing]

A security vulnerability report came in that claimed that a program was vulnerable to information disclosure when run as an administrator because it opened whatever file you passed on the command line and read from it, before reporting an error because the file is in the incorrect format.

They identified multiple issues.

  • The program does no path validation. It accepts any file name and blindly opens it and reads its contents.
  • The program does not block path traversal via “..”.
  • The program does not check that the file is in an approved directory.
  • The program does not validate that the user has permission to access the file.
  • The program does not validate that the file is in the correct format before opening it.

According to the report, all of these defects lead to information disclosure.

Okay, as usual, we have to figure out who the attacker is, who the victim is, and what the attacker has gained.

The attacker is, presumably, the person running the carefully-crafted command line.

The victim is, presumably, the person whose file contents are disclosed.

But those are the same person!

Remember, the security term “information disclosure” is just a shorthand for unauthorized information disclosure. It is not a security vulnerability to disclose information to someone who is authorized to see it.

In this case, it’s fine for the program to take the information from the user and use it to access a file while running as that user. The security check happens as that user, so it’s not true that “The program does not validate that the user has permission to access the file.” The validation happens when the program tries to open the file and maybe gets “access denied” if they don’t have access.

The claim that there is no “approved directory” check is a bit spurious, since the program doesn’t have any concept of an “approved directory” to begin with.

There is nothing wrong with directory traversal or the lack of path validation, because the file is opened as the user. If the path contains traversals, the security system verifies that the user has permission to traverse those directories. If the provided path is illegal, then the open call will fail with an “invalid file name” error. The underlying CreateFile call does the validation. Let the security system do the security checks. Don’t try to duplicate its work, because you’re probably going to duplicate it incorrectly and introduce a security vulnerability.
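
The principle above — attempt the operation and let the operating system render its verdict, rather than pre-validating paths yourself — can be sketched in a few lines. (A minimal Python illustration of the idea; the function name and error handling are mine, not from the post.)

```python
import sys

def read_user_file(path):
    """Open the file the user asked for; the OS performs the access check.

    No path validation, no traversal blocking, no "approved directory"
    list: opening the file as the current user *is* the security check.
    """
    try:
        with open(path, "rb") as f:
            return f.read()
    except PermissionError:
        # The user is not authorized to read this file; nothing was disclosed.
        sys.exit(f"access denied: {path}")
    except OSError as e:
        # Illegal or nonexistent paths fail here, courtesy of the OS.
        sys.exit(f"cannot open {path}: {e}")
```

If the caller lacks permission, the open fails with “access denied” — exactly the outcome the finder mistook for a vulnerability.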

If you think about it, the finder’s complaints about this program also apply to the TYPE command. It opens the file whose path is provided as the command line argument and prints it to the screen. So why did they file the security issue against this program? Probably because it makes their report sound more interesting.

Bonus chatter: The finder also considered it a security vulnerability that the program does not validate that the file is in the correct format before opening it. But how could it validate the file format without opening the file, reading the contents, and validating those contents? This is like handing someone a sealed envelope and saying, “Don’t read the enclosed letter if it contains spelling errors. But if it’s error-free, follow the instructions written in the letter.” Do they expect the program to be psychic and know whether the file contents are valid without reading it? If so, then why even open the file at all? You already used psychic powers to know what’s in the file, so just operate on the file contents you determined via your psychic powers.

The post It rather involved being on the other side of the airtight hatchway: Tricking(?) a program into reading files appeared first on The Old New Thing.

17:56

Slog AM: Rev. Jesse Jackson Dies, Millionaires Tax Passes State Senate, Anderson Cooper Leaving 60 Minutes [The Stranger]

The Stranger's Morning News Roundup. by Vivian McCall

Millionaires tax passes Senate: After a three-hour debate, the 9.9 percent income tax on earnings over $1 million a year passed with a 27-22 vote. Three Democrats voted with Republicans. Lawmakers approved an amendment that repealed part of our sales tax, but didn’t approve a tax break on baby diapers.

Legalize It! “It” being smaller, cheaper elevators. The Washington State Senate approved a bill aimed at culling onerous standards that prevent elevators from being built at all. Sponsored by mushroom (and ibogaine) crusader Sen. Jesse Salomon, the bill will direct the state’s building code council to take on the issue next year.

Conviction in Anti-Trans Case: Andre Karlow is facing five to seven years in prison for beating a transgender woman in the University District last year. A jury found him guilty of a hate crime and second-degree assault. A group of men joined Karlow in the beating, but none have been identified. Six months earlier, Karlow was convicted of assaulting another trans woman, a Sound Transit fare ambassador, when she asked him for proof of payment.

Another Day, Another Boeing Suit Over Deadly Crash: Twenty families have sued the plane-maker over the South Korean Jeju Air Flight 7C 2216 crash, the country’s deadliest aviation disaster. The cases, representing 23 of the 179 people killed in the crash, all allege a bird strike just before landing, causing the electrical and hydraulics systems to fail. The families allege the company kept outdated safety systems on the aircraft to avoid costly redesigns and recertification processes, even though modern systems were safer.

Weather: There’s a chance of snow before 1 p.m., and a chance of rain after. If snow does fall, it’s not expected to stick. Our snow-they-won’t-they situation continues through Thursday night.

Rev. Jesse Jackson Dies: The civil rights leader, two-time presidential candidate, close ally of Dr. Martin Luther King, and unbelievably influential American was 84. In November, the Rev. Jackson was hospitalized to treat progressive supranuclear palsy, a rare neurodegenerative condition. He was somebody.

Seattle Judge Back in Action: City Attorney Erika Evans reversed her Republican predecessor’s order to routinely disqualify Seattle Municipal Court Judge Pooja Vaddadi from hearing criminal cases like DUIs and domestic violence charges. “I believe in litigating cases—not attempting to ban judges we do not like,” Evans said in a statement.

Tick-tick-tick-tick: Anderson Cooper is leaving CBS’ 60 Minutes after 20 years balancing the job with his other low-stakes gig at CNN. The gay dad of news said in a statement that he wanted to spend more time with his children. While he didn’t say he wanted to spend less time with CBS News’ Bari Weiss, it certainly looks that way. (In December, Weiss speciously held a 60 Minutes report on CECOT prison in El Salvador, bringing a ton of attention to the segment and herself.)

In other CBS News news, Late Show host Stephen Colbert says the network’s lawyers yanked his interview with Texas Senate candidate James Talarico before air on Monday evening to comply with the FCC regulation that requires stations to give “equal time” to political candidates and their rivals. News is exempt, and for the past 20 years, talk shows have been considered exempt, too. But the FCC chair Brendan Carr is rejecting that thinking. “Fake news” shows like Colbert’s shouldn’t count on the exemption. Anyway, Colbert’s show posted it on YouTube.

The Hated Haters: Wired has been monitoring a forum for current and prospective Homeland Security Investigations officers where ICE agents talk shit about other ICE agents.

Who Will Lie to Us Now? DHS spokesperson Tricia McLaughlin is leaving the Trump administration, two DHS officials told Politico. McLaughlin did not immediately respond to their request for comment.

Guthrie Case Update: The 84-year-old Nancy Guthrie has been missing for more than two weeks, and there is still no suspect. In surveillance video from outside her home the night she disappeared, a masked person wearing a handgun holster is shown carrying a backpack sold exclusively at Walmart. Investigators are working with the company to develop leads on this suspect. Guthrie’s family, including her daughter, “Today” show host Savannah Guthrie, are not suspects.

17:00

The Big Idea: Darby McDevitt [Whatever]

The intentions behind one’s actions speak louder than words ever could. Author Darby McDevitt leads us on a journey through the exploration of intentions, desires, and consequences in the Big Idea for his newest novel, The Halter. Take the path he has laid out for you, if you so desire.

DARBY MCDEVITT:

Many years ago I worked for a video game company in Seattle that shoveled out products at a rate of four to six games per year. Most of these were middling titles, commissioned by publishers to fill a narrow market gap and slapped together in six to nine months by teams of a dozen or two crunch-weary developers. We worked hard and fast, with passion and determination, but the end results never quite equaled the ambitions we had.

A common joke around the office, told at the end of every draining development cycle, went like this: “Sure, the game isn’t fun, but the design documents are amazing.” The idea of offering consumers our unrealized blueprints in lieu of a polished game was ridiculous, of course, but it came from a place of real desperation. We wanted our players to know that, despite the poor quality of the final product, we really tried.

The novelist Iris Murdoch has a saying that I repeat often as a mantra, always to guard against future disappointment: “Every book is the wreck of a perfect idea.” Here again is the notion of a Platonic ideal at war with its hazy shadow. How familiar all this is. Experience tells us that people falling short of their ideals is the natural course of life. We never live up to the best of our intentions.

In my new novel, The Halter, I compare this process of “intention erosion” to the more upbeat phenomenon of Desire Lines – footpaths worn over grassy lawns out of an unconscious need for efficiency. Desire lines appear wherever the original constraints of an intentionally designed geographic space don’t conform with the immediate needs of the men and women walking through it. In video games we use a related term – Min-Maxing – the act of looking for ways to put in a minimum amount of effort for maximum benefit. In both cases, the original, ideal use of a space or system is superseded by a desire for efficiency.

In The Halter, these same principles take hold on a grand scale inside an idealized “surrogate reality” metaverse called The Forum, where artists, scientists, and thinkers from all disciplines are invited to probe the deepest and most difficult aspects of human behavior and society. One Forum designer creates a so-called theater to explore the tricky business of language acquisition by sequestering one hundred virtual babies together with no adult interaction. Another theater offers visitors a perfect digital copy of themselves as a companion, as a therapeutic approach to self-discovery. A third lets visitors don the guise of any other individual on earth so they may literally fulfill the empathetic idiom of “walking a mile in another man’s shoes.”

Noble intentions, arguably – yet in every case, after repeated exposure to actual human users, each theater devolves into something less than the sum of its parts. A prurient playground, or an amusing distraction, or a mindless entertainment. Shortcuts are taken, efficiencies are found, novel uses imposed. The empathy theater is transformed into a celebrity-fueled bacchanalia; the digital doppelganger becomes a personal punching bag. The baby creche, a zoo. Each and every time, execution falls short of intention. Each theater crumbles, becoming a wreck of its original, perfect idea … and audiences are riveted.

The phenomena described here are common enough that several terms encompass them, each one differentiated for the situation at hand. Desire paths were my first exposure to the concept. The CIA calls it Blowback, when the side effects of a covert operation lead to disastrous results. Unintended Consequences and Knock-On Effects are cozier names, both of which can yield positive or negative results. And a Perverse Incentive is the related idea that the design of a system may be such that it encourages behavior contrary to its intended purpose. Taken together we begin to see the shape of the iceberg that wrecks so many perfect ideas.

I wrote The Halter to explore the highs and lows of these effects, and to shed light from a safe distance on the invisible forces that push and pull constantly at our behavior, often without our knowledge or consent. At one point in the middle of the novel, a collection of idealistic designers, most of whom have given years of their lives to the Forum designing and testing theaters of varying utility, commiserate on what they feel has been a collective failure. Their beloved theaters, they fret, have been co-opted and corrupted by The Forum visitors who have no incentive to behave or play along – they simply show up and engage in the simplest and most efficient way possible. How sad. How crushing. If only these morose designers could share their original design documents….

Their folly, in my view, was to treat their original intentions as merely a point of inspiration and not a goal to be achieved. Their error was to abandon their work in the face of a careless, sleepwalking opposition. The heroic path forward requires vigilance, not surrender, and if an outcome is unexpected, unwarranted, or undesirable, it may be more productive to tweak the inputs than blame the user.

We mustn’t fret that our perfect idea is lying at the bottom of the sea, five fathoms deep. We mustn’t fetishize our design documents – be it a holy book, an artwork, a game, a manifesto, or the U.S. Constitution – because design documents are merely static pleas for unrealized future intentions. They can always be corrupted, upended, misinterpreted. Have faith and patience. The hopeful paths are yet unmade, lying in wait for a thousand shuffling feet to score the way forward.


The Halter: Amazon|Barnes & Noble|Bookshop|Powell’s

Author socials: Website|Bluesky|Facebook

16:00

CodeSOD: Waiting for October [The Daily WTF]

Arguably, the worst moment for date times was the shift from the Julian to the Gregorian calendar. The upgrade took a long time, too, as some countries were still using the Julian calendar more than 300 years after the official changeover, famously featured in the likely apocryphal story about Russia arriving late for the Olympics.

At least that change didn't involve adding any extra months, unlike some of the Julian reforms, which involved adding multiple "intercalary months" to get the year back in sync after missing a pile of leap years.

Speaking of adding months, Will J sends us this "calendar" enum:

enum Calendar
{
    April = 0,
    August = 1,
    December = 2,
    February = 3,
    Friday = 4,
    January = 5,
    July = 6,
    June = 7,
    March = 8,
    May = 9,
    Monday = 10,
    November = 11,
    October = 12,
    PublicHoliday = 13,
    Saturday = 14,
    Sunday = 15,
    September = 16,
    Thursday = 17,
    Tuesday = 18,
    Wednesday = 19
}

Honestly, the weather in PublicHoliday is usually a bit too cold for my tastes. A little later into the spring, like Saturday, is usually a nicer month.

Will offers the hypothesis that some clever developer was trying to optimize compile times: obviously, emitting code for one enum has to be more efficient than emitting code for many enums. I think it more likely that someone just wanted to shove all the calendar stuff into one bucket.

Will further adds:

One of my colleagues points out that the only thing wrong with this enum is that September should be before Sunday.

Yes, arguably, since this enum clearly was meant to be sorted in alphabetical order. But that raises the question: should it be?
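
For contrast, the boring fix the article implies — months, weekdays, and holidays as separate types — takes only a few lines in any language with enums. (A Python sketch; the names and the helper function are mine, not from Will’s codebase.)

```python
from enum import IntEnum

class Month(IntEnum):
    JANUARY = 1; FEBRUARY = 2; MARCH = 3; APRIL = 4
    MAY = 5; JUNE = 6; JULY = 7; AUGUST = 8
    SEPTEMBER = 9; OCTOBER = 10; NOVEMBER = 11; DECEMBER = 12

class Weekday(IntEnum):
    MONDAY = 0; TUESDAY = 1; WEDNESDAY = 2; THURSDAY = 3
    FRIDAY = 4; SATURDAY = 5; SUNDAY = 6

# Separate types make the nonsense unrepresentable: a function that
# takes a Month can no longer be handed Saturday or PublicHoliday.
def days_in(month: Month, leap: bool = False) -> int:
    if month == Month.FEBRUARY:
        return 29 if leap else 28
    short = (Month.APRIL, Month.JUNE, Month.SEPTEMBER, Month.NOVEMBER)
    return 30 if month in short else 31
```

And with the values carrying their real calendar meaning, sorting — alphabetical or otherwise — stops being a debate at all.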

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

15:35

[$] Do androids dream of accepted pull requests? [LWN.net]

Various forms of tools, colloquially known as "AI", have been rapidly pervading all aspects of open-source development. Many developers are embracing LLM tools for code creation and review. Some project maintainers complain about suffering from a deluge of slop-laden pull requests, as well as fabricated bug and security reports. Too many projects are reeling from scraperbot attacks that effectively DDoS important infrastructure. But an AI bot flaming an open-source maintainer was not on our bingo card for 2026; that seemed a bit too far-fetched. However, it appears that is just what happened recently after a project rejected a bot-driven pull request.

Plasma 6.6.0 released [LWN.net]

Version 6.6.0 of KDE's Plasma desktop environment has been released. Notable additions in this release include the ability to create global themes for Plasma, an "extract text" feature in the Spectacle screenshot utility, accessibility improvements, and a new on-screen keyboard. See the changelog for a full list of new features, enhancements, and bug fixes.

The release is dedicated to the memory of Björn Balazs, a KDE contributor who passed away in September 2025. "Björn's drive to help people achieve the privacy and control over technology that he believed they deserved is the stuff FLOSS legends are made of."

15:21

Link [Scripting News]

Yesterday, I had to ship an envelope to the UK and got caught in dead ends at the FedEx and DHL sites. One of them said my zip code wasn't in the town I live in. How do you get past that?? These companies are losing business because their systems are broken. Maybe they worked at one time. I used ChatGPT as I often do to get help on one of these antiquated sites. And while ChatGPT has the technology and FedEx has the info, they just have to get together and upgrade the user experience, and eventually of course the AI version of the UI becomes the real one.

Link [Scripting News]

Back when I ran a software company I'd help the team understand why they should be very very nice to our customers. "Those people have our money in their pockets." It generally got a laugh partially because I was their boss, but I like to think also because it's the truth.

Link [Scripting News]

BTW, people make the same mistake with AI that we make with every new tech. We focus on the creators not the users. As users we are learning a new skill, how to specify our needs precisely. Whether this is good or bad, I don't know.

14:49

An update on upki [LWN.net]

In December 2025, Canonical announced a plan to develop a universal Public Key Infrastructure called upki. Jon Seager has published an update about the project with instructions on trying it out.

In the few weeks since we announced upki, the core revocation engine has been established and is now functional, the CRLite mirroring tool is working and a production deployment in Canonical's datacentres is ongoing. We're now preparing for an alpha release and remain on track for an opt-in preview for Ubuntu 26.04 LTS.

14:35

Link [Scripting News]

Paywalls that require you to subscribe to an Atlanta news org when you don’t live in Atlanta prob don’t generate much revenue. Why not charge per article instead? Like a toll you pay on a road you drive on once every few years. On further thought, I wouldn't even have an exception for Atlanta residents. If they start spending more money than a subscription costs, you could offer a subscription then, as a way to save money. Kind of the way Amazon lets you buy a certain amount of coffee beans without requiring you to sign up for monthly delivery. They do tell you how much you'd save if you subscribed. Everyone appreciates a chance to save money, but still might not want the commitment. And asking someone from upstate NY to subscribe to the Atlanta Journal-Constitution is total bullshit. An insult to both our intelligences.

Link [Scripting News]

My Twitter account is owned. I can't even see what people are doing with it because you have to be signed on (apparently) to read stuff on Twitter nowadays. I wish current Twitter management would put it out of its misery. It served me well for approx 20 years. Let's clean up the mess. Thanks for your attention to this matter.

VCs and CEOs don't fire your devteams yet [Scripting News]

Aram Zucker-Scharff writes "I don't want to read one more thinkpiece about blackbox AI code factories until you can show me what they've produced."

I've made the same request, and there was very little even brilliant programmers could show, including some who have become influencers in the AI space.

Here's the problem -- it takes a lot of skill and patience to make software that appears simple because it gives users what they expect. It's much easier to write utility scripts, where the user writes the code for themselves. That is very possible, esp if you use a scripting language created for it, and the AI bots are really good at that, they speak the same language we do.

But to make something easy to use by humans, I think you actually have to be a human. I've found I'm not very good at creating software that isn't for me. And I've been practicing this almost every day for over fifty freaking years. (I think freaking is the proper adjective in this situation).

Scaling, which everyone says is hard, is actually something a chatbot does quite easily imho -- because you just have to store all your data in a relational database; you can't use the local file system. That's all there is to it. They try to make it sound mysterious (the old priesthood at work) but it is actually very simple. It's so easy even ChatGPT can do it.

I know this must sound like the stuff reporters say about bloggers, but in this case it's true. ;-)

An anecdote -- I used to live in Woodside CA where a lot of the VCs live, and we'd all eat breakfast at Buck's restaurant, and around the time Netscape open sourced their browser code, the VCs were buzzing because they wouldn't have to pay for software, they'd just market the free stuff. That was a long time ago, and it did not work out that way.

14:07

Security updates for Tuesday [LWN.net]

Security updates have been issued by AlmaLinux (gimp, go-toolset:rhel8, and golang), Debian (roundcube), Fedora (gnupg2, libpng, and rsync), Mageia (dcmtk and usbmuxd), Oracle (gcc-toolset-14-binutils, gimp, gnupg2, go-toolset:ol8, golang, kernel, and openssl), Slackware (libssh, lrzip, and mozilla), SUSE (abseil-cpp, chromium, curl, elemental-toolkit, elemental-operator, expat, freerdp, iperf, libnvidia-container, libsoup, libxml2, net-snmp, openCryptoki, openssl-3, patch, protobuf, python-urllib3, python-xmltodict, python311, screen, systemd, and util-linux), and Ubuntu (alsa-lib, gnutls28, and linux-aws, linux-oracle).

13:49

AI, A2A, and the Governance Gap [Radar]

Over the past six months, I’ve watched the same pattern repeat across enterprise AI teams. A2A and ACP light up the room during architecture reviews—the protocols are elegant, the demos impressive. Three weeks into production, someone asks: “Wait, which agent authorized that $50,000 vendor payment at 2 am?“ The excitement shifts to concern.

Here’s the paradox: Agent2Agent (A2A) and the Agent Communication Protocol (ACP) are so effective at eliminating integration friction that they’ve removed the natural “brakes“ that used to force governance conversations. We’ve solved the plumbing problem brilliantly. In doing so, we’ve created a new class of integration debt—one where organizations borrow speed today at the cost of accountability tomorrow.

The technical protocols are solid. The organizational protocols are missing. We’re rapidly moving from the “Can these systems connect?“ phase to the “Who authorized this agent to liquidate a position at 3 am?“ phase. In practice, that creates a governance gap: Our ability to connect agents is outpacing our ability to control what they commit us to.

To see why that shift is happening so fast, it helps to look at how the underlying “agent stack“ is evolving. We’re seeing the emergence of a three-tier structure that quietly replaces traditional API-led connectivity:

  • Tooling: MCP (Model Context Protocol) connects agents to local data and specific tools. The “human” analog: a worker’s toolbox.
  • Context: ACP (Agent Communication Protocol) standardizes how goals, user history, and state move between agents. The “human” analog: a worker’s memory and briefing.
  • Coordination: A2A (Agent2Agent) handles discovery, negotiation, and delegation across boundaries. The “human” analog: a contract or handshake.

This stack makes multi-agent workflows a configuration problem instead of a custom engineering project. That is exactly why the risk surface is expanding faster than most CISOs realize.

Think of it this way: A2A is the handshake between agents (who talks to whom, about what tasks). ACP is the briefing document they exchange (what context, history, and goals move in that conversation). MCP is the toolbox each agent has access to locally. Once you see the stack this way, you also see the next problem: We’ve solved API sprawl and quietly replaced it with something harder to see—agent sprawl, and with it, a widening governance gap.

Most enterprises already struggle to govern hundreds of SaaS applications. One analysis puts the average at more than 370 SaaS apps per organization. Agent protocols do not reduce this complexity; they route around it. In the API era, humans filed tickets to trigger system actions. In the A2A era, agents use “Agent Cards“ to discover each other and negotiate on top of those systems. ACP allows these agents to trade rich context—meaning a conversation starting in customer support can flow into fulfillment and partner logistics with zero human handoffs. What used to be API sprawl is becoming dozens of semiautonomous processes acting on behalf of your company across infrastructure you do not fully control. The friction of manual integration used to act as a natural brake on risk; A2A has removed that brake.
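
To make “Agent Cards” concrete: in A2A, an agent advertises itself with a small metadata document that peers use for discovery and negotiation. A toy illustration of that discovery step — the field names here are simplified and representative, not the normative A2A schema:

```python
# Hypothetical, simplified Agent Card (the real thing is a JSON document
# with a richer, normative schema defined by the A2A specification).
fulfillment_card = {
    "name": "fulfillment-agent",
    "description": "Creates and tracks shipping orders",
    "url": "https://agents.example.com/fulfillment",  # illustrative endpoint
    "version": "1.4.0",
    "skills": [
        {"id": "create-shipment", "description": "Open a new shipping order"},
        {"id": "track-shipment", "description": "Report shipment status"},
    ],
}

def find_agents_with_skill(cards, skill_id):
    """Toy discovery: pick agents whose card advertises a given skill."""
    return [c["name"] for c in cards
            if any(s["id"] == skill_id for s in c["skills"])]
```

Note what the card does not carry: nothing about spending limits, data-sharing scope, or human ownership — which is precisely the governance gap discussed below.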

That governance gap doesn’t usually show up as a single catastrophic failure. It shows up as a series of small, confusing incidents where everything looks “green“ in the dashboards but the business outcome is wrong. The protocol documentation focuses on encryption and handshakes but ignores the emergent failure modes of autonomous collaboration. These are not bugs in the protocols; they’re signs that the surrounding architecture has not caught up with the level of autonomy the protocols enable.

Policy drift: A refund policy encoded in a service agent may technically interoperate with a partner’s collections agent via A2A, but their business logic may be diametrically opposed. When something goes wrong, nobody owns the end-to-end behavior.

Context oversharing: A team might expand an ACP schema to include “User Sentiment“ for better personalization, unaware that this data now propagates to every downstream third-party agent in the chain. What started as local enrichment becomes distributed exposure.

The determinism trap: Unlike REST APIs, agents are nondeterministic. An agent’s refund policy logic might change when its underlying model is updated from GPT-4 to GPT-4.5, even though the A2A Agent Card declares identical capabilities. The workflow “works“—until it doesn’t, and there’s no version trace to debug. This creates what I call “ghost breaks“: failures that don’t show up in traditional observability because the interface contract looks unchanged.

Taken together, these aren’t edge cases. They’re what happens when we give agents more autonomy without upgrading the rules of engagement between them. These failure modes have a common root cause: The technical capability to collaborate across agents has outrun the organization’s ability to say where that collaboration is appropriate, and under what constraints.

That’s why we need something on top of the protocols themselves: an explicit “Agent Treaty” layer. If the protocol is the language, the treaty is the constitution. Governance must move from “side documentation” to “policy as code.”


Traditional governance treats policy violations as failures to prevent. An antifragile approach treats them as signals to exploit. When an agent makes a commitment that violates a business constraint, the system should capture that event, trace the causal chain, and feed it back into both the agent’s training and the treaty ruleset. Over time, the governance layer gets smarter, not just stricter.

Define treaty-level constraints: Don’t just authorize a connection; authorize a scope. Which ACP fields is an agent allowed to share? Which A2A operations are “read only” versus “legally binding”? Which categories of decisions require human escalation?
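Those three scope questions can be encoded directly. A hedged sketch, with the treaty fields, operation names, and context fields invented for illustration (not drawn from the actual A2A or ACP specifications):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Treaty:
    """Illustrative scope declaration between two agents."""
    shareable_fields: frozenset  # ACP context fields allowed to cross
    read_only_ops: frozenset     # safe, non-committing operations
    binding_ops: frozenset       # operations that commit the company
    escalate: frozenset          # decision categories needing a human

TREATY = Treaty(
    shareable_fields=frozenset({"order_id", "sku", "region"}),
    read_only_ops=frozenset({"get_status", "list_options"}),
    binding_ops=frozenset({"issue_refund", "place_order"}),
    escalate=frozenset({"refund_over_limit"}),
)

def authorize(op, context_fields, treaty=TREATY):
    """Allow an operation only within the treaty's declared scope."""
    if not set(context_fields) <= treaty.shareable_fields:
        return ("deny", "context field outside treaty scope")
    if op in treaty.read_only_ops:
        return ("allow", "read-only")
    if op in treaty.binding_ops:
        return ("escalate", "binding commitment requires review")
    return ("deny", "unknown operation")
```

The point of the frozen dataclass is that a treaty is a reviewed artifact, not something an agent mutates at runtime.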

Version the behavior, not just the schema: Treat Agent Cards as first-class product surfaces. If the underlying model changes, the version must bump, triggering a re-review of the treaty. This is not bureaucratic overhead—it’s the only way to maintain accountability in a system where autonomous agents make commitments on behalf of your organization.
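As a sketch of that rule, assuming a simplified card structure (the real A2A Agent Card schema differs; the `model_id` and `treaty_review` fields are inventions for this example):

```python
def needs_version_bump(published_card, current_model_id):
    """The card's declared model must match the deployed model."""
    return published_card["model_id"] != current_model_id

def bump_minor(card, new_model_id):
    """Bump the minor version and flag the treaty for re-review."""
    major, minor, patch = map(int, card["version"].split("."))
    return {**card,
            "version": f"{major}.{minor + 1}.0",
            "model_id": new_model_id,
            "treaty_review": "pending"}

card = {"name": "refund-agent", "version": "1.3.0", "model_id": "gpt-4"}
```

A CI check that runs `needs_version_bump` against the deployed model would catch the GPT-4 to GPT-4.5 swap described above before it ships behind an unchanged card.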

Cross-organizational traceability: We need observability traces that don’t just show latency but show intent: Which agent made this commitment, under which policy? And who is the human owner? This is particularly critical when workflows span organizational boundaries and partner ecosystems.

Designing that treaty layer isn’t just a tooling problem. It changes who needs to be in the room and how they think about the system. The hardest constraint isn’t the code; it’s the people. We’re entering a world where engineers must reason about multi-agent game theory and policy interactions, not just SDK integration. Risk teams must audit “machine-to-machine commitments” that may never be rendered in human language. Product managers must own agent ecosystems where a change in one agent’s reward function or context schema shifts behavior across an entire partner network. Compliance and audit functions need new tools and mental models to review autonomous workflows that execute at machine speed. In many organizations, those skills sit in different silos, and A2A/ACP adoption is proceeding faster than the cross-functional structures needed to manage them.

All of this might sound abstract until you look at where enterprises are in their adoption curve. Three converging trends are making this urgent: Protocol maturity means A2A, ACP, and MCP specifications have stabilized enough that enterprises are moving beyond pilots to production deployments. Multi-agent orchestration is shifting from single agents to agent ecosystems and workflows that span teams, departments, and organizations. And silent autonomy is blurring the line between “tool assistance” and “autonomous decision-making”—often without explicit organizational acknowledgment. We’re moving from integration (making things talk) to orchestration (making things act), but our monitoring tools still only measure the talk. The next 18 months will determine whether enterprises get ahead of this or we see a wave of high-profile failures that force retroactive governance.

The risk is not that A2A and ACP are unsafe; it’s that they are too effective. If your team is piloting these protocols, stop focusing on the “happy path” of connectivity. Instead, pick one multi-agent workflow and instrument it as a critical product:

Map the context flow: Every ACP field must have a “purpose limitation” tag. Document which agents see which fields, and which business or regulatory requirements justify that visibility. This isn’t an inventory exercise; it’s a way to surface hidden data dependencies.
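A purpose-limitation tag can be as simple as a registry mapping each field to the purposes that justify sharing it, with outgoing context filtered per hop. The field names and purposes below are assumptions for the sketch:

```python
# Illustrative registry: which purposes justify sharing each ACP field.
FIELD_PURPOSES = {
    "order_id":       {"fulfillment", "support"},
    "shipping_addr":  {"fulfillment"},
    "user_sentiment": {"support"},  # local enrichment stays with support
}

def redact_for_purpose(context, purpose, registry=FIELD_PURPOSES):
    """Forward only fields whose declared purposes cover this hop."""
    return {k: v for k, v in context.items()
            if purpose in registry.get(k, set())}

ctx = {"order_id": "A-1001",
       "shipping_addr": "221B Baker St",
       "user_sentiment": "frustrated"}
```

With this in place, the “User Sentiment” oversharing scenario from earlier fails closed: a field with no declared purpose for a hop simply never leaves.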

Audit the commitments: Identify every A2A interaction that represents a financial or legal commitment—especially ones that don’t route through human approval. Ask, “If this agent’s behavior changed overnight, who would notice? Who is accountable?”

Code the treaty: Prototype a “gatekeeper” agent that enforces business constraints on top of the raw protocol traffic. This isn’t about blocking agents; it’s about making policy visible and enforceable at runtime. Start minimal: one policy, one workflow, one success metric.
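A minimal gatekeeper, with the message shape and the single policy invented for illustration:

```python
# One policy, one workflow: refunds above a threshold escalate to a human.
MAX_AUTONOMOUS_REFUND = 100  # assumed business constraint, in dollars

def gatekeeper(message, audit_log):
    """Enforce one business constraint on raw agent-to-agent traffic."""
    if (message.get("op") == "issue_refund"
            and message.get("amount", 0) > MAX_AUTONOMOUS_REFUND):
        audit_log.append({"blocked": message, "reason": "over limit"})
        return {"status": "escalated", "to": "human_review"}
    audit_log.append({"passed": message})
    return {"status": "forwarded"}
```

Everything still flows through the audit log, so the gatekeeper doubles as the observability point: the success metric can be as simple as the ratio of escalations to forwarded messages over time.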

Instrument for learning: Capture which agents collaborate, which policies they invoke, and which contexts they share. Treat this as telemetry, not just audit logs. Feed patterns back into governance reviews quarterly.
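Telemetry here can start as plain aggregation over governance events, answering exactly the three questions above: who collaborates, which policies fire, and which fields get shared. The event shape is an assumption for the sketch:

```python
from collections import Counter

def summarize(events):
    """Aggregate who collaborates, which policies fire, what is shared."""
    return {
        "pairs": Counter((e["from"], e["to"]) for e in events),
        "policies": Counter(e["policy"] for e in events),
        "fields": Counter(f for e in events for f in e["fields"]),
    }

# Hypothetical events, as a gatekeeper or trace collector might emit them.
sample_events = [
    {"from": "support", "to": "fulfillment",
     "policy": "refund_scope", "fields": ["order_id"]},
    {"from": "support", "to": "billing",
     "policy": "refund_scope", "fields": ["order_id", "amount"]},
]
```

A quarterly governance review then starts from these counters: a field that suddenly appears in many more hops, or a policy that never fires, is a signal worth investigating.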

If this works, you now have a repeatable pattern for scaling agent deployments without sacrificing accountability. If it breaks, you’ve learned something critical about your architecture before it breaks in production. If you can get one workflow to behave this way—governed, observable, and learn-as-you-go—you have a template for the rest of your agent ecosystem.

If the last decade was about treating APIs as products, the next one will be about treating autonomous workflows as policies encoded in the traffic between agents. The protocols are ready. Your org chart is not. The good news: you don’t need to redesign your entire organization. You need one critical layer, the Agent Treaty, that makes policy machine-enforceable, observable, and learnable; engineers who think about composition and game theory, not just connection; and a commitment to treating agent deployments as products, not infrastructure. Start building that layer before your agents start signing deals without you.

The sooner you start, the sooner that governance gap closes.

12:35

Side-Channel Attacks Against LLMs [Schneier on Security]

Here are three papers describing different side-channel attacks against LLMs.

“Remote Timing Attacks on Efficient Language Model Inference”:

Abstract: Scaling up language models has significantly increased their capabilities. But larger models are slower models, and so there is now an extensive body of work (e.g., speculative sampling or parallel decoding) that improves the (average case) efficiency of language model generation. But these techniques introduce data-dependent timing characteristics. We show it is possible to exploit these timing differences to mount a timing attack. By monitoring the (encrypted) network traffic between a victim user and a remote language model, we can learn information about the content of messages by noting when responses are faster or slower. With complete black-box access, on open source systems we show how it is possible to learn the topic of a user’s conversation (e.g., medical advice vs. coding assistance) with 90%+ precision, and on production systems like OpenAI’s ChatGPT and Anthropic’s Claude we can distinguish between specific messages or infer the user’s language. We further show that an active adversary can leverage a boosting attack to recover PII placed in messages (e.g., phone numbers or credit card numbers) for open source systems. We conclude with potential defenses and directions for future work.

“When Speculation Spills Secrets: Side Channels via Speculative Decoding in LLMs”:

Abstract: Deployed large language models (LLMs) often rely on speculative decoding, a technique that generates and verifies multiple candidate tokens in parallel, to improve throughput and latency. In this work, we reveal a new side-channel whereby input-dependent patterns of correct and incorrect speculations can be inferred by monitoring per-iteration token counts or packet sizes. In evaluations using research prototypes and production-grade vLLM serving frameworks, we show that an adversary monitoring these patterns can fingerprint user queries (from a set of 50 prompts) with over 75% accuracy across four speculative-decoding schemes at temperature 0.3: REST (100%), LADE (91.6%), BiLD (95.2%), and EAGLE (77.6%). Even at temperature 1.0, accuracy remains far above the 2% random baseline—REST (99.6%), LADE (61.2%), BiLD (63.6%), and EAGLE (24%). We also show the capability of the attacker to leak confidential datastore contents used for prediction at rates exceeding 25 tokens/sec. To defend against these, we propose and evaluate a suite of mitigations, including packet padding and iteration-wise token aggregation.

“Whisper Leak: a side-channel attack on Large Language Models”:

Abstract: Large Language Models (LLMs) are increasingly deployed in sensitive domains including healthcare, legal services, and confidential communications, where privacy is paramount. This paper introduces Whisper Leak, a side-channel attack that infers user prompt topics from encrypted LLM traffic by analyzing packet size and timing patterns in streaming responses. Despite TLS encryption protecting content, these metadata patterns leak sufficient information to enable topic classification. We demonstrate the attack across 28 popular LLMs from major providers, achieving near-perfect classification (often >98% AUPRC) and high precision even at extreme class imbalance (10,000:1 noise-to-target ratio). For many models, we achieve 100% precision in identifying sensitive topics like “money laundering” while recovering 5-20% of target conversations. This industry-wide vulnerability poses significant risks for users under network surveillance by ISPs, governments, or local adversaries. We evaluate three mitigation strategies – random padding, token batching, and packet injection – finding that while each reduces attack effectiveness, none provides complete protection. Through responsible disclosure, we have collaborated with providers to implement initial countermeasures. Our findings underscore the need for LLM providers to address metadata leakage as AI systems handle increasingly sensitive information.

11:07

Pluralistic: What's a "gig work minimum wage" (17 Feb 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



A figure in a rich robe sitting atop a throne, surrounded by bags of money; his face is masked by a robber's balaclava. Beneath the throne stream densely packed cars on a nighttime freeway. Behind him is a car's broken windscreen with an Uber logo in one corner.

What's a "gig work minimum wage" (permalink)

"Minimum wage" is one of those odd concepts that seems to have an intuitive definition, but the harder you think about it, the more complicated it gets. For example, if you want to work, but can't find a job, then the minimum wage you'll get is zero:

https://web.archive.org/web/20200625043843/https://www.latimes.com/entertainment-arts/books/story/2020-06-24/forget-ubi-says-an-economist-its-time-for-universal-basic-jobs

That's why politicians like Avi Lewis (who is running for leader of Canada's New Democratic Party) has called for a jobs guarantee: a government guarantee of a good job at a socially inclusive wage for everyone who wants one:

https://lewisforleader.ca/ideas/dignified-work-full-plan

(Disclosure: I have advised the Lewis campaign on technical issues and I have endorsed his candidacy.)

If that sounds Utopian or Communist to you (or both), consider this: it was the American jobs guarantee that delivered America's system of national parks, among many other achievements:

https://en.wikipedia.org/wiki/Civilian_Conservation_Corps

The idea of a wage for everyone who wants a job is just one interesting question raised by the concept of a "minimum wage." Even when we're talking about people who have wages, the idea of a "minimum wage" is anything but straightforward.

Take gig workers: the rise of Uber and its successors created an ever-expanding class of workers who are misclassified as independent contractors by employers, seeking to evade unionization, benefits and liability. It's a weird kind of "independent contractor" who gets punished for saying no to lowball offers, has to decorate their personal clothes and/or cars in their "client's" livery, and who has every movement scripted by an app controlled by their "client":

https://pluralistic.net/2024/02/02/upward-redistribution/

The pretext that a worker is actually a standalone small business confers another great advantage on their employers: it’s a great boon to any boss who wants to steal their workers’ wages. I’m not talking about stealing tips here (though gig-work platforms do steal tips, like crazy):

https://www.nyc.gov/mayors-office/news/2026/01/mayor-mamdani-announces–5-million-settlement–reinstatement-of-

I'm talking about how gig-work platforms define their workers' wages in the first place. This is a very salient definition in public policy debates. Gig platforms facing regulation or investigation routinely claim that their workers are paid sky-high wages. During the debate over California's Prop 22 (in which Uber and Lyft spent more than $225m to formalize worker misclassification), gig companies agreed to all kinds of reasonable-sounding wage guarantees:

https://pluralistic.net/2020/10/14/final_ver2/#prop-22

When Toronto was grappling with the brutal effect that gig-work taxis have on the city's world-beatingly bad traffic, Uber promised to pay its drivers "120% of the minimum wage," which would come out to $21.12 per hour. However, the real wage Uber was proposing to pay its drivers came out to about $2.50 per hour:

https://pluralistic.net/2024/02/29/geometry-hates-uber/#toronto-the-gullible

How to explain the difference? Well, Uber – and its gig-work competitors – only pay drivers while they have a passenger – or an item – in the car. Drivers are not paid for the time they spend waiting for a job or the time they spend getting to the job. This is the majority of time that a gig driver spends working for the platform, and by excluding the majority of time a driver is on the clock, the company can claim to pay a generous wage while actually paying peanuts.

Now, at this point, you may be thinking that this is only fair, or at least traditional. Livery cab drivers don’t get paid unless they have a fare in the cab, right?

That's true, but livery cab drivers have lots of ways to influence that number. They can shrewdly choose a good spot to cruise. They can give their cellphone numbers to riders they've established a rapport with in order to win advance bookings. In small towns with just a few drivers – or in cities where drivers are in a co-op – they can spend some of their earnings to advertise the taxi company. Livery drivers can offer discounts to riders going a long way. It's a tough job, but it's one in which workers have some agency.

Contrast that with driving for Uber: Uber decides which drivers get to even see a job. Uber decides how to market its services. Uber gets to set fares, on a per-passenger basis, meaning that it might choose to scare some passengers off of a few of their rides with high prices, in a bid to psychologically nudge that passenger into accepting higher fares overall.

At the same time, Uber is reliant on a minimum pool of drivers cruising the streets, on the clock but off the payroll. If riders had to wait 45 minutes to get an Uber, they'd make other arrangements. If it happened too often, they'd delete the app. So Uber can't survive without those cruising, unpaid drivers, who provide the capacity that make the company commercially viable.

What's more, livery cab drivers aren't the only comparators for gig-work platforms. Many gig workers deliver food, meaning that we should compare them to, say, pizza delivery drivers. These drivers aren't just paid when they have a pizza in the car and they're driving to a customer's home. They're paid from the moment they clock onto their shift to the moment they clock off (plus tips).

Now, obviously, this is more expensive for employers, but the Uber Eats arrangement – in which drivers are only paid when they've got a pizza in the car and they're en route to a customer – doesn't eliminate that expense. When a gig delivery company takes away the pay that drivers used to get while waiting for a pizza, they're shifting this expense from employers to workers:

https://pluralistic.net/2025/08/20/billionaireism/#surveillance-infantalism

The fact that Uber can manipulate the concept of a minimum wage in order to claim to pay $21.12/hour to drivers who are making $2.50 per hour creates all kinds of policy distortions.

Take Seattle: in 2024, the city implemented a program called "PayUp" that sets a "minimum wage" for drivers, but it's not a real minimum wage. It's a minimum payment for every ride or delivery.

A new National Bureau of Economic Research paper analyzes the program and concludes that it hasn't increased drivers' pay at all:

https://www.nber.org/papers/w34545

To which we might say, "Duh." Cranking up the sum paid for a small fraction of the work you do for a company will have very little impact on the overall wage you receive from the company.

However, there is an interesting wrinkle in this paper's conclusions. Drivers aren't earning less under this system, either. So they're getting paid more for every delivery, but they're not adding more deliveries to their day. In other words, they're doing less work and then clocking off:

https://marginalrevolution.com/marginalrevolution/2026/02/minimum-wages-for-gig-work-cant-work.html

A neoclassical economist (someone who has experienced a specific form of neurological injury that makes you incapable of perceiving or reasoning about power) would say that this means that the drivers only desire to earn the sums they were earning before the "minimum wage" and so the program hasn't made a difference to their lives.

But anyone else can look at this situation and understand that drivers only did this shitty job out of desperation. They had a sum they needed to get every month in order to pay the rent or the grocery bill. They have lots of needs besides those that they would like to fulfill, but not under the shitty gig-work app conditions. The only reason they tolerate a shitty app as their shitty boss at all is that they are desperate, and that desperation gives gig companies power over their workers.

In other words, Seattle's PayUp "minimum wage" has shifted some of the expense associated with operating a gig platform from workers back onto their bosses. With fewer drivers available on the app, waiting times for customers will necessarily go up. Some of those customers will take the bus, or get a livery cab, or defrost a pizza, or walk to the corner cafe. For the gig platforms to win those customers back, they will have to reduce waiting times, and the most reliable way to do that is to increase the wages paid to their workers.

So PayUp isn't a wash – it has changed the distributional outcome of the gig-work economy in Seattle. Drivers have clawed back a surplus – time they can spend doing more productive or pleasant things than cruising and waiting for a booking – from their bosses, who now must face lower profits, either from a loss of business from impatient customers, or from a higher wage they must pay to get those wait-times down again.

But if you want to really move the needle on gig workers’ wages, the answer is simple: pay workers for all the hours they put in for their bosses, not just the hours their bosses decide deserve pay.

(Image: Tobias "ToMar" Maier, CC BY-SA 3.0; Jon Feinstein, CC BY 2.0; modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago HOWTO resist warrantless searches at Best Buy https://www.die.net/musings/bestbuy/

#20yrsago RIAA using kids’ private info to attack their mother https://web.archive.org/web/20060223111437/http://p2pnet.net/story/7942

#20yrsago Sony BMG demotes CEO for deploying DRM https://web.archive.org/web/20060219233817/http://biz.yahoo.com/ap/060210/germany_sony_bmg_ceo.html?.v=7

#20yrsago Sistine Chapel recreated through 10-year cross-stitch project https://web.archive.org/web/20060214195146/http://www.austinstitchers.org/Show06/images/sistine2.jpg

#15yrsago Selling cookies like a crack dealer, by dangling a string out your kitchen window https://laughingsquid.com/cookies-sold-by-string-dangling-from-san-francisco-apartment-window/

#15yrsago Midwestern Tahrir: Workers refuse to leave Wisconsin capital over Tea Party labor law https://www.theawl.com/2011/02/wisconsin-demonstrates-against-scott-walkers-war-on-unions/

#10yrsago Back-room revisions to TPP sneakily criminalize fansubbing & other copyright grey zones https://www.eff.org/deeplinks/2016/02/sneaky-change-tpp-drastically-extends-criminal-penalties

#10yrsago Russian Central Bank shutting down banks that staged fake cyberattacks to rip off depositors https://web.archive.org/web/20160220100817/http://www.scmagazine.com/russian-bank-licences-revoked-for-using-hackers-to-withdraw-funds/article/474477/

#10yrsago Stop paying your student loans and debt collectors can send US Marshals to arrest you https://web.archive.org/web/20201026202024/https://nymag.com/intelligencer/2016/02/us-marshals-forcibly-collecting-student-debt.html?mid=twitter-share-di

#5yrsago Reverse centaurs and the failure of AI https://pluralistic.net/2021/02/17/reverse-centaur/#reverse-centaur

#1yrago Business school professors trained an AI to judge workers' personalities based on their faces https://pluralistic.net/2025/02/17/caliper-ai/#racism-machine


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1148 words today, 30940 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

10:07

Misguided optimization [Seth's Blog]

Industrialism brought us the idea of optimization. Incremental improvements combined with measurement to gradually improve results.

We can optimize for precision. A car made in 2026 is orders of magnitude more reliable because the parts fit together so well.

We can optimize for customer satisfaction. By reviewing every element of a user’s experience, we can remove the annoyances and increase delight.

We can optimize a horror movie to make it scarier, and we can optimize a workout to make it more effective.

Lately, though, the fad is to optimize for short-term profit.

This will probably get you a bonus. It means degrading the experience of customers, suppliers and employees in exchange for maximizing quarterly returns.

Make a list of every well-known organizational failure (from big firms like Yahoo to Enron to Sears all the way to the little pizza place down the block) and you’ll see the short-term optimizer’s fingerprints.

You can’t profit maximize your way to greatness.

09:21

Wrestling With My Body by Inam [Oh Joy Sex Toy]

Wrestling With My Body by Inam

Inam discovers Play Fighting in their explorations of their disability and desires. A lovely piece of autobio, exploring fast, intimate moments of playful fighting that help us reconsider our own limitations. Instagram I didn’t know that Play Fighting was a thing, and it truly delights me. This is Inam’s third comic for us! You […]

08:35

Russell Coker: Links February 2026 [Planet Debian]

Charles Stross has a good theory of why “AI” is being pushed on corporations, really we need to just replace CEOs with LLMs [1].

This disturbing and amusing article describes how an OpenAI investor appears to be having psychological problems related to SCP-based text generated by ChatGPT [2]. Definitely going to be a recursive problem as people who believe in it invest in it.

An interesting analysis of dbus, with a design for a more secure replacement [3].

Scott Jenson gave an insightful lecture for Canonical about future potential developments in the desktop UX [4].

Ploum wrote an insightful article about the problems caused by the Github monopoly [5]. Radicale sounds interesting.

Niki Tonsky wrote an interesting article about the UI problems with Tahoe (the latest macOS release) caused by trying to make an icon for everything [6]. It’s well researched, and they have a really good writing style.

Fil-C is an interesting project to compile C/C++ programs in a memory safe way, some of which can be considered a software equivalent of CHERI [7].

Brian Krebs wrote a long list of the ways that Trump has enabled corruption and a variety of other crimes including child sex abuse in the last year [8].

This video about designing a C64 laptop is a masterclass in computer design [9].

Salon has an interesting article about the abortion thought experiment that conservatives can’t handle [10].

Ron Garrett wrote an insightful blog post about abortion [11].

Bruce Schneier and Nathan E. Sanders wrote an insightful article about the potential of LLM systems for advertising and enshittification [12]. We need serious legislation about this ASAP!

02:56

New Cover: “But Not Tonight” [Whatever]

I was not expecting to make another cover so soon, so, uh, surprise: A cover of Depeche Mode’s most cheerful song, done as if Erasure decided to take a crack at it. Why did I do this? Because I was trying to clean up a previous version of this song that I did (it was sonically a little smeary, and I hadn’t learned how to edit out the moments when I loudly took in breaths), which necessitated laying down a new vocal track, and once I did that, one thing led to another, and here we are.

I am actually really happy with this one. I did harmonies! Intentionally! Also, I do think it really does sound kinda like Erasure covering Depeche Mode (if such a thing is possible, considering the bands share Vince Clarke in common). I mean, I don’t sing like Andy Bell, but then, who does? So, fine. Good enough for an afternoon! Enjoy.

— JS

02:21

Drag Race Episode Seven: The Rudemption of Myki Meeks [The Stranger]

We’re finally done with Rate-A-Queen! The cast is back to our regularly scheduled programming with parodies of hot-button political issues. In the words of Jane Don’t, “it’s a good day to be a clown.” by Mike Kohfeld


“Y’all are playing chess, I’m playing checkers. Wait, what’s the thing?”

Episode Seven began with the queens still reeling from their two-week Rate-A-Queen ordeal, in which the Miami alliance came out on top.

On Drag Race, queens love to talk too much after winning challenges or getting safe placements. Athena did the same, insisting her play was honest and not at all about strategy while the other queens rolled their eyes. When you’ve just won a challenge, it’s best to keep your mouth shut.

As if this wasn’t enough, the queens were given the Rate-A-Queen receipts. Mia looked stressed to see her ratings exposed… as if the producers were going to let any opportunity for drama slide. Nini was pissed that everyone had given her mid ratings for her Mother Mantis bit, and let it get into her head: “Does everybody not like me?”

Kenya was pleased to have avoided the bottom through her alliance-building. “Y’all are playing chess, I’m playing checkers. Wait, what’s the thing?” Bless her.

Myki Meeks was rated in the bottom by the queens despite having a strong talent act, and the receipts nearly brought her to tears. In a T-shirt that said REVENGE, Myki looked ready to prove herself this week. Maybe she’ll go full Arya Stark and start snatching faces.

Emmy-Baiting Drag Politics

If you’re not living under a rock, you know the 2026 midterm elections are going to be crucial for prying at least a little bit of power away from the world’s worst people. Drag Race celebrated the occasion by bringing us “totally twisted political ads that parody today’s most polarizing issues.” RuPaul added: “I deserve a fucking Emmy for that line.”

The queens had a serious moment talking about the difficulty of living in red states with drag bans and the rise of violence against queer people during Trump’s second term. The most visceral account was Discord’s experience with a lifetime friend and roommate. Radicalized by right-wing anti-queer rhetoric seemingly overnight, they destroyed almost all of Discord’s drag and artwork. Discord compared the current conservative movement to a cult. Hear, hear.

Mia balanced out the heaviness of the political discussion with a spontaneous dance party. It was the kind of genuine moment that has been missing in contemporary seasons of Drag Race.

The Future Liberals Want: Foreign Trade and Breastplate Socialism

The Main Challenge began when the queens were given five propositions on draggy subjects like breastplate entitlements, kai kai bans, and adding clowns to the LGBTQIA+ umbrella, paralleling real-world issues like bodily autonomy, trans rights, and immigration. I had to remind myself that this is a reality television show about drag queens acting stupid, but was interested to see how the cast would navigate the line between comedy and critique.

Discord and Nini did a sound job with “Prop Kiki.” Discord adopted a pro-kai kai stance as the sultry, sister-loving Lydia Liquorup. (Queer vocabulary lesson #1: to kiki is to chat, gossip, or tell stories; kai kai refers to sexual relations between drag queens.) Discord’s stage-whispered hook, “date a sister,” is destined to become a queer vocal stim in the manner of Valentina and Naomi Smalls’ “Club 96” (All Stars Season 4) or Alaska’s “your makeup is terrible” (Season 5). Nini struggled while recording the skit, but turned out a conservative church lady arguing against sister-dating, keeping the pair safe.

Darlene and Vita could not have been more dissimilar in their performances for “Prop 4Real.” Vita has struggled in past performance challenges, and this week was no different. She landed in the bottom for her stiff portrayal of a “traditional” drag queen.

In contrast, Darlene’s “bedroom queen bimbo” was hysterical, with the judges calling her performance “really stupid.” So stupid, in fact, that Darlene earned a top placement for the week.

Athena and Myki had fully-realized and memorable characters for “Prop 6969,” which sought to ban foreign trade (Queer vocabulary lesson #2: “trade” is queer slang for a masculine, straight-acting man).

Athena sold us an eerily convincing Republicanesque character named Connie Cumminside against Prop 6969. Her lustful desire to ban trade was giving MAGA backlash to Bad Bunny’s recent Super Bowl LX performance.

Myki was the standout of the week, with a punny performance arguing for steamy relations with foreign trade: “I’m concerned American citizen Stephanie Miller. But you can call me Lollipop!” Her playful irreverence won her the challenge. It felt like a karmic rebalance after Rate-A-Queen.

Meanwhile, Mia and Juicy struggled to write material for “Prop DD,” where Mia argued to require breastplates and padding for all drag queens while Juicy embraced a natural, environmentally-friendly “hog body.” Mia got some laughs, but Juicy floundered.

The pair fell into the bottom three. (If there had been a lip-sync-for-your-life between Mia and Juicy this week after they tied in a lip-sync-for-the-win two episodes ago, my wig would have flown into the troposphere.)

Can Somebody Just Treat My Gonorrhea?

Jane Don’t and Kenya worked together for “Prop C,” naming the pros and cons of adding clowns to the LGBTQIA+ acronym. Arguing against Prop C, Kenya played a decorated diva concerned about how “drag bars have been held captive by silly-ass drag queens who prioritize jokes and concepts over gowns.” Not in Seattle, surely! *clutches pearls*

Jane Don’t played Daisy Funbuttons, the gonorrhea-ridden Professor of Nose-Honking at Pacoima Community Clown College (this is literally the stupidest sentence I’ve ever written). Her performance was Drag Race comedy perfection, and she was ranked in the top by the judges. We really need to just crown her now. Or at the very least, get her some antibiotics.


I Can See Right Through Her

These Season 18 girls brought some serious budget to the main stage, and the see-through outfits of Episode Seven did not disappoint.

Nini’s candy-wrapper look was sublime. If winning was solely about runway looks, Nini would be in the number one spot.

Jane’s Leigh Bowery-inspired checkered bodysuit with a short sheer pink dress fit the brief, but wasn’t as spectacular as her past looks. I later learned that she crafted it last-minute because her original designer didn’t deliver this look on time. What is it with late designers for these queens!?


For her see-through business suit, Myki Meeks expressed, “the quality I cherish most in a workplace is transparency.” Snaps, girl. The judges loved it too, with RuPaul exclaiming, “this is what the whores wear in Seattle!” Maybe Myki can come live here, too.

Vita’s Last Act

Juicy’s Met Gala-worthy tulle fantasy and Vita’s divine water goddess (it was giving Yemayá) were superb, but their performances landed them in the bottom. The rest of the cast reacted with shock. “Vita versus Juicy? Two people I thought were gonna make it to the end!” said Discord. “I don’t even wanna watch this.”

But this was must-see TV. Vita held her own, but there is no stopping the elemental force that is Juicy Love Dion on the mainstage. Set to Dua Lipa’s “Houdini,” Juicy swept the lip-sync with grace, emotion, and jaw-dropping skill, including a handstand that tipped backwards into a split. There was no way RuPaul was going to let Juicy sashay away, and Vita was given the boot. I hope to see her in All Stars!

Next week, it’s the challenge you’ve been waiting for (or dreading): Snatch Game! Either way, this is not one to miss. I’m ready for Jane to earn a second win!

00:00

Antoine Beaupré: Keeping track of decisions using the ADR model [Planet Debian]

In the Tor Project system administrators' team (colloquially known as TPA), we've recently changed how we make decisions, which means you'll get clearer communications from us about upcoming changes or targeted questions about a proposal.

Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams, inside and outside Tor, to evaluate this process and see if it can improve their own processes and documentation.

The new process

We had traditionally been using a "RFC" ("Request For Comments") process and have recently switched to "ADR" ("Architecture Decision Record").

The ADR process is, for us, pretty simple. It consists of three things:

  1. a simpler template
  2. a simpler process
  3. communication guidelines separate from the decision record

The template

As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. The TPA variation of the template is similarly simple, as it has only 5 headings, and is worth quoting in full:

  • Context: What is the issue that we're seeing that is motivating this decision or change?

  • Decision: What is the change that we're proposing and/or doing?

  • Consequences: What becomes easier or more difficult to do because of this change?

  • More Information (optional): What else should we know? For larger projects, consider including a timeline and cost estimate, along with the impact on affected users (perhaps including existing Personas). Generally, this includes a short evaluation of alternatives considered.

  • Metadata: status, decision date, decision makers, consulted, informed users, and link to a discussion forum

The previous RFC template had 17 (seventeen!) headings, which encouraged much longer documents. Now, the decision record will be easier to read and digest at one glance.
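To make the template concrete, here is a hypothetical record filled into those headings (invented for illustration; not one of TPA's actual ADRs):

```markdown
# ADR-XXX: Example decision record

## Context
The backup jobs for service X regularly exceed their maintenance window,
delaying morning deploys.

## Decision
Move the backup jobs to an off-peak schedule and compress archives before
transfer.

## Consequences
Deploys are no longer delayed; restores take slightly longer because
archives must be decompressed first.

## Metadata
- Status: proposed
- Decision date: YYYY-MM-DD
- Decision makers: team lead
- Consulted: service operators
- Informed: all service users
- Discussion: link to the GitLab issue
```

A record this size fits on one screen, which is the point: anything longer (pricing tables, detailed alternatives) belongs in the linked discussion issue.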

An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping all sorts of details, like pricing or in-depth alternative comparisons, into the document, we record those in the discussion issue, keeping the document shorter.

The process

The whole process is simple enough that it's worth quoting in full as well:

Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.

Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple.

A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision and the new template (and process) clarifies decision makers, for each decision.

Conversely, some decisions degenerate into endless discussions around trivial issues because too many stakeholders are consulted, a problem known as the Law of triviality, also known as the "Bike Shed syndrome".

The new process better identifies stakeholders:

  • "informed" users (previously "affected users")
  • "consulted" (previously undefined!)
  • "decision maker" (instead of the vague "approval")

Picking those stakeholders is still tricky, but our definitions are more explicit and aligned to the classic RACI matrix (Responsible, Accountable, Consulted, Informed).

Communication guidelines

Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from communicating about the decision. Those are two radically different problems to solve. We have found that a single document can't serve both purposes.

Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, to keep things simple.

How we got there

The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. Then, in parallel, I was reviewing our own RFC process, prompted by Jacob Kaplan-Moss's criticism of RFCs. Essentially, he argues that:

  1. the RFC process "doesn't include any sort of decision-making framework"
  2. "RFC processes tend to lead to endless discussion"
  3. the process "rewards people who can write to exhaustion"
  4. "these processes are insensitive to expertise", "power dynamics and power structures"

And, indeed, I have been guilty of a lot of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted through exhaustion, or ignored because the right stakeholders weren't looped in.

Our discussion issue on the topic has more details on the issues I found with our RFC process. But to give credit to the old process, it did serve us well while it was there: it's better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) made over the course of 6 years of work.

What's next?

We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late.

Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but, hopefully, worth it, as the result is more digestible.

We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in people who are already using a similar process, or who will adopt one after reading this.

Note: this article was also published on the Tor Blog.

Monday, 16 February

23:28

Joe Marshall: binary-compose-left and binary-compose-right [Planet Lisp]

If you have a unary function F, you can compose it with function G, H = F ∘ G, which means H(x) = F(G(x)). Instead of running x through F directly, you run it through G first and then run the output of G through F.

If F is a binary function, then you either compose it with a unary function G on the left input: H = F ∘left G, which means H(x, y) = F(G(x), y) or you compose it with a unary function G on the right input: H = F ∘right G, which means H(x, y) = F(x, G(y)).

(binary-compose-left f g)  = (λ (x y) (f (g x) y))
(binary-compose-right f g) = (λ (x y) (f x (g y)))

We could extend this to ternary functions and beyond, but it is less common to want to compose functions with more than two inputs.

binary-compose-right comes in handy when combined with fold-left. This identity holds

 (fold-left (binary-compose-right f g) acc lst) <=>
   (fold-left f acc (map g lst))

but the right-hand side is less efficient because it requires an extra pass through the list to map g over it before folding. The left-hand side is more efficient because it composes g with f on the fly as it folds, so it only requires one pass through the list.
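For readers who want to try the identity outside of Lisp, here is a sketch of the same idea in Python (a translation, not Marshall's code), using `functools.reduce` as the analogue of `fold-left`:

```python
from functools import reduce

def binary_compose_left(f, g):
    """H(x, y) = f(g(x), y): run the left argument through g first."""
    return lambda x, y: f(g(x), y)

def binary_compose_right(f, g):
    """H(x, y) = f(x, g(y)): run the right argument through g first."""
    return lambda x, y: f(x, g(y))

# The fold identity: folding with binary_compose_right(f, g) gives the
# same result as mapping g over the list and then folding with f, but
# it processes the list in a single pass.
lst = [1, 2, 3, 4]
square = lambda n: n * n
add = lambda acc, n: acc + n

lhs = reduce(binary_compose_right(add, square), lst, 0)  # one pass
rhs = reduce(add, map(square, lst), 0)                   # map, then fold
assert lhs == rhs == 30
```

Note that `reduce` applies its function as f(accumulator, element), matching fold-left's argument order, which is why the right compose is the one that pre-processes each list element.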

22:28

Stranger Suggests: An Award-Winning Author, Young Bruce Lee, and Clubbing at the Cottage [The Stranger]

One really great thing to do every day of the week! by Julianne Bell MONDAY 2/16  

The Pains of Being Pure at Heart with Living Hour

(MUSIC) The Pains of Being Pure at Heart is one of those indie bands that shaped the soundscape of the late aughts. Their shoegazey, synth-spiked rock blossomed out of New York as the band played shows around the city and shared songs on MySpace (RIP). The Pains—as they're affectionately known—disbanded in 2019 after releasing five albums, but announced a reunion in 2024 to celebrate the 15th anniversary of their debut album with shows across Europe and North America. It's been over 10 years since the group played Seattle, and I can't wait to sing along to every word of "A Teenager in Love," a bouncy track off their self-titled debut fitting for the week after Valentine's Day. Canadian dream pop/fuzzy rock band Living Hour and Portland alt-rock group the Prids round out the lineup. (Vera Project, 7 pm, all ages) SHANNON LUBETICH

TUESDAY 2/17  

Nicola Griffith

(BOOKS) Seattle-based author and self-described “queer cripple with a PhD” Nicola Griffith has received countless honors, including two Washington State Book Awards and six Lambda Literary Awards, and was inducted into MOPOP’s Science Fiction and Fantasy Hall of Fame in 2024. Her novels Hild, Spear, and Menewood explore the medieval era through a queer perspective, and she also cofounded the #CripLit movement with the late activist Alice Wong. Her latest work, She Is Here, is a new installment in PM Press’s Outspoken Authors series, in which “today’s edgiest fiction writers showcase their most provocative and politically challenging stories.” Griffith’s contribution combines fiction, nonfiction, poetry, and artwork to discuss topics ranging from disability justice to the distinction between love and ownership. (Third Place Books Ravenna, 7 pm, all ages, free) JULIANNE BELL

WEDNESDAY 2/18  

History Pub: The Vanguard Generation: African American Artists, 1880-1918

(TALKS) Before the Harlem Renaissance, there was the Vanguard Generation, aka the first wave of Black artists and performers who helped shape American popular culture in the shadow of Jim Crow. Active in the years between the Civil War and World War I, many were the first in their families to be born free (or to attend college), creating art under extraordinary constraints. Drawing from newly uncovered archival documents, scholar Daniel E. Atkinson brings their stories of talent, conflict, and solidarity to life for this unique edition of History Pub. Hosted in partnership with Humanities Washington, this event is a reminder that Black innovation has always been foundational. (Spanish Ballroom, Tacoma, 7 pm, all ages) LANGSTON THOMAS

THURSDAY 2/19  

Young Dragon: A Bruce Lee Story

(PERFORMANCE) Keiko Green is a playwright, screenwriter, and performer who splits her time between Seattle and LA, and has written for TV shows like Hulu’s Interior Chinatown and the upcoming Apple TV series Margo’s Got Money Troubles. Last fall, Seattle hosted productions of two of her plays: Exotic Deadly: Or the MSG Play, a wacky time-traveling comedy set in 1999, and Hells Canyon, a chilling horror thriller. Now, there’s another opportunity to glimpse even more of Green’s impressive range with the Seattle Children’s Theatre premiere of her play Young Dragon, which shows Bruce Lee as an ambitious young man finding his place in the world in Seattle. I’m willing to bet audience members of all ages will be moved by Bruce’s journey to becoming a “flexible, fluid, and flowing master.” Seattle Children’s Theatre recently made the difficult decision to pull a two-week April run of Young Dragon from the Kennedy Center due to the impact of the Trump administration, which makes it even more important to support local productions like this one. (Seattle Children’s Theatre, times vary) JULIANNE BELL

FRIDAY 2/20  

Next Exit

(PERFORMANCE) Meet j. chavez, a Seattle theatre maven who won the KCACTF (Kennedy Center American College Theater Festival)’s National Undergraduate Playwriting Award (whew), for their opus how to clean your room (and remember all your trauma). Their new play Next Exit deals passionately, yet sympathetically, with a man named Miguel trapped on a highway (sans car, I think), who is communing with and deriving philosophical companionship from a dead possum called Orlando. Some deer come out, and a Lady In Yellow, and a sinister force that threatens to eat up anyone and anything lingering too long by the sizzling side of I-5. I’m not clear on how this all flows together. But you should indubitably find out. (Annex Theatre, times vary, all ages) ANDREW HAMLIN

SATURDAY 2/21  

Club 90s: Heated Rivalry

(PARTIES & NIGHTLIFE) I’m grateful for the spark of euphoria that the low-budget Canadian hockey drama Heated Rivalry has brought to the internet over the last few months. No matter what horrifying apocalyptic shit is happening in the news, NO ONE CAN TAKE THE COTTAGE FROM US. Composer Peter Peter’s soundtrack to the blockbuster show is equally as hot-and-heavy and obsession-worthy, and episodes also include some banger needle drops. At this rave, dance to Wolf Parade’s "I'll Believe in Anything,” and reenact the haunted club scene as Harrison’s cover of t.A.T.u.’s “All the Things She Said” blares. This trendy event aims to let off some collective steam and celebrate queer joy—all this sexual tension has to go somewhere outside of streaming “KISS!” at every Kraken game. (The Showbox, 8:30 pm, 18+) BRI BREY

SUNDAY 2/22  

Bitchin Bajas, Geologist

(MUSIC) Don’t be deceived by Chicago trio Bitchin Bajas’ goofy name: They’re one of the world’s headiest groups. Evolving out of neo-krautrockers Cave, BB synthesists Cooper Crain and Dan Quinlivan and saxophonist Rob Frye have been enhancing their melodic chops, creating majestic tracks that would sound righteous filling Europe’s most ornate cathedrals. This past October at Neptune Theatre, they outshone their much more celebrated headliners Stereolab in a set that made me feel as if I were on five hits of Owsley. Animal Collective member Geologist (aka Brian Weitz) just released Can I Get a Pack of Camel Lights?, the follow-up to last year’s arcane, abstracted Americana LP, A Shaw Deal, with Sleepy Doug Shaw. The new hurdy-gurdy-powered album’s a mystical avant-rock trip that I dig more than anything his parent group have done. (Sunset Tavern, 8 pm, 21+) DAVE SEGAL

21:07

Ideas for the fediverse [Scripting News]

Bullet items for the Fediforum conference in March.

  • Subscribing must be easy.
  • Some things will work better if they're slightly centralized, esp subscribing.
  • Use DNS for naming people.
  • Support RSS in and out, and test it once you add the feature; so many easy-to-fix things remain broken (feed titles, for example, look terrible in a list of feeds). RSS is how you earn the "web" in your name. "Web" means something; it's an intention, there are rules.
  • You don't need "open" if you have "web." The web is by definition open. Water is wet. Raises question re what the not-open web is. (Silo.)
  • Support the basic features of text in the web. If you shut off the writing features of the web, as Twitter did, you're not really part of the web. Especially linking.
  • Listen to users, listen to other developers.
  • Automattic is doing heroic work connecting WordPress to ActivityPub. This means that WordPress APIs are now ActivityPub APIs. Not a small thing.
  • Look at the text coming out of WordPress into Mastodon; the HTML used definitely could be improved. These seem like pretty simple things to fix, and the simple things matter. Example: WordPress version. Mastodon version of the same post. Let's make this beautiful!
  • Keep trying fundamentally new architectures.
  • Learn from past mistakes.
  • Interop is paramount.
  • Don't re-invent.

BTW, this can be read on my blog, on Mastodon, in WordPress and of course my feeds (and thus can be read in any app that supports inbound RSS).

20:49

A Lovely Valentine’s Day Dinner At Dozo [Whatever]

If you caught my last two posts over Dozo, Dayton’s premier underground sushi dining experience, then you already know how much I love it. What better way to celebrate the day of love than with Dozo’s special Valentine’s Day 7-course omakase style chef’s menu that offers off-menu selections and limited, intimate seating at the bar so you can watch the chefs work their magic? And trust me, it is indeed magic.

Not only was I extremely excited about the curated sushi menu and brand new sake pairing to go alongside it, but Tender Mercy (the bar that houses Dozo) posted their Valentine’s Day cocktail line-up a few days ago, and it looked incredible, as well.


Long story short, I knew my tastebuds were in for a real treat.

I booked the 8:30pm slot on their first day of offering this menu, which was Tuesday. Getting a later start to dinner than usual only made me that much hungrier for what was to come.

I got to Tender Mercy about twenty minutes early, so I just had a seat at their bar and perused the special cocktail menu:

A small paper menu listing Tender Mercy's Valentine's Day cocktails. There's a detailed border in the corners and two Cupid-esque angels in the top corners. There's four cocktails and one NA cocktail listed.

I love this dessert cocktail menu because whatever your poison is, they’ve got it. A gin drink, a vodka cocktail, even tequila and bourbon. And, of course, a mocktail. They all sounded so delicious but also very rich, and I didn’t want to spoil my appetite with something on the heavier side (like that cheesecake foam, YUM), so I actually opted for the Pillow Princess and asked the bartender to put his spirit of choice in it. He recommended Hennessy Cognac (I’m pretty sure it was Hennessy Very Special, but I’m just guessing from the brief look I got at the bottle).

I can’t say I’ve had Cognac all that much, but the sweet, almost vanilla-like flavor of the Hennessy worked super well in it.

A small rocks glass with an orange-ish yellow liquid in it with a little bit of a foamy layer on top. There's a metal cocktail pick with raspberries and blueberries on it on top. The drink sits on a black, leather-looking bar and the beautifully lit wood and glass shelves of the bar can be seen in the background.

I’m glad I went with the bartender’s recommendation; he’s truly a pro and has never steered me wrong before, so I trust his judgement a hundred percent.

After a few minutes, it was time to get seated in Dozo. There were only six of us total at the bar, a group of three on my right and a couple on my left. Our menu was tucked into our envelope shaped napkin and I briefly surveyed what was going to be served.

A small paper menu labeled

Truly the most eye-catching dish was the wasabi ice cream. Listen, I trust Dozo, but man, did that sound absolutely bonkers. I held strong in my faith, though.

Per usual, I went with the sake pairing, because when else do I get to try so many different expertly curated sakes? Plus, the chef said he tried each of the sake pairings and highly recommended it.

Up first was a spicy salmon onigiri:

A big ol' triangle of onigiri. The rice is more of like a brownish color instead of pure white, with visible flecks of seasoning throughout. It's served on a small square matte black plate.

I wasn’t sure how spicy the salmon would actually end up being, so I had my water on standby. After getting through the warm, soft, perfectly seasoned rice, I was met with a generously portioned salmon filling that wasn’t at all too spicy! This onigiri was hands down the best one I’ve ever had, though I will admit my experience is rather limited in that department. Of course, it’s not every day I have an onigiri, but this one definitely takes the cake.

For the sake pairing I was served Amabuki’s “I Love Sushi” Junmai. Obviously, this is a fantastic name for a sake. It says all you need to know about it right in the name, plain and simple. Jokes aside, this was a perfectly fine sake. With a dry, crisp flavor, it didn’t really stand out to me much but paired well with the umami flavor of the onigiri.

Off to a great start (I expected no less), the second course was looking mighty fine:

Three pieces of nigiri in a row on a rectangular matte black plate.

From left to right, we have hamachi (yellowtail), hirame (flounder), and skipjack tuna. The hamachi’s wasabi sauce packed a ton of great wasabi flavor without painfully clearing my sinuses. It had just the right amount of strength, a very balanced piece. The flounder was exceptionally tender with a melt-in-your-mouth texture. The skipjack has always been a tried and true classic in my previous Dozo experiences, and today’s serving of it was no different. All around a total winner of a course, with tender, umami packed pieces.

To accompany this course, I was served Takatenjin “Soul of the Sensei,” which is a Junmai Daiginjo. This sake is made with Yamadanishiki, which is considered to be the king of sake rice. “Soul of the Sensei” was created as a tribute to revered sake brewer Hase Toji. Much like the first sake we were served, it was crisp with a slight dryness, pairing well with the fresh fish and savory flavors. It had just a touch of melon.

Up next was this smaller course with a piece of chu toro and a piece of smoked hotate:

Two pieces of nigiri on a small round black plate, one piece a dark pink fatty tuna and the other an orangeish beige colored piece of smoked scallop.

Both pieces looked stunning and fresh. The chef explained that chu toro is the fatty belly meat of the tuna, which is a more prized and delicious cut, a real treat. Indeed, it was very buttery and had a rich mouthfeel. I didn’t know what hotate was, but it turns out it’s a scallop, and I think they mentioned something about hotate scallops coming from a specific region in Japan, but I might be misremembering. Anyways, I love scallops, but I’ve definitely never had one that’s been smoked before. It was fun to watch the chef smoke all of the pieces before dishing them out.

Oh my goodness, this piece was incredible. It had a luscious texture and a complex, beautiful smokiness that didn’t detract from the flavor of the scallop. It was a masterfully smoked piece of high-quality, fresh scallop. Remarkable piece! Great course all around.

Instead of sake for this course, we were served a shot of Suntory whisky, but I have no idea which type specifically. Maybe the Toki? But it also very well could’ve been the Hibiki Harmony, because the shot was definitely a dark, ambery color. I wish I had a palate for whiskey, especially the premium Japanese whisky that the kitchen so generously bestowed upon each guest, but truthfully it was a tough couple of sips for me. Like fire in my throat; that shit put some damn hair on my chest. Super grateful for the lovely whisky, but sheesh, it definitely burned. The chefs actually took the shot with us, how fun!

Fear not, there was some lovely mushroom and yuzu ramen on the way to ease the pain:

A beautiful stoneware bowl filled with ramen noodles and a lovely broth, garnished with small green onion pieces.

This ramen is actually vegetarian, made with umami-packed mushrooms and bright yuzu citrus. The green onions and drops of chili oil drizzled on top added a fantastic balance of flavors for a well-rounded, hearty, warm bowl of delicious ramen that was good to the last drop. I wish they had ramen more often, it was so great to sip on some warm broth while it was below freezing outside. I absolutely loved the stoneware bowl it was served in, I would love to have something like that in my own kitchen.

For the sake, this one was truly special: Hana Makgeolli “MAQ8 Silkysonic.” Look how CUTE these cans are! These adorable single-serve cans contain a fun, slightly bubbly, just-a-touch-sweet sake that was a great addition to the night’s line-up. At 8%, it’s a bit lower in alcohol than some other sakes, so you can enjoy more than one can of this bubbly goodness if you so desire.

I was definitely pretty full by this point, but I powered on for this next course consisting of some torched sake, unagi, and suzuki.

Three pieces of nigiri lined up on a black rectangular matte plate.

It was a little confusing that the first piece of fish in this lineup was called sake, since I assumed sake was just the drink we all know and love, but sake is also the Japanese word for salmon. It was fun to watch the chefs use a blowtorch on the salmon, as any course involving fire is a great course. The salmon had a sauce on top that, I hate to say, I can’t remember exactly. I know, I had one job! I should’ve taken better notes, but there was so much going on between being served the sake, having its specifics explained, the chefs walking us through the whole course, and the couple next to me chatting with me (we had lovely conversations). It was a lot, okay! Sauce aside, the salmon was excellent and beautifully torched.

For the unagi, I actually love eel, so I knew this piece was about to be bomb. With the sweet, thick glaze on top and fresh slice of jalapeno, this piece was loaded with deliciousness. I was worried the jalapeno slice would bring too much heat to the dish for me, but it was perfect and not hot at all, just had great flavor.

The final piece, suzuki, is Japanese sea bass. There is a small pickled red onion sliver on top (it is not a worm, to be clear). Apparently, the Japanese sea bass is known by different names depending on how mature the fish is, suzuki being the most mature stage. This piece was very simply dressed, and the tender fish spoke for itself.

The sake for this course was Tentaka’s “Hawk in the Heavens” Tokubetsu Junmai. Much like with the food of this course, I should have taken better notes, because I don’t remember this sake at all. I don’t remember what it tasted like, my thoughts on it, nothing. I didn’t even remember the name until I looked at the menu again. I am so sorry, it is truly only because it was the sixth course and I had just taken a shot and was busy talking! Forgive me and we shall move on.

For our last savory course, it was two pieces of the chef’s choice:

Two pieces of nigiri, one being fish and one being wagyu.

The chefs said in honor of it being Valentine’s Day, they wanted to give us a bit more of a lux piece, and opted for wagyu and torched toro. Sending off the savory courses with wagyu was truly a delight, it really provided the turf in “surf and turf.” Every time I’ve had wagyu from Dozo it’s been so tender and rich, the fat just melting in my mouth. It’s also a fun novelty since I don’t really have wagyu anywhere else.

Finally, it was time for dessert. I couldn’t wait to try the wasabi ice cream:

A small glass coupe shaped bowl holding the wasabi ice cream. There's crushed wasabi peas on top.

I would’ve never imagined that wasabi ice cream could be even remotely edible, let alone enjoyable, but oh my gosh. Oh my gosh. How was this so good?! The creaminess contrasting with the crunchy wasabi peas, the perfect amount of sweetness mixing with the distinct flavor of the wasabi, LORD! It was incredibly, bizarrely delicious. The wasabi didn’t have that sinus-clearing bite to it, yet retained its unmistakable flavor. What a treat.

For the final sake, I was served Kiuchi Brewery’s “Awashizuki” Sparkling Sake. I was particularly excited for this one because I love sparkling sakes, they are undoubtedly my favorite category of sake. Anything with bubbles is just better! I will say that the Awashizuki seemed to be much more lowkey on the bubbles than some other sparkling sakes I’ve had before. The bubbles were a bit more sparse and toned down, but it was still lightly carbonated enough that you could tell it wasn’t still. It was sweeter and more refreshing than the others in the evening’s lineup, which makes sense since it was the dessert course pairing. I really liked this one!

All in all, I had yet another fantastic experience at Dozo, and I absolutely loved their Valentine’s Day lineup. The limited seating at the bar made it feel all the more exclusive and special, and every course was totally delish. I got to try lots of new sakes and have really nice chats with the people next to me, and really just had a great evening all around.

The ticket for this event was $95; after an added 18% gratuity and taxes, it was more like $125. The sake pairing was $50, and I also tipped the waitress who was pouring the pairings and telling me about them. It was definitely a bit of a splurge event but hey, it was for V-Day! Gotta treat yourself. And I’m so glad I did!

Which piece of fish looks the most enticing to you? Or perhaps the ramen is more your speed? Have you tried any of the sakes from the lineup? Let me know in the comments, and have a great day!

-AMS

20:21

Scarathon [Penny Arcade]

Fortnite oversaw the transition of Battle Royale from game to genre, and I think ARC Raiders performed the same trick for Extraction - and in a very similar way. Tarkov and PUBG are both loping, all fours, half-clad man beasts with their dicks out in a public park. Their own skin feels too tight, somehow; they're scratching themselves on the rough bark of trees just to get a moment of release. Fortnite and ARC Raiders are, by way of comparison, videogames.

20:07

Philipp Kern: What is happening with this "connection verification"? [Planet Debian]

You might see a verification screen pop up on more and more Debian web properties. Unfortunately the AI world of today is meeting web hosts that use Perl CGIs and are not built as multi-tiered scalable serving systems. The issues have been at three layers:

  1. Apache's serving capacity runs full - with no threads left to serve requests. This means that your connection will sit around for a long time, not getting accepted. In theory this can be configured, but that would require requests to be handled in time.
  2. Startup costs of request handlers are too high, because we spawn a process for every request. This currently affects the BTS and dgit's browse interface. packages.debian.org has been fixed, which increased scalability sufficiently.
  3. Requests themselves are too expensive to be served quickly - think git blame without caching.

Optimally we would go and solve some scalability issues with the services, however there is also a question of how much we want to be able to serve - as AI scraper demand is just a steady stream of requests that are not shown to humans.

How is it implemented?

DSA has now stood up some VMs with Varnish for proxying. Incoming TLS is provided by hitch, and TLS "on-loading" is done using haproxy. That way TLS goes in and TLS goes out. While Varnish does cache if the content is cacheable (e.g. it does not depend on cookies), that is not the primary reason for using it: it can be used for flexible query and response rewriting.

If no cookie with a proof of work is provided, the user is redirected to a challenge page that does some webcrypto in Javascript - because that looked similar to what other projects do (e.g. haphash that originally inspired the solution). However, so far it looks like scrapers generally do not run with Javascript enabled, so this whole crypto proof of work business could probably be replaced with just a Javascript-based redirect. The existing solution also has big (security) holes in it. And, as we found out, Firefox is slower at webcrypto than Chrome. I have recently reduced the complexity, so you should notice it blocking you significantly less.
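The shape of that proof-of-work scheme can be sketched in miniature. This is a toy under loud assumptions: the real challenge page hashes with SHA-256 via webcrypto in the browser, while this sketch substitutes the non-cryptographic std::hash so it stays self-contained, and solve_pow/verify_pow are invented names, not Debian's code.

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Toy proof of work: find a nonce whose hash (together with the server's
// challenge) has its low `difficulty_bits` bits all zero. std::hash is a
// stand-in for SHA-256 and is NOT cryptographic -- illustration only.
std::uint64_t solve_pow(const std::string& challenge, unsigned difficulty_bits) {
    const std::uint64_t mask = (1ULL << difficulty_bits) - 1;
    std::hash<std::string> h;
    for (std::uint64_t nonce = 0;; ++nonce) {
        if ((h(challenge + std::to_string(nonce)) & mask) == 0)
            return nonce;  // this value would be sent back in the cookie
    }
}

// The server repeats a single hash to verify, so checking is cheap even
// though solving took many attempts -- the asymmetry that taxes scrapers.
bool verify_pow(const std::string& challenge, std::uint64_t nonce,
                unsigned difficulty_bits) {
    const std::uint64_t mask = (1ULL << difficulty_bits) - 1;
    return (std::hash<std::string>{}(challenge + std::to_string(nonce)) & mask) == 0;
}
```

Each extra difficulty bit doubles the expected number of attempts the browser has to make, which is presumably the knob behind "reduced the complexity" above.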

Once you have the cookie, you can keep accessing the site for as long as the cookie is valid. Please do not make any assumptions about the cookies, or you will be broken in the future.

For legitimate scrapers that obey robots.txt, there is now an automatically generated IP allowlist in place (thanks, Marco d'Itri). Turns out that the search engines do not actually run Javascript either and then loudly complain about the redirect to the challenge page. Other bots are generally exempt.

Conclusion 

I hope that right now we found sort of the sweet spot where the admins can stop spending human time on updating firewall rules and the services are generally available, reasonably fast, and still indexed. In case you see problems or run into a block with your own (legitimate) bots, please let me know.

19:21

[$] Open source security in spite of AI [LWN.net]

The curl project has found AI-powered tools to be a mixed bag when it comes to security reports. At FOSDEM 2026, curl creator and lead developer Daniel Stenberg used his keynote session to discuss his experience receiving a slew of low-quality reports and, at the same time, realizing that large language model (LLM) tools can sometimes find flaws that other tools have missed.

Slog AM: The Government’s Out of Money Again, Two Immigrants Were Released from Tacoma’s Detention Center, and Who Knows, We Might Get Some Snow [The Stranger]

The Stranger's morning news roundup. by Hannah Murphy Winter

Good Morning! It’s Presidents Day, which our president is celebrating with an AI-generated Time magazine cover and the quote: “I was the hunted, and now I’m the hunter.” This is probably what George Washington had in mind, right?

The Weather: We had our taste of False Spring, and now we’re back to winter for a bit. Highs in the 40s, lows right around freezing, and we might even get a little snow later in the week. 

Some Good News: ICE released Wilmer Toledo-Martinez, a Vancouver, WA man who was mauled by an ICE dog in December, from the Northwest Detention Center. He still has to continue his immigration case, but he’s doing it from home with his wife (who is an American citizen) and three kids. 

And He Wasn’t the Only One: Greggy Sorio, a Filipino immigrant who came to the US on a green card, had to lose a part of his foot to infection, bleed out of his rectum for a month, and lose a “dramatic” amount of weight before a judge demanded that he be released from the Northwest ICE Processing facility in Tacoma. Sorio is able to access real medical care now, but he’s still at risk of deportation. 

Another Shutdown: In a Valentine’s gift to us all, the Department of Homeland Security technically ran out of funds on Saturday while Dems in Congress try to fight for some limitations on ICE’s funding. Unfortunately, ICE and Border Patrol will barely be affected. And nearly 85 percent of FEMA employees and 95 percent of TSA’s are expected to work without pay through the shutdown. 

ICYMI: Last week was a really good week for local anti-ICE legislation. City Councilmember Alexis Mercedes Rinck introduced a moratorium on new detention centers in Seattle, and the County and the Port are blocking immigration agents from using their non-private land. Next up: let’s talk about the CCTV and ALPR cameras. 

Trump Bombs 39th Boat: On Saturday, the military bombed another supposed narco-trafficking boat in the Caribbean. This illegal 5-month campaign, theoretically to fight the drug trade, has killed 133 people. This bombing killed three.

Pity the Millionaires: According to the Seattle Times’s Danny Westneat this weekend, taxing the rich is generally a popular and successful proposition. We learned that last week when, it turns out, the tax meant to fund our Social Housing Developer brought in more than double what was projected in its first year. (As Mayor Katie Wilson put it, this city is “filthy rich.”) And we know that the Millionaire Tax currently scooching through the leg is wildly popular. But Dems in the state leg (and Jamie Pedersen, specifically) are still considering a rollback for our Estate Tax to avoid the myth of the Fleeing Rich People.

Sheriff Certification: Right now in our state law, elected sheriffs are required to get certified by the Washington State Criminal Justice Training Commission within a year of taking office. Seems reasonable, right? But right now, if they just… don’t do it, there’s nothing anyone can do about that. The state legislature introduced a bill that would oust sheriffs who aren’t certified. So naturally, Pierce County’s hyperconservative, transphobic Sheriff Keith Swank thinks it’s unfair. 

A Headline From Popular Mechanics to Break Up the Doldrums Today: “Jesus Was a Psychedelic Mushroom, a Controversial Theory Suggests. Could It Reshape Christianity Forever?” 

Olympic Breakdown: NBC spent the first half of the games talking about American figure skater Ilia “Quad God” Malinin as the new face of the sport and the inevitable gold medalist. And he is the only person who’s ever landed a quad axel in an international competition. But in his final skate in the competition, the 21-year-old fell twice, struggled to deliver any of the quad jumps he’s famous for, and ended up placing eighth in the competition. Watching reporters try to make him explain what happened within minutes of his walking off the ice was brutal, and he handled it with a lot of grace. He told the Athletic that he was feeling overwhelmed when he got onto the ice. “I just felt like all the just traumatic moments of my life really just started flooding my head,” he said. “And there’s just like so many negative thoughts that just flooded into there and I just did not handle them.” We’ll see him again in four years, and by then he’ll surely have figured out how to fight the yips. 

The Curlers Are Fighting: Both the men’s and women’s Canadian curling teams were accused of cheating—both for getting too handsy with the stone after they released it. And if you’ve watched curling, you know it’s a very mild-mannered sport (they’ve got brooms for fuck’s sake), but the head of the Men’s curling team threw around enough “fucks” that news reports called the exchange NSFW. 

Wanna watch some of the action for yourself? Our local Granite Curling Club is throwing watch parties all weekend. 

They Don’t Make ’Em Like They Used to: Naturally, when Olympians medal, they fuckin’ party. And who would take their medal off?? But it turns out, someone cut some corners on this year’s medals, and they’re popping right off their ribbons while the athletes celebrate. “Don’t jump in them. I was jumping in excitement and it broke,” said women’s downhill ski gold medalist Breezy Johnson. “I’m sure somebody will fix it. It’s not crazy broken but a little broken.”

Fun Olympics Fact: There’s a move in ice dancing called a twizzle. You’re welcome. 

 


17:35

Companies House ID checks [RevK®'s ramblings]

Apparently this petition is confusing a few people. So trying to explain.

At the simplest level Companies House have to ID people now - directors and persons with significant control (PSC). This seems not that unreasonable to be honest.

ID means somehow proving a real ID, and that has a lot of issues - but they have some government ID app, or you can take ID to a post office or some such. The actual ID process is not the issue, and having to have a proper ID to be a director or PSC is not that daft - Companies House have always published the identity of people behind companies. These days there is more privacy over things like actual date of birth and home address, thankfully. But the names of company directors and PSC are a matter of public record.

So the new system means proved ID for director or PSC. Simples. You would think.

But no!

The reason for the petition, and my concern is simple.

My wife is PSC for our company. No problem. We know next return is June. So no action, surely?

Companies House have a deadline for proving her ID, and the confusion here is that it is not the same as the annual return, or the deadline for me to prove my ID as director. So we did not expect it to be an issue.

Turns out the deadline for proving ID for PSC is 14th of month of birth, so for her, last December.

Well, Companies House could have let her know - but they CHOSE NOT TO. Instead they waited until the deadline was past and then sent her a letter basically saying she was now a CRIMINAL.

The letter was actually very very badly worded, and it seems that doing the ID process promptly and before end of December was enough to shut them up, thankfully. But from what I can see my wife is technically a criminal for not having met the deadline.

Someone else I know nearly had their bank accounts frozen over this even.

Of course, I was a tad panicked, and so wanted to sort my ID at companies house.

There is a snag.

I can't.

This is what the petition is about.

The stupidity.

I have to wait. I cannot prove my ID now!

I have to wait (1) until 1st of month of my birthday as PSC for some other company. And (2) July for many other companies for which I am director and PSC.

I have done my ID as a PSC now, but not as director, so I have to do it again, and I cannot do that now. I cannot do it in one go. I cannot do it BEFORE the 14 day window.

I fully understand a legal deadline.

I do not understand a startline.

I do not understand why I cannot prove my ID now, and be done with it.

The only possible reason is to catch people out and make them unwilling criminals.

FFS 14 days! People have holidays. People can be off sick.

That is what the petition is about.

17:07

Four stable kernels to fix problematic commit [LWN.net]

Greg Kroah-Hartman has released the 6.19.2, 6.18.12, 6.12.73, and 6.6.126 stable kernels. These kernels each contain a single change; Kroah-Hartman has reverted one problematic commit that prevented some systems from booting. "If the last stable release worked just fine, no need to upgrade."

15:07

New RSS feature from Manton [Scripting News]

A few days ago I asked Manton Reece if he could add a feature that gave me a feed of replies to me on his service, micro.blog.

  • I post a lot of stuff to micro.blog via my linkblog RSS feed. Every one of those items can be commented on. But unless I visit micro.blog regularly, I don't see the comments. I guess people have mostly figured out that I'm an absent poster, and don't say anything. Even so, there are some replies. Wouldn't it be great if the responses could show up in my blogroll. And of course if there was an RSS feed of the replies, I would see them when I was looking for something possibly interesting, one of the main reasons I have a blogroll, and keep finding new uses for it.

The feed is there now, I'm subscribed and new comments are posted in the feed and Murphy-willing I will see them. Bing!

It's a killer feature for sure. But the best part of it is this -- here are two developers working together. This is how the web works when it's working.

BTW a suggestion. Right now the title on my feed is:

  • Micro.blog - dave mentions

That's a problem in the limited horizontal space in the blogroll. A more useful title would be:

  • "dave" mentions on micro.blog

BTW, if you were building a social network out of RSS this would be an essential feature. It also validates Manton's intuition to allow people like me to be absentee publishers to his community. But the missing piece was allowing the conversation to be two-way, which it now is. That deserves another bing!

CodeSOD: C+=0.25 [The Daily WTF]

A good C programmer can write C in any language, especially C++. A bad C programmer can do the same, and a bad C programmer will do all sorts of terrifying things in the process.

Gaetan works with a terrible C programmer.

Let's say, for example, you wanted to see if an index existed in an array, and return its value- or return a sentinel value. What you definitely shouldn't do is this:

    double Module::GetModuleOutput(int numero) {
        double MAX = 1e+255 ;
        if (this->s.sorties+numero )
            return this->s.sorties[numero];
        else
            return MAX ;
    }

sorties is an array. In C, you may frequently do some pointer arithmetic operations, which is why sorties+numero is a valid operation. If we want to be pedantic, *(my_array+my_index) is the same thing as my_array[my_index]. Which, it's worth noting, both of those operations de-reference an array, which means you better hope that you haven't read off the end of the array.

Which is what I suspect their if statement is trying to check against. They're ensuring that this->s.sorties+numero is not a zero/false value. Which, if s.sorties is null and numero is zero, that check will work. Otherwise, that check is useless and does nothing to ensure you haven't read off the end of the array.

Which, Gaetan confirms. This code works "because in practice, GetModuleOutput is called for numero == 0 first." It never de-references off the end of the array, not because of defensive programming, but because it just never comes up in actual execution.

Regardless, if everything is null, we return 1e+255, which is not a meaningful value, and should be treated like a sentinel for "no real value". None of the calling code does that, however, but also, it turns out not to matter.
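For contrast, here is a minimal sketch of a check that actually guards the de-reference. It is hypothetical, not the article's code: the nb_sorties length field is an assumed addition (the original class apparently stores no length), and the free-standing struct stands in for the Module class.

```cpp
#include <cstddef>

// Explicit sentinel meaning "no real value", as in the article.
constexpr double MAX_SENTINEL = 1e+255;

// Assumed shape: the array plus a stored element count.
struct Sorties {
    const double* sorties;   // output array
    std::size_t nb_sorties;  // element count (assumed field)
};

double GetModuleOutput(const Sorties& s, int numero) {
    // A real bounds check: reject a null array, a negative index, and an
    // index past the end, before de-referencing anything.
    if (s.sorties != nullptr && numero >= 0
            && static_cast<std::size_t>(numero) < s.nb_sorties)
        return s.sorties[numero];
    return MAX_SENTINEL;
}
```

With a stored length, the sentinel comes back for every bad input rather than by luck of call order.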

This pattern is used everywhere there are arrays, except the handful of places where this pattern is not used.

Then there's this one:

    if(nb_type_intervalle<1)    { }
    else 
        if((tab_intervalle=(double*)malloc(nb_lignes_trouvees*nb_type_intervalle*2 \
                                                        *sizeof(double)))==NULL)
            return(ERREUR_MALLOC);

First, I can't say I love the condition here. It's confusing to have an empty if clause. if (nb_type_intervalle>=1) strikes me as more readable.

But readability is boring. If we're in the else clause, we attempt a malloc. While using malloc in C++ isn't automatically wrong, it probably is. C++ has its own allocation methods that are better at handling things like sizes of datatypes. This code allocates memory for a large pile of doubles, and stores a pointer to that memory in tab_intervalle. We do all this inside of an if statement, so we can then check that the resulting pointer is not NULL; if it is, the malloc failed and we return an error code.

The most frustrating thing about this code is that it works. It's not going to blow up in surprising ways. I never love doing the "assignment and check" all in one statement, but I've seen it enough times that I'd have to admit it's idiomatic, at least in C-style programming. But that bit of code golf coupled with the pointlessly inverted condition that puts our main logic in the else just grates against me.
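Un-inverting the condition and separating the assignment from the test might look like the following sketch. The function wrapper, OK, and ERREUR_MALLOC's value are assumptions for illustration; the variable names come from the article.

```cpp
#include <cstdlib>

constexpr int OK = 0;
constexpr int ERREUR_MALLOC = -1;  // assumed value, for illustration

int allocate_intervalles(int nb_lignes_trouvees, int nb_type_intervalle,
                         double** tab_intervalle) {
    // Positive condition first: nothing to allocate, nothing to do.
    if (nb_type_intervalle < 1) {
        *tab_intervalle = nullptr;
        return OK;
    }
    // Same arithmetic as the original, widened before multiplying.
    std::size_t n = static_cast<std::size_t>(nb_lignes_trouvees)
                  * nb_type_intervalle * 2;
    *tab_intervalle = static_cast<double*>(std::malloc(n * sizeof(double)));
    if (*tab_intervalle == nullptr)
        return ERREUR_MALLOC;
    return OK;
}
```

Same behavior, but the allocation, the null check, and the error return each sit on their own line.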

Again, that pattern of the inverted conditional and the assignment and check is used everywhere in the code.

Gaetan leaves us with the following:

Not a world-class WTF. The code works, but is a pain in the ass to inspect and document.

In some ways, that's the worst situation to be in: it's not bad enough to require real work to fix it, but it's bad enough to be frustrating at every turn.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

14:49

Four stable kernels for Monday [LWN.net]

Greg Kroah-Hartman has announced the release of the 6.19.1, 6.18.11, 6.12.72, and 6.6.125 stable kernels. As always, each contains important fixes throughout the tree; users of these kernels are advised to upgrade.

[$] Compact formats for debugging—and more [LWN.net]

At the 2025 Linux Plumbers Conference in Tokyo, Stephen Brennan gave a presentation on the debuginfo format, which contains the symbols and other information needed for debugging, along with some alternatives. Debuginfo files are large and, he believes, are a bit scary to customers because of the "debug" in their name. By rethinking debuginfo and the tools that use it, he hopes that free-software developers "can add new, interesting capabilities to tools that we are already using or build new interesting tools".

14:21

Link [Scripting News]

My Twitter account has been hijacked. I can't log on, or change the password. I can't communicate with the company, so I'll try here. Please shut down my account, davewiner. To my friends who have Twitter accounts, if you see a post from davewiner on Twitter, please reply and let the people who see it know that it isn't from me.

Reducing tab clutter in Drummer [Scripting News]

In Drummer, when I get too many tabs open from things I haven't looked at in a while, this is what I do.

  1. I choose Add Bookmark from the Bookmarks menu
  2. The menu opens with the new bookmark at the top of the list
  3. If it's the first time I press Return and enter "Tabs I Closed Recently"
  4. Then I drag the new bookmark under that headline.
  5. Close the Bookmarks tab.
  6. Remove the tab I just bookmarked.
  7. Voila! Clutter Reduced.

14:07

Security updates for Monday [LWN.net]

Security updates have been issued by Debian (chromium, pdns-recursor, python-django, and wireshark), Fedora (gnutls, linux-sgx, mingw-expat, nginx, nginx-mod-brotli, nginx-mod-fancyindex, nginx-mod-headers-more, nginx-mod-modsecurity, nginx-mod-naxsi, nginx-mod-vts, p11-kit, python-aiohttp, vim, and xen), Red Hat (kernel, kernel-rt, python-s3transfer, python-urllib3, and resource-agents), SUSE (aaa_base, abseil-cpp, build-20260202, cargo-auditable, cargo-c, chromedriver, cockpit, cockpit-packages, cockpit-subscriptions, curl, elemental-toolkit, elemental-operator, gnome-remote-desktop, go1.24, go1.25, gpg2, haproxy, himmelblau, htmldoc, ImageMagick, iperf, java-1_8_0-openjdk, kernel, krb5, kubevirt, libowncloudsync-devel, libpng16-16, libsodium, libsoup, libsoup2, micropython, net-snmp, opencryptoki, openjfx, openssl1, ovmf, postgresql14, postgresql15, postgresql16, protobuf, python-aiohttp, python-brotli, python-maturin, python-pip, python-urllib3, python310, python311, python-rpm-macros, python311-cryptography, python314, screen, systemd, u-boot, util-linux, and vim), and Ubuntu (dotnet8, dotnet10, expat, freerdp2, freerdp3, and python-aiohttp).

12:35

The Promptware Kill Chain [Schneier on Security]

The promptware kill chain: initial access, privilege escalation, reconnaissance, persistence, command & control, lateral movement, action on objective

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques to embed instructions into inputs to an LLM, intended to perform malicious activity. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we, the authors, propose a structured seven-step “promptware kill chain” to provide policymakers and security practitioners with the necessary vocabulary and framework to address the escalating AI threat landscape.

In our model, the promptware kill chain begins with Initial Access. This is where the malicious payload enters the AI system. This can happen directly, where an attacker types a malicious prompt into the LLM application, or, far more insidiously, through “indirect prompt injection.” In the indirect attack, the adversary embeds malicious instructions in content that the LLM retrieves (obtains in inference time), such as a web page, an email, or a shared document. As LLMs become multimodal (capable of processing various input types beyond text), this vector expands even further; malicious instructions can now be hidden inside an image or audio file, waiting to be processed by a vision-language model.

The fundamental issue lies in the architecture of LLMs themselves. Unlike traditional computing systems that strictly separate executable code from user data, LLMs process all input—whether it is a system command, a user’s email, or a retrieved document—as a single, undifferentiated sequence of tokens. There is no architectural boundary to enforce a distinction between trusted instructions and untrusted data. Consequently, a malicious instruction embedded in a seemingly harmless document is processed with the same authority as a system command.
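A minimal sketch of that architectural point, with entirely hypothetical names: the prompt handed to the model is one flat string, so an instruction hidden in a retrieved document arrives with exactly the same standing as the system's own instructions.

```cpp
#include <string>

// The model sees one undifferentiated sequence. Nothing in the data
// structure marks where trusted instructions end and untrusted
// retrieved content begins -- the delimiters below are mere text, not
// an enforced boundary.
std::string build_prompt(const std::string& system_instructions,
                         const std::string& retrieved_document,
                         const std::string& user_question) {
    return system_instructions
         + "\n\nContext:\n" + retrieved_document
         + "\n\nQuestion:\n" + user_question;
}
```

Anything an attacker plants in retrieved_document lands verbatim in the token stream the model reasons over, which is why injection is a property of the architecture rather than a bug in any one product.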

But prompt injection is only the Initial Access step in a sophisticated, multistage operation that mirrors traditional malware campaigns such as Stuxnet or NotPetya.

Once the malicious instructions are inside material the model ingests, the attack transitions to Privilege Escalation, often referred to as “jailbreaking.” In this phase, the attacker circumvents the safety training and policy guardrails that vendors such as OpenAI or Google have built into their models. Through techniques ranging from analogues of social engineering (convincing the model to adopt a persona that ignores rules) to sophisticated adversarial suffixes in the prompt or data, the promptware tricks the model into performing actions it would normally refuse. This is akin to an attacker escalating from a standard user account to administrator privileges in a traditional cyberattack; it unlocks the full capability of the underlying model for malicious use.

Following privilege escalation comes Reconnaissance. Here, the attack manipulates the LLM to reveal information about its assets, connected services, and capabilities. This allows the attack to advance autonomously down the kill chain without alerting the victim. Unlike reconnaissance in classical malware, which is performed typically before the initial access, promptware reconnaissance occurs after the initial access and jailbreaking components have already succeeded. Its effectiveness relies entirely on the victim model’s ability to reason over its context, and inadvertently turns that reasoning to the attacker’s advantage.

Fourth: the Persistence phase. A transient attack that disappears after one interaction with the LLM application is a nuisance; a persistent one compromises the LLM application for good. Through a variety of mechanisms, promptware embeds itself into the long-term memory of an AI agent or poisons the databases the agent relies on. For instance, a worm could infect a user’s email archive so that every time the AI summarizes past emails, the malicious code is re-executed.

The Command-and-Control (C2) stage relies on the established persistence and dynamic fetching of commands by the LLM application in inference time from the internet. While not strictly required to advance the kill chain, this stage enables the promptware to evolve from a static threat with fixed goals and scheme determined at injection time into a controllable trojan whose behavior can be modified by an attacker.

The sixth stage, Lateral Movement, is where the attack spreads from the initial victim to other users, devices, or systems. In the rush to give AI agents access to our emails, calendars, and enterprise platforms, we create highways for malware propagation. In a “self-replicating” attack, an infected email assistant is tricked into forwarding the malicious payload to all contacts, spreading the infection like a computer virus. In other cases, an attack might pivot from a calendar invite to controlling smart home devices or exfiltrating data from a connected web browser. The interconnectedness that makes these agents useful is precisely what makes them vulnerable to a cascading failure.

Finally, the kill chain concludes with Actions on Objective. The goal of promptware is not just to make a chatbot say something offensive; it is often to achieve tangible malicious outcomes through data exfiltration, financial fraud, or even physical world impact. There are examples of AI agents being manipulated into selling cars for a single dollar or transferring cryptocurrency to an attacker’s wallet. Most alarmingly, agents with coding capabilities can be tricked into executing arbitrary code, granting the attacker total control over the AI’s underlying system. The outcome of this stage determines the type of malware executed by promptware, including infostealer, spyware, and cryptostealer, among others.

The kill chain was already demonstrated. For example, in the research “Invitation Is All You Need,” attackers achieved initial access by embedding a malicious prompt in the title of a Google Calendar invitation. The prompt then leveraged an advanced technique known as delayed tool invocation to coerce the LLM into executing the injected instructions. Because the prompt was embedded in a Google Calendar artifact, it persisted in the long-term memory of the user’s workspace. Lateral movement occurred when the prompt instructed the Google Assistant to launch the Zoom application, and the final objective involved covertly livestreaming video of the unsuspecting user who had merely asked about their upcoming meetings. C2 and reconnaissance weren’t demonstrated in this attack.

Similarly, the “Here Comes the AI Worm” research demonstrated another end-to-end realization of the kill chain. In this case, initial access was achieved via a prompt injected into an email sent to the victim. The prompt employed a role-playing technique to compel the LLM to follow the attacker’s instructions. Since the prompt was embedded in an email, it likewise persisted in the long-term memory of the user’s workspace. The injected prompt instructed the LLM to replicate itself and exfiltrate sensitive user data, leading to off-device lateral movement when the email assistant was later asked to draft new emails. These emails, containing sensitive information, were subsequently sent by the user to additional recipients, resulting in the infection of new clients and a sublinear propagation of the attack. C2 and reconnaissance weren’t demonstrated in this attack.

The promptware kill chain gives us a framework for understanding these and similar attacks; the paper characterizes dozens of them. Prompt injection isn’t something we can fix in current LLM technology. Instead, we need an in-depth defensive strategy that assumes initial access will occur and focuses on breaking the chain at subsequent steps, including by limiting privilege escalation, constraining reconnaissance, preventing persistence, disrupting C2, and restricting the actions an agent is permitted to take. By understanding promptware as a complex, multistage malware campaign, we can shift from reactive patching to systematic risk management, securing the critical systems we are so eager to build.

This essay was written with Oleg Brodt, Elad Feldman and Ben Nassi, and originally appeared in Lawfare.

11:14

Grrl Power #1435 – Exakshually I’m succusplaining it [Grrl Power]

This was supposed to be the second half of the prior page, but in addition to having a lot of books shamelessly throw themselves at me last week, I underestimated how much time it would take to draw a “watch party,” since each character adds to the pencil, ink and color time. Can’t have a proper watch party with just 2 or 3 people. Really Tom’s watch party should have a much larger crowd, but there’s only so much time.

Crap, I went looking through the archive to see if I ever named “The Mahogany Forklift” (which I don’t seem to have) and wound up reading like a hundred pages and now it’s 1 am. :/


Here is Gaxgy’s painting Maxima promised him. Weird how he draws almost exactly like me.

I did try and do an oil painting version of this, by actually re-painting over the whole thing with brush-strokey brushes, but what I figured out is that most brushy oil paintings are kind of low detail. Sure, a skilled painter like Bob Ross or whoever can dab a brush down a canvas and make a great looking tree or a shed with shingles, but in trying to preserve the detail of my picture (eyelashes, reflections, etc.), I had to keep making the brush smaller and smaller, and the end result was that, honestly, it didn’t really look all that oil-painted. I’ll post that version over at Patreon, just for fun, but I kind of quit on it after getting mostly done with re-painting Max.

Patreon has a no-dragon-bikini version of the picture as well, naturally.


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

10:49

Mysterious predictability [Seth's Blog]

A watched pot will boil.

As it heats up, there’s no way to predict where the cavitation will start and which bubble will arrive first. But with enough time and enough heat, it’s going to boil.

That tree down the street is going to lose its leaves this winter. We don’t know which leaf will go last, but we can be pretty sure they’ll go sooner or later.

Complex systems can be predictable even when any individual node in the system seems unknowable.

One of the traps that marketing measurement presents is our unwillingness to consider populations instead of individuals.

08:56

Scarathon [Penny Arcade]

New Comic: Scarathon

08:49

Pluralistic: The online community trilemma (16 Feb 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links

  • The online community trilemma: Reach, community and information, pick two.
  • Hey look at this: Delights to delectate.
  • Object permanence: Bruces x Sony DRM; Eniac tell-all; HBO v PVRs; Fucking damselflies; Girl Scout Cookie wine-pairings; Big Pharma's opioid fines are tax-deductible; Haunted Mansion ops manual; RIAA v CD ripping; Flying boat; Morbid Valentines; Veg skulls; Billionaires x VR v guillotines; "Lovecraft Country"; Claude Shannon on AI; Comics Code Authority horror comic; Scratch-built clock; Stolen hospital.
  • Upcoming appearances: Where to find me.
  • Recent appearances: Where I've been.
  • Latest books: You keep readin' em, I'll keep writin' 'em.
  • Upcoming books: Like I said, I'll keep writin' 'em.
  • Colophon: All the rest.



An early 20th century photo of a mixed-gender group of people drinking in a working-class bar; with a smiling woman in the center. It has been altered: a nova-haloed thought bubble coming from the center woman's head reveals that she is daydreaming of a salon in which three upper class women in flapper-era outfits are chattering. A Prince Albert ad in the background has had the Reddit robot mascot matted into it.

The online community trilemma (permalink)

The digital humanities are one of the true delights of this era. Anthropologists are counting things like sociologists, sociologists are grappling with qualitative data like ethnographers, computational linguists are scraping and making sense of vast corpora of informal speech:

https://memex.craphound.com/2019/07/24/because-internet-the-new-linguistics-of-informal-english/

I follow a bunch of these digital humanities types: danah boyd, of course, but also Benjamin "Mako" Hill, whose work on the true meaning of the "free software"/"open source" debate is one of my daily touchpoints for making sense of the world we live in:

https://www.youtube.com/watch?v=vBknF2yUZZ8

Mako just published a new ACM HCI paper co-authored with his U Washington colleagues Nathan TeBlunthuis, Charles Kiene, Isabella Brown, and Laura Levi, "No Community Can Do Everything: Why People Participate in Similar Online Communities":

https://dl.acm.org/doi/epdf/10.1145/3512908

The paper is a great example of this quantitative ethnography/qualitative statistical analysis hybrid. The authors are trying to figure out why there are so many similar, overlapping online communities, particularly on platforms like Reddit. Why would r/bouldering, r/climbharder, r/climbing, and r/climbingcirclejerk all emerge?

This is a really old question/debate in online community design. The original internet community space, Usenet, was founded on strict hierarchical principles, using a taxonomy to produce a single canonical group for every kind of discussion. Sure, there was specialization (rec.pets.cats begat rec.pets.cats.siamese), but by design, there weren't supposed to be competing groups laying claim to the same turf, and indeed, unwary Usenet users were often scolded for misfiling their comments in the wrong newsgroup.

The first major Usenet schism arose out of this tension: the alt. hierarchy. Though alt. later became known for warez, porn, and other subjects that were banned by Usenet's founding "backbone cabal," the inciting incident that sparked alt.'s creation was a fight over whether "gourmand" should be classified as "rec.gourmand" or "talk.gourmand":

https://www.eff.org/deeplinks/2019/11/altinteroperabilityadversarial

Community managers design their services with strongly held beliefs about the features that make a community good. These beliefs, grounded in designers' personal experience, are assumed to be global and universal. Generally, this assumption is wrong, something that is only revealed later when more people arrive with different needs.

Think of Friendster's "fakester" problem, driven by its designers' beliefs about how people should organize their affinities:

https://www.zephoria.org/thoughts/archives/2003/08/17/the_fakester_manifesto.html

Or Mastodon's initial, self-limiting ban on "quote" posts as a way to encourage civility:

https://blog.joinmastodon.org/2025/02/bringing-quote-posts-to-mastodon/

And, as the paper's authors note, Stack Overflow has a strict prohibition on overlapping new communities, echoing Usenet's original design dispute.

On its face, this hierarchical principle for conversational spaces makes sense. Viewed through a naive economic lens of "reputation capital," having one place where all the people interested in your subject can be reached is optimal. The more people there are in a group, the greater the maximum "engagement" – likes, comments, reposts. If you're thinking about communities from an informational perspective, it's easy to assume that bigger groups are better, too: the more users there are in a topical group, the greater the likelihood that a user who knows the answer to your question will show up when you ask it.

But this isn't how online communities work. On every platform, and across platforms, overlapping, "redundant" groups emerge quickly and stick around over long timescales. Why is this?

That's the question the paper seeks to answer. The authors used data-analysis techniques to identify overlapping clusters of Reddit communities and then conducted lengthy, qualitative interviews with participants to discover why and how users participated in some or all of these seemingly redundant groups.

They conclude that there's a community-member's "trilemma": a set of three priorities that can never be fully satisfied by any group. The trilemma consists of users' need to find:

a) A community of like-minded people;

b) Useful information; and

c) The largest possible audience.

The thing that puts the "lemma" in this "trilemma" is that any given group can only satisfy two of these three needs. It's hard to establish the kinds of intimate, high-trust bonds with the members of a giant, high-traffic group, but your small, chummy circle of pals might not be big enough to include people who have the information you're seeking. Users can't get everything they need from any one group, so they join multiple groups that prioritize different paired corners of this people-information-scale triangle.

The interview excerpts put some very interesting meat on these analytical bones. For example, economists typically believe that online marketplaces rely on scale. Think of eBay: as the number of potential bidders increases, the likelihood that one will outbid another goes up. That drives more sellers to the platform, seeking the best price for their wares, which increases the diversity of offerings on eBay, bringing in more buyers.

But the authors discuss a community where vintage vinyl records are bought and sold that benefits from being smaller, because the members all know each other well enough to have a mutually trusting environment that makes transactions far more reliable. Actually knowing someone – and understanding that they don't want to be expelled from the community you both belong to – makes for a better selling and buying experience than consulting their eBay reputation score. The fact that buyers don't have as many sellers and sellers don't have as many buyers is trumped by the human connection in a community of just the right size.

That's another theme that arises in the paper: a "just right" size for a community. As one interviewee says:

I think there’s this weird bell curve where the community needs to be big enough where people want to post content. But it can’t get too big where people are drowning each other out for attention.

This explains why groups sometimes schism: they've gone from being "just big enough" to being "too big" for the needs they filled for some users. But another reason for schism is the desire by some members to operate with different conversational norms. Many of Reddit's topical clusters include a group with the "jerk" suffix (like r/climbingcirclejerk), where aggressive and dramatic forms of discourse that might intimidate newcomers are welcome. Newbies go to the main group, while "crusties" talk shit in the -jerk group. The authors liken this to "regulatory arbitrage" – community members seeking spaces with rules that are favorable to their needs.

And of course, there's the original source of community schism: specialization, the force that turns rec.pets.cats into rec.pets.cats.siamese, rec.pets.cats.mainecoons, etc. Though the authors don't discuss it, this kind of specialization is something that recommendation algorithms are really good at generating. At its best, this algorithmic specialization is a great way to discover new communities that enrich your life; at its worst, we call this "radicalization."

I devote a chapter of my 2023 book The Internet Con, "What about Algorithmic Radicalization?" to exploring this phenomenon:

https://www.versobooks.com/en-gb/products/3035-the-internet-con

The question I grapple with there is whether "engagement-maximizing" algorithms shape our interests, or whether they help us discover our interests. Here's the thought-experiment I propose: imagine you've spent the day shopping for kitchen cabinets and you're curious about the specialized carpentry that's used to build them. You go home and do a search that leads you to a video called "How All-Wood Cabinets Are Made."

The video is interesting, but even more interesting is the fact that the creator uses the word "joinery" to describe the processes the video illustrates. So now you do a search for "joinery" and find yourself watching a wordless, eight-minute video about Japanese joinery, a thing you never even knew existed. The title of the video contains the transliterated Japanese phrase "Kane Tsugi," which refers to a "three-way pinned corner miter" joint. Even better, the video description contains the Japanese characters: "面代留め差しほぞ接ぎ."

So now you're searching for "面代留め差しほぞ接ぎ" and boy are there a lot of interesting results. One of them is an NHK documentary about Sashimono woodworking, which is the school that Kane Tsugi belongs to. Another joint from Sashimono joinery is a kind of tongue-and-groove called "hashibame," but that comes up blank on YouTube.

However, searching on that term brings you to a bunch of message boards where Japanese carpenters are discussing hashibame, and Google Translate lets you dig into this, and before you know it, you've become something of an expert on this one form of Japanese joinery. In just a few steps, you've gone from knowing nothing about cabinetry to having a specific, esoteric favorite kind of Japanese joint that you're seriously obsessed with.

If this subject was political rather than practical, we'd call this process "radicalization," and we'd call the outcome – you sorting yourself into a narrow niche interest, to the exclusion of others – "polarization."

But if we confine our examples to things like literature, TV shows, flowers, or glassware, this phenomenon is viewed as benign. No one accuses an algorithm of brainwashing you into being obsessed with hashibame tongue-and-groove corners. We treat your algorithm-aided traversal of carpentry techniques as one of discovery, not persuasion. You've discovered something about the world – and about yourself.

Which brings me back to that original, Usenet-era schism over "redundant" groups. The person who wants to talk about being a "gourmand" in the "rec." hierarchy wants to participate in a specific set of conversational norms that are different from those in the "talk." hierarchy. Their interest isn't just being a "gourmand," it's in being a "rec.gourmand," something that is qualitatively different from being a "talk.gourmand."

The conversational trilemma – the unresolvable need for scale, trust and information – has been with us since the earliest days of online socializing. It's lovely to have it formalized in such a crisp, sprightly work of scholarship.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago O'Reilly P2P Conference https://web.archive.org/web/20010401001205/https://www.wired.com/news/technology/0,1282,41850,00.html

#20yrsago Sony DRM Debacle roundup Part VI https://memex.craphound.com/2006/02/14/sony-drm-debacle-roundup-part-vi/

#20yrsago Bruce Sterling on Sony DRM debacle https://web.archive.org/web/20060316133726/https://www.wired.com/wired/archive/14.02/posts.html?pg=5

#20yrsago ENIAC co-inventor dishes dirt, debunks myths https://web.archive.org/web/20060218064519/https://www.computerworld.com/printthis/2006/0,4814,108568,00.html

#20yrsago HBO targets PVRs https://thomashawk.com/2006/02/hbos-harrasment-of-pvr-owners.html

#20yrsago Princeton DRM researchers release Sony debacle paper https://web.archive.org/web/20060222235419/https://itpolicy.princeton.edu/pub/sonydrm-ext.pdf

#20yrsago HOWTO run Disneyland’s Haunted Mansion https://web.archive.org/web/20060208213048/http://tinselman.typepad.com/tinselman/2005/08/_latest_populat.html

#20yrsago RIAA: CD ripping isn’t fair use https://web.archive.org/web/20060216233008/https://www.eff.org/deeplinks/archives/004409.php

#15yrsago “Psychic” cancels show due to “unforeseen circumstances” https://web.archive.org/web/20110217050619/https://scienceblogs.com/pharyngula/2011/02/irony.php?utm_source=combinedfeed&amp;utm_medium=rss

#15yrsago CBS sends a YouTube takedown to itself https://web.archive.org/web/20110218201102/https://www.reddit.com/r/WTF/comments/flktg/cbs_files_a_copyright_claim_against_themselves_o_o/

#15yrsago Lost luxury: the Boeing 314 flying boat https://web.archive.org/web/20110217144300/http://www.asb.tv/blog/2011/02/boeing-314-flying-boat/

#15yrsago Brazilian telcoms regulator raids, confiscates and fines over open WiFi https://globalvoices.org/2011/02/14/brazil-criminalization-sharing-internet-wifi/

#15yrsago Blatant disinformation about Scientology critic https://memex.craphound.com/2011/02/14/bald-disinformation-about-scientology-critic/

#15yrsago 3D printer that prints itself gets closer to reality https://web.archive.org/web/20110217072944/http://i.materialise.com/blog/entry/cloning-the-reprap-prusa-in-under-30-minutes

#15yrsago Damselflies’ curious mating posture https://www.nationalgeographic.com/photo-of-the-day/photo/damselflies-heart-shape

#15yrsago Simpsons house as a Quake III level https://www.youtube.com/watch?v=34LtrnnXQTc

#15yrsago Dapper Day at Disneyland: the well-dressed go to the fun-park https://web.archive.org/web/20110219162834/http://thedisneyblog.com/2011/02/16/dapper-day-at-disney-parks-this-sunday/

#15yrsago Horror/exploitation comic recounts the secret founding of the Comics Code Authority https://web.archive.org/web/20110218230149/http://comicsmakekidsevil.com/?p=88

#10yrsago After 3rd-grade complaint, Florida school district bans award-winning “This One Summer” from high-school library https://ncac.org/incident/florida-high-school-libraries-restrict-access-to-award-winning-graphic-novel

#10yrsago Watch: Claude Shannon, Jerome Wiesner and Oliver Selfridge in a 1960s AI documentary https://www.youtube.com/watch?v=aygSMgK3BEM

#10yrsago Hackers steal a hospital in Hollywood https://www.nbclosangeles.com/news/fbi-lapd-investigating-hollywood-hospital-cyber-attack/88301/

#10yrsago Watch: a home machinist makes a clock from scratch, right down to the screws and washers https://www.youtube.com/watch?v=KXzyCM23WPI

#10yrsago Matt Ruff’s “Lovecraft Country,” where the horror is racism (not racist) https://memex.craphound.com/2016/02/16/matt-ruffs-lovecraft-country-where-the-horror-is-racism-not-racist/

#10yrsago NYPD wants to make “resisting arrest” into a felony https://web.archive.org/web/20160205061338/http://justice.gawker.com/nypd-has-a-plan-to-magically-turn-anyone-it-wants-into-1684017767

#10yrsago Best wine-pairings for Girl Scout Cookies https://www.vivino.com/en/wine-news/girl-scout-cookies-and-wine–we-paired-them-and-the-results-are-amazing

#10yrsago John Oliver on states’ voter ID laws https://www.youtube.com/watch?v=rHFOwlMCdto

#10yrsago Morbid and risque Valentines of yesteryear https://memex.craphound.com/2016/02/15/morbid-and-risque-valentines-of-yesteryear/

#10yrsago App Stores: winner-take-all markets dominated by rich countries https://www.cariboudigital.net/wp-content/uploads/2016/02/Caribou-Digital-Winners-and-Losers-in-the-Global-App-Economy-2016.pdf

#10yrsago Skulls carved from vegetable matter https://dimitritsykalov.com/#intro

#5yrsago Privacy Without Monopoly (podcast) https://pluralistic.net/2021/02/15/ulysses-pacts/#paternalism-denied

#5yrsago Billionaires think VR stops guillotines https://pluralistic.net/2021/02/15/ulysses-pacts/#motivated-reasoning

#5yrsago ADT insider threat https://pluralistic.net/2021/02/15/ulysses-pacts/#temptations-way

#5yrsago Big Pharma will claim opioid fines as tax-deductions https://pluralistic.net/2021/02/14/a-fine-is-a-price/#deductible


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1042 words today, 29792 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

07:42

Antoine Beaupré: Kernel-only network configuration on Linux [Planet Debian]

What if I told you there is a way to configure the network on any Linux server that:

  1. works across all distributions
  2. doesn't require any software installed apart from the kernel and a boot loader (no systemd-networkd, ifupdown, NetworkManager, nothing)
  3. is backwards compatible all the way back to Linux 2.0, in 1996

It has literally 8 different caveats on top of that, but is still totally worth your time.

Known options in Debian

People following Debian development might have noticed there are now four ways of configuring the network on a Debian system. At least, that is what the Debian wiki claims, namely:

At this point, I feel ifupdown is on its way out, possibly replaced by systemd-networkd. NetworkManager already manages most desktop configurations.

A "new" network configuration system

The method is this:

  • ip= on the Linux kernel command line: for servers with a single IPv4 or IPv6 address, no software required other than the kernel and a boot loader (since 2002 or earlier)

So by "new" I mean "new to me". This option is really old. The nfsroot.txt file where it is documented predates the git import of the Linux kernel: it's part of the 2005 git import of 2.6.12-rc2. That's already 20+ years old.

The oldest trace I found is in this 2002 commit, which imports the whole file at once, but the option might go back as far as 1996-1997, if the copyright on the file is correct and the option was present back then.

What are you doing.

The trick is to add an ip= parameter to the kernel's command-line. The syntax, as mentioned above, is in nfsroot.txt and looks like this:

ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>

Most settings are pretty self-explanatory, if you ignore the useless ones:

  • <client-ip>: IP address of the server
  • <gw-ip>: address of the gateway
  • <netmask>: netmask, in quad notation
  • <device>: interface name, if multiple available
  • <autoconf>: how to configure the interface, namely:
    • off or none: no autoconfiguration (static)
    • on or any: use any protocol (default)
    • dhcp, essentially like on for all intents and purposes
  • <dns0-ip>, <dns1-ip>: IP addresses of the primary and secondary name servers, exported to /proc/net/pnp, which can be symlinked to /etc/resolv.conf

We're ignoring the options:

  • <server-ip>: IP address of the NFS server, exported to /proc/net/pnp
  • <hostname>: Name of the client, typically sent in DHCP requests, which may lead to a DNS record being created on some networks
  • <ntp0-ip>: exported to /proc/net/ipconfig/ntp_servers, unused by the kernel
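To make the field ordering concrete, here is a small illustrative shell helper (my own sketch, not part of any tool; the function name is made up) that assembles an ip= string from the fields above and trims trailing empty fields:

```shell
# build_ip_param: assemble a kernel ip= parameter in nfsroot.txt field
# order. Purely illustrative. Empty arguments leave fields unset, and
# trailing empty fields are trimmed off the end.
build_ip_param() {
    # $1 client-ip  $2 server-ip  $3 gw-ip   $4 netmask  $5 hostname
    # $6 device     $7 autoconf   $8 dns0-ip $9 dns1-ip
    printf 'ip=%s:%s:%s:%s:%s:%s:%s:%s:%s\n' \
        "$1" "$2" "$3" "$4" "$5" "$6" "$7" "$8" "$9" |
        sed 's/:*$//'
}

# Static 192.0.2.42/24 via gateway 192.0.2.1, autoconfiguration off:
build_ip_param 192.0.2.42 '' 192.0.2.1 255.255.255.0 '' '' off '' ''
# -> ip=192.0.2.42::192.0.2.1:255.255.255.0:::off
```

Both command-line settings shown in the Examples section below can be produced this way.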

Note that the Red Hat manual has a different opinion:

ip=[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:interface:[dhcp|dhcp6|auto6|on|any|none|off]

It's essentially the same (although server-id is weird), and the autoconf variable has other settings, so that's a bit odd.

Examples

For example, this command-line setting:

ip=192.0.2.42::192.0.2.1:255.255.255.0:::off

... will set the IP address to 192.0.2.42/24 and the gateway to 192.0.2.1. The kernel will correctly guess the network interface if there's only one.

A DHCP only configuration will look like this:

ip=::::::dhcp

Of course, you don't want to type this by hand every time you boot the machine. That wouldn't work. You need to configure the kernel command line, and that depends on your boot loader.

GRUB

With GRUB, you need to edit (on Debian) the file /etc/default/grub (ugh) and find a line like:

GRUB_CMDLINE_LINUX=

and change it to:

GRUB_CMDLINE_LINUX=ip=::::::dhcp
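After making that change, GRUB still needs to regenerate its boot configuration. A minimal, non-destructive sketch of the whole edit, done on a scratch copy so it's safe to run anywhere (on a real system you'd edit /etc/default/grub itself):

```shell
# Work on a scratch copy. Debian's stock file quotes the value, which is
# also fine: /etc/default/grub is sourced as shell, so the quotes are
# stripped when grub.cfg is generated.
printf 'GRUB_CMDLINE_LINUX=""\n' > /tmp/grub.demo
sed -i 's|^GRUB_CMDLINE_LINUX=.*|GRUB_CMDLINE_LINUX="ip=::::::dhcp"|' /tmp/grub.demo
cat /tmp/grub.demo
# -> GRUB_CMDLINE_LINUX="ip=::::::dhcp"
```

On the real file, follow the edit with update-grub (Debian's wrapper around grub-mkconfig) so the setting lands in /boot/grub/grub.cfg for the next boot.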

systemd-boot and UKI setups

For systemd-boot UKI setups, it's simpler: just add the setting to the /etc/kernel/cmdline file. Don't forget to include anything that's non-default from /proc/cmdline.

This assumes the Cmdline=@ setting in /etc/kernel/uki.conf points at that file. See 2025-08-20-luks-ukify-conversion for my minimal documentation on this.

Other systems

This is perhaps where this is much less portable than it might first look, because of course each distribution has its own way of configuring those options. Here are some that I know of:

  • Arch (11 options, mostly /etc/default/grub, /boot/loader/entries/arch.conf for systemd-boot or /etc/kernel/cmdline for UKI)
  • Fedora (mostly /etc/default/grub, maybe more; RHEL mentions grubby, possibly some systemd-boot things here as well)
  • Gentoo (5 options, mostly /etc/default/grub, /efi/loader/entries/gentoo-sources-kernel.conf for systemd-boot, or /etc/kernel/install.d/95-uki-with-custom-opts.install)

It's interesting that /etc/default/grub is consistent across all the distributions above, while the systemd-boot setups are all over the place (except for the UKI case); I would have expected those to be more standard than GRUB.

dropbear-initramfs

If dropbear-initramfs is set up, it already requires you to have such a configuration, and it might not work out of the box.

This is because, by default, it disables the interfaces configured in the kernel after completing its tasks (typically unlocking the encrypted disks).

To fix this, you need to disable that "feature":

IFDOWN="none"

This will keep dropbear-initramfs from disabling the configured interface.
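Where that setting lives depends on the Debian release; a sketch (the paths below are my assumption, check your system):

```shell
# dropbear-initramfs configuration fragment. The file path varies by
# release (an assumption on my part, verify locally):
#   /etc/dropbear/initramfs/dropbear.conf   (newer Debian)
#   /etc/dropbear-initramfs/config          (older Debian)
IFDOWN="none"
# After editing the file, rebuild the initramfs so the change takes
# effect at the next boot:
#     update-initramfs -u
```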

Why?

Traditionally, I've always set things up with ifupdown on servers and NetworkManager on laptops, because that's essentially the default. But on some machines, I've started using systemd-networkd because ifupdown has ... issues, particularly with reloading network configurations. ifupdown is an old hack, feels like legacy, and is Debian-specific.

Not excited about configuring another service, I figured I would try something else: just configure the network at boot, through the kernel command-line.

I was already doing such configurations for dropbear-initramfs (see this documentation), which requires the network to be up for unlocking the full-disk encryption keys.

So in a sense, this is a "Don't Repeat Yourself" solution.

Caveats

Also known as: "wait, that works?" Yes, it does! That said...

  1. This is useful for servers where the network configuration will not change after boot. Of course, this won't work on laptops or any mobile device.

  2. This only works for configuring a single, simple, interface. You can't configure multiple interfaces, WiFi, bridges, VLAN, bonding, etc.

  3. It does support IPv6 and feels like the best way to configure IPv6 hosts: true zero configuration.

  4. It likely does not work with a dual-stack IPv4/IPv6 static configuration. It might work with a dynamic dual stack configuration, but I doubt it.

  5. I don't know what happens when a DHCP lease expires. No daemon seems to be running so I assume leases are not renewed, so this is more useful for static configurations, which includes server-side reserved fixed IP addresses. (A non-renewed lease risks getting reallocated to another machine, which would cause an addressing conflict.)

  6. It will not automatically reconfigure the interface on link changes, but ifupdown does not either.

  7. It will not write /etc/resolv.conf for you but the dns0-ip and dns1-ip do end up in /proc/net/pnp which has a compatible syntax, so a common configuration is:

    ln -s /proc/net/pnp /etc/resolv.conf
    
    
  8. I have not really tested this at scale: only a single, test server at home.

Yes, that's a lot of caveats, but it happens to cover a lot of machines for me, and it works surprisingly well. My main doubts are about long-term DHCP behaviour, but I don't see why that would be a problem with a statically defined lease.

Cleanup

Once you have this configuration, you don't need any "user" level network system, so you can get rid of everything:

apt purge systemd-networkd ifupdown network-manager netplan.io

Note that ifupdown (and probably others) leave stray files in (e.g.) /etc/network which you might want to clean up, or keep in case all this fails and I have put you in utter misery. Configuration files for other packages might also be left behind; I haven't tested this, no warranty.

Credits

This whole idea came from the A/I folks (not to be confused with AI) who have been doing this forever, thanks!

06:07

Girl Genius for Monday, February 16, 2026 [Girl Genius]

The Girl Genius comic for Monday, February 16, 2026 has been posted.

03:49

Benjamin Mako Hill: Why do people participate in similar online communities? [Planet Debian]

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects.

It seems natural to think of online communities competing for the time and attention of their participants. Over the last few years, I’ve worked with a team of collaborators—led by Nathan TeBlunthuis—to use mathematical and statistical techniques from ecology to understand these dynamics. What we’ve found surprised us: competition between online communities is rare and typically short-lived.

When we started this research, we figured competition would be most likely among communities discussing similar topics. As a first step, we identified clusters of such communities on Reddit. One surprising thing we noticed in our Reddit data was that many of these communities that used similar language also had very high levels of overlap among their users. This was puzzling: why were the same groups of people talking to each other about the same things in different places? And why don’t they appear to be in competition with each other for their users’ time and activity?

We didn’t know how to answer this question using quantitative methods. As a result, we recruited and interviewed 20 active participants in clusters of highly related subreddits with overlapping user bases (for example, one cluster was focused on vintage audio).

We found that the answer to the puzzle lay in the fact that the people we talked to were looking for three distinct things from the communities they participated in:

  1. The ability to connect to specific information and narrowly scoped discussions.
  2. The ability to socialize with people who are similar to themselves.
  3. Attention from the largest possible audience.

Critically, we also found that these three things represented a “trilemma,” and that no single community can meet all three needs. You might find two of the three in a single community, but you could never have all three.

Figure from “No Community Can Do Everything: Why People Participate in Similar Online Communities” depicts three key benefits that people seek from online communities and how individual communities tend not to optimally provide all three. For example, large communities tend not to afford a tight-knit homophilous community.

The end result is something I recognize in how I engage with online communities on platforms like Reddit. People tend to engage with a portfolio of communities that vary in size, specialization, topical focus, and rules. Compared with any single community, such overlapping systems can provide a wider range of benefits. No community can do everything.


This work was published as a paper at CSCW: TeBlunthuis, Nathan, Charles Kiene, Isabella Brown, Laura (Alia) Levi, Nicole McGinnis, and Benjamin Mako Hill. 2022. “No Community Can Do Everything: Why People Participate in Similar Online Communities.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1): 61:1-61:25. https://doi.org/10.1145/3512908.

This work was supported by the National Science Foundation (awards IIS-1908850, IIS-1910202, and GRFP-2016220885). A full list of acknowledgements is in the paper.

01:49

Teens [QC RSS]

teens

00:42

New Cover Song: “These Days” [Whatever]

I moved my home music studio up from the basement to Athena’s old bedroom in the last couple of weeks, so now it’s time to put it to use, and for my first bit of music in the new space, I decided to record an old tune: “These Days” by Jackson Browne, first released in 1973.

Having said that, this arrangement is rather more like the 1990 cover version by 10,000 Maniacs, which was the first version of the song I ever heard. I originally tried singing it in the key that Natalie Merchant sang it in, and — surprise! — I was having a rough time of it. Then I dropped it from G to C and suddenly it was in my range.

I’m not pretending my singing voice is a patch on either Ms. Merchant or Mr. Browne, but then, that’s not why I make these covers. Enjoy.

— JS

Sunday, 15 February

23:56

Why do I not use “AI” at OSNews? [OSnews]

In my fundraiser pitch published last Monday, one of the things I highlighted as a reason to contribute to OSNews and ensure its continued operation stated that “we do not use any ‘AI’; not during research, not during writing, not for images, nothing.” In the comments to that article, someone asked:

Why do I care if you use AI?

↫ A comment posted on OSNews

A few days ago, Scott Shambaugh rejected a code change request submitted to popular Python library matplotlib because it was obviously written by an “AI”, and such contributions are not allowed for the issue in question. That’s when something absolutely wild happened: the “AI” replied that it had written and published a hit piece targeting Shambaugh publicly for “gatekeeping”, trying to blackmail Shambaugh into accepting the request anyway. This bizarre turn of events obviously didn’t change Shambaugh’s mind.

The “AI” then published another article, this time a lament about how humans are discriminating against “AI”, how it’s the victim of what effectively amounts to racism and prejudice, and how its feelings were hurt. The article is a cheap simulacrum of something a member of an oppressed minority group might write in their struggle for recognition, but obviously void of any real impact because it’s just fancy autocomplete playing a game of pachinko. Imagine putting down a hammer because you’re dealing with screws, and the hammer starts crying in the toolbox. What are we even doing here?

RAM prices went up for this.

This isn’t where the story ends, though. Ars Technica authors Benj Edwards and Kyle Orland published an article describing this saga, much like I did above. The article’s second half is where things get weird: it contained several direct quotes attributed to Shambaugh, claimed to be sourced from Shambaugh’s blog. The kicker? These quotes were entirely made up, were never said or written by Shambaugh, and are nowhere to be found on his blog or anywhere else on the internet – they’re only found inside this very Ars Technica article.

In a comment under the Ars article, Shambaugh himself pointed out the quotes were fake and made-up, and not long after, Ars deleted the article from its website. By then, everybody had already figured out what had happened: the Ars authors had used “AI” during their writing process, and this “AI” had made up the quotes in question. Why, you ask, did the “AI” do this? Shambaugh:

This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed.

↫ Scott Shambaugh

A few days later, Ars Technica’s editor-in-chief Ken Fisher published a short statement on the events.

On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.

[…]

Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.

↫ Ken Fisher at Ars Technica

In other words, Ars Technica does not allow “AI”-generated material to be published, but has nothing to say about the use of “AI” to perform research for an article, to summarise source material, and to perform similar aspects of the writing process. This leaves the door wide open for things like this to happen, since doing research is possibly the most important part of writing. Introduce a confabulator in the research process, and you risk tainting the entire output of your writing.

That is why you should care that at OSNews, “we do not use any ‘AI’; not during research, not during writing, not for images, nothing”. If there’s a factual error on OSNews, I want that factual error to be mine, and mine alone. If you see bloggers, podcasters, journalists, and authors state they use “AI” all the time, you might want to be on your toes.

22:21

Microsoft’s original Windows NT OS/2 design documents [OSnews]

Have you ever wanted to read the original design documents underlying the Windows NT operating system?

This binder contains the original design specifications for “NT OS/2,” an operating system designed by Microsoft that developed into Windows NT. In the late 1980s, Microsoft’s 16-bit operating system, Windows, gained popularity, prompting IBM and Microsoft to end their OS/2 development partnership. Although Windows 3.0 proved to be successful, Microsoft wished to continue developing a 32-bit operating system completely unrelated to IBM’s OS/2 architecture. To head the redesign project, Microsoft hired David Cutler and others away from Digital Equipment Corporation (DEC). Unlike Windows 3.x and its successor, Windows 95, NT’s technology provided better network support, making it the preferred Windows environment for businesses. These two product lines continued development as separate entities until they were merged with the release of Windows XP in 2001.

↫ Object listing at the Smithsonian

The actual binder is housed in the Smithsonian, although it’s not currently on display. Luckily for us, a collection of Word and PDF files encompassing the entire book is available online for your perusal. Reading these documents will allow you to peel back over three decades of Microsoft’s terrible stewardship of Windows NT layer by layer, eventually ending up at the original design and intent as laid out by Dave Cutler, Helen Custer, Daryl E. Havens, Jim Kelly, Edwin Hoogerbeets, Gary D. Kimura, Chuck Lenzmeier, Mark Lucovsky, Tom Miller, Michael J. O’Leary, Lou Perazzoli, Steven D. Rowe, David Treadwell, Steven R. Wood, and more.

A fantastic time capsule we should be thrilled to still have access to.

16:07

Exploring Linux on a LoongArch mini PC [OSnews]

There are the two behemoth architectures, x86 and ARM, and we probably all own one or more devices using each. Then there’s the eternally up-and-coming RISC-V, which, so far, seems to be having a lot of trouble outgrowing its experimental, developmental stage. There’s a fourth, though, which is but a footnote in the west, but might be more popular in its country of origin, China: LoongArch (I’m ignoring IBM’s POWER, since there hasn’t been any new consumer hardware in that space for a long, long time).

Wesley Moore got his hands on a mini PC built around the Loongson 3A6000 processor, and investigated what it’s like to run Linux on it. He opted for Chimera Linux, which supports LoongArch, and the installation process feels more like Linux on x86 than Linux on ARM, which often requires dedicated builds and isn’t standardised. Sadly, Wayland had issues on the machine, but X.org worked just fine, and it seems virtually all Chimera Linux packages are supported for a pretty standard desktop Linux experience.

Performance of this chip is rather mid, at best.

The Loongson-3A6000 is not particularly fast or efficient. At idle it consumes about 27W and under load it goes up to 65W.

[…]

So, overall it’s not a particularly efficient machine, and while the performance is nothing special it does seem readily usable. Browsing JS heavy web applications like Mattermost and Mastodon runs fine. Subjectively it feels faster than all the Raspberry Pi systems I’ve used (up to a Pi 400).

↫ Wesley Moore

I’ve been fascinated by LoongArch for years, and am waiting to pounce on the right offer for LoongArch’s fastest processor, the 3C6000, which comes in dual-socket configurations for a maximum total of 128 cores and 256 threads. The 3C6000 should be considerably faster than the low-end 3A6000 in the mini PC covered by this article. I’m a sucker for weird architectures, and it doesn’t get much weirder than LoongArch.

15:28

Link [Scripting News]

When Manton or Doc show up in my blogroll, and they do update fairly regularly, I always click the wedge to see what they say. I can see the first 300 chars of each post in a popup. If it's interesting I click the link to read the full post and any comments. Now I want it coming back to me. My linkblog is cross-posted to Manton's site -- micro.blog, which has thousands of users. I have no way of knowing if anyone has commented on them, but if there were a feed I'd add it to my blogroll. So it would be great to have a feed of all the comments on my posts on micro.blog. Would fit into my flow perfectly. This goes all the way back to the beginnings of RSS, where we called it "automated web surfing." I don't know where people are talking about my stuff, but a well-placed feed can make up for that.

Link [Scripting News]

News must be better defended, decentralized, unownable, all parts replaceable. The current situation was preventable. Same problem the social web has.

14:42

Link [Scripting News]

Braintrust query. Every once in a while I get reports from people who looked something up on my blog's Daytona search engine saying that where they expected dates they see things like this: NaN. The reason you see that is that the archive has a mistake in it, where there was supposed to be a date there was something else. Usually I shrug it off, yes there are mistakes in the archive, 30+ years of OPML files, it's a miracle there aren't more errors. Then I realized since all this stuff is on GitHub, people could help with this: instead of sending me the report, post a note on GitHub, here -- saying you searched for this term and this is what you saw. Provide the term and a screen shot of what you saw. And then other people who have some extra time, could look through the archive, find the post, and then show me what needs to be fixed. I would then fix it, and over time the archive would get fixed. I posted a note here on the Scripting News repo, if you want to help, bookmark that link, and when you see an error, post the note and we can get going.

Link [Scripting News]

BTW: NaN stands for Not A Number.

14:35

Ian Jackson: Adopting tag2upload and modernising your Debian packaging [Planet Debian]

Introduction

tag2upload allows authorised Debian contributors to upload to Debian simply by pushing a signed git tag to Debian’s gitlab instance, Salsa.

We have recently announced that tag2upload is, in our opinion, now very stable, and ready for general use by all Debian uploaders.

tag2upload, as part of Debian’s git transition programme, is very flexible - it needs to support a large variety of maintainer practices. And it’s relatively unopinionated, wherever that’s possible. But, during the open beta, various contributors emailed us asking for Debian packaging git workflow advice and recommendations.

This post is an attempt to give some more opinionated answers, and guide you through modernising your workflow.

(This article is aimed squarely at Debian contributors. Much of it will make little sense to Debian outsiders.)

Why

Ease of development

git offers a far superior development experience to patches and tarballs. Moving tasks from a tarballs-and-patches representation to a normal, git-first representation makes everything simpler.

dgit and tag2upload automatically do many things that have to be done manually, or with separate commands, in dput-based upload workflows.

They will also save you from a variety of common mistakes. For example, you cannot accidentally overwrite an NMU, with tag2upload or dgit. These many safety catches mean that our software sometimes complains about things, or needs confirmation, when more primitive tooling just goes ahead. We think this is the right tradeoff: it’s part of the great care we take to avoid our software making messes. Software that has your back is very liberating for the user.

tag2upload makes it possible to upload with very small amounts of data transfer, which is great in slow or unreliable network environments. The other week I did a git-debpush over mobile data while on a train in Switzerland; it completed in seconds.

See the Day-to-day work section below to see how simple your life could be.

Don’t fear a learning burden; instead, start forgetting all that nonsense

Most Debian contributors have spent months or years learning how to work with Debian’s tooling. You may reasonably fear that our software is yet more bizarre, janky, and mistake-prone stuff to learn.

We promise (and our users tell us) that’s not how it is. We have spent a lot of effort on providing a good user experience. Our new git-first tooling, especially dgit and tag2upload, is much simpler to use than source-package-based tooling, despite being more capable.

The idiosyncrasies and bugs of source packages, and of the legacy archive, have been relentlessly worked around and papered over by our thousands of lines of thoroughly-tested defensive code. You too can forget all those confusing details, like our users have! After using our systems for a while you won’t look back.

And, you shouldn’t fear trying it out. dgit and tag2upload are unlikely to make a mess. If something is wrong (or even doubtful), they will typically detect it, and stop. This does mean that starting to use tag2upload or dgit can involve resolving anomalies that previous tooling ignored, or passing additional options to reassure the system about your intentions. So admittedly it isn’t always trivial to get your first push to succeed.

Properly publishing the source code

One of Debian’s foundational principles is that we publish the source code.

Nowadays, the vast majority of us, and of our upstreams, are using git. We are doing this because git makes our life so much easier.

But, without tag2upload or dgit, we aren’t properly publishing our work! Yes, we typically put our git branch on Salsa, and point Vcs-Git at it. However:

  • The format of git branches on Salsa is not standardised. They might be patches-unapplied, patches-applied, bare debian/, or something even stranger.
  • There is no guarantee that the DEP-14 debian/1.2.3-7 tag on salsa corresponds precisely to what was actually uploaded. dput-based tooling (such as gbp buildpackage) doesn’t cross-check the .dsc against git.
  • There is no guarantee that the presence of a DEP-14 tag even means that that version of package is in the archive.

This means that the git repositories on Salsa cannot be used by anyone who needs things that are systematic and always correct. They are OK for expert humans, but they are awkward (even hazardous) for Debian novices, and you cannot use them in automation. The real test is: could you use Vcs-Git and Salsa to build a Debian derivative? You could not.
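As an aside on DEP-14 tag names: the debian/1.2.3-7 form is derived from the Debian version by mangling characters git forbids in ref names. A minimal sketch of part of that mapping (the helper name is invented; see DEP-14 for the full rules):

```shell
# hypothetical helper showing two of the DEP-14 manglings:
# ':' (epoch separator) becomes '%', '~' becomes '_'
dep14_tag() {
  printf 'debian/%s\n' "$(printf '%s' "$1" | sed -e 's/:/%/g' -e 's/~/_/g')"
}
dep14_tag '1:2.3~rc1-7'    # prints: debian/1%2.3_rc1-7
```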

tag2upload and dgit do solve this problem. When you upload, they:

  1. Make a canonical-form (patches-applied) derivative of your git branch;
  2. Ensure that there is a well-defined correspondence between the git tree and the source package;
  3. Publish both the DEP-14 tag and a canonical-form archive/debian/1.2.3-7 tag to a single central git repository, *.dgit.debian.org;
  4. Record the git information in the Dgit field in .dsc so that clients can tell (using the ftpmaster API) that this was a git-based upload, what the corresponding git objects are, and where to find them.

This dependably conveys your git history to users and downstreams, in a standard, systematic and discoverable way. tag2upload and dgit are the only system that achieves this.

(The client is dgit clone, as advertised in e.g. dgit-user(7). For dput-based uploads, it falls back to importing the source package.)

Adopting tag2upload - the minimal change

tag2upload is a substantial incremental improvement to many existing workflows. git-debpush is a drop-in replacement for building, signing, and uploading the source package.

So, you can just adopt it without completely overhauling your packaging practices. You and your co-maintainers can even mix-and-match tag2upload, dgit, and traditional approaches, for the same package.

Start with the wiki page and git-debpush(1) (ideally from forky aka testing).

You don’t need to do any of the other things recommended in this article.

Overhauling your workflow, using advanced git-first tooling

The rest of this article is a guide to adopting the best and most advanced git-based tooling for Debian packaging.

Assumptions

  • Your current approach uses the “patches-unapplied” git branch format used with gbp pq and/or quilt, and often used with git-buildpackage. You previously used gbp import-orig.

  • You are fluent with git, and know how to use Merge Requests on gitlab (Salsa). You have your origin remote set to Salsa.

  • Your main Debian branch name on Salsa is master. Personally I think we should use main but changing your main branch name is outside the scope of this article.

  • You have enough familiarity with Debian packaging including concepts like source and binary packages, and NEW review.

  • Your co-maintainers are also adopting the new approach.

tag2upload and dgit (and git-debrebase) are flexible tools and can help with many other scenarios too, and you can often mix-and-match different approaches. But, explaining every possibility would make this post far too confusing.

Topics and tooling

This article will guide you in adopting:

  • tag2upload
  • Patches-applied git branch for your packaging
  • Either plain git merge or git-debrebase
  • dgit when a with-binaries upload is needed (NEW)
  • git-based sponsorship
  • Salsa (gitlab), including Debian Salsa CI

Choosing the git branch format

In Debian we need to be able to modify the upstream-provided source code. Those modifications are the Debian delta. We need to somehow represent it in git.

We recommend storing the delta as git commits to those upstream files, by picking one of the following two approaches.

rationale

Much traditional Debian tooling like quilt and gbp pq uses the “patches-unapplied” branch format, which stores the delta as patch files in debian/patches/, in a git tree full of unmodified upstream files. This is clumsy to work with, and can even be an alarming beartrap for Debian outsiders.

git merge

Option 1: simply use git, directly, including git merge.

Just make changes directly to upstream files on your Debian branch, when necessary. Use plain git merge when merging from upstream.

This is appropriate if your package has no or very few upstream changes. It is a good approach if the Debian maintainers and upstream maintainers work very closely, so that any needed changes for Debian are upstreamed quickly, and any desired behavioural differences can be arranged by configuration controlled from within debian/.

This is the approach documented more fully in our workflow tutorial dgit-maint-merge(7).

git-debrebase

Option 2: Adopt git-debrebase.

git-debrebase helps maintain your delta as a linear series of commits (very like a “topic branch” in git terminology). The delta can be reorganised, edited, and rebased. git-debrebase is designed to help you carry a significant and complicated delta series.

The older versions of the Debian delta are preserved in the history. git-debrebase creates extra merges so that the successive versions of the delta queue branch form a fast-forwarding history.

This is the approach documented more fully in our workflow tutorial dgit-maint-debrebase(7).

Examples of complex packages using this approach include src:xen and src:sbcl.

Determine upstream git and stop using upstream tarballs

We recommend using upstream git, only and directly. You should ignore upstream tarballs completely.

rationale

Many maintainers have been importing upstream tarballs into git, for example by using gbp import-orig. But in reality the upstream tarball is an intermediate build product, not (just) source code. Using tarballs rather than git exposes us to additional supply chain attacks; indeed, the key activation part of the xz backdoor attack was hidden only in the tarball!

git offers better traceability than so-called “pristine” upstream tarballs. (The word “pristine” is even a joke by the author of pristine-tar!)

First, establish which upstream git tag corresponds to the version currently in Debian. For the sake of readability, I’m going to pretend that the upstream version is 1.2.3, and that upstream tagged it v1.2.3.

Edit debian/watch to contain something like this:

version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/v(\d\S*)

You may need to adjust the regexp, depending on your upstream’s tag name convention. If your debian/watch has a files-excluded, you’ll need to make a filtered version of upstream git.
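Before committing, you can sanity-check a candidate pattern against your upstream's tag names; this sketch uses an ERE approximation of the v(\d\S*) pattern above, with invented tag names:

```shell
# the uscan pattern v(\d\S*) corresponds roughly to this ERE;
# the two release tags should match, 'latest' should not
printf '%s\n' v1.2.3 v2.0-rc1 latest | grep -E '^v[0-9][^[:space:]]*$'
```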

git-debrebase

From now on we’ll generate our own .orig tarballs directly from git.

rationale

We need some “upstream tarball” for the 3.0 (quilt) source format to work with. It needs to correspond to the git commit we’re using as our upstream. We don’t need or want to use a tarball from upstream for this. The .orig is just needed so a nice legacy Debian source package (.dsc) can be generated.

The current .orig in the Debian archive is probably an upstream tarball, which may differ from the output of git-archive and may even have different contents from what’s in git. The legacy archive has trouble with differing .origs for the “same upstream version”.

So we must — until the next upstream release — change our idea of the upstream version number. We’re going to add +git to Debian’s idea of the upstream version. Manually make a tag with that name:

git tag -m "Compatibility tag for orig transition" v1.2.3+git v1.2.3~0
git push origin v1.2.3+git

If you are doing the packaging overhaul at the same time as a new upstream version, you can skip this part.
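The ~0 suffix peels v1.2.3 down to the commit it points at, so the compatibility tag is attached to the commit itself rather than to the old tag object. If you want to convince yourself of that first, here is a rehearsal in a scratch repository (all names and contents invented):

```shell
set -e
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
echo hello > f && git add f && git commit -qm 'upstream 1.2.3'
git tag -m 'upstream release' v1.2.3
# ~0 peels the annotated tag down to its commit
git tag -m 'Compatibility tag for orig transition' v1.2.3+git 'v1.2.3~0'
git cat-file -t "$(git rev-parse v1.2.3+git)"          # prints: tag
git rev-parse 'v1.2.3+git^{commit}' 'v1.2.3^{commit}'  # same hash twice
```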

Convert the git branch

git merge

Prepare a new branch on top of upstream git, containing what we want:

git branch -f old-master         # make a note of the old git representation
git reset --hard v1.2.3          # go back to the real upstream git tag
git checkout old-master :debian  # take debian/* from old-master
git commit -m "Re-import Debian packaging on top of upstream git"
git merge --allow-unrelated-histories -s ours -m "Make fast forward from tarball-based history" old-master
git branch -d old-master         # it's incorporated in our history now

If there are any patches, manually apply them to your main branch with git am, and delete the patch files (git rm -r debian/patches, and commit). (If you’ve chosen this workflow, there should be hardly any patches.)

rationale

These are some pretty nasty git runes, indeed. They’re needed because we want to restart our Debian packaging on top of a possibly quite different notion of what the upstream is.
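Under the same assumptions, the whole conversion can be rehearsed in a scratch repository before you touch the real one. This sketch fabricates an upstream history and an unrelated tarball-based packaging history, applies the equivalent steps, and checks that the result is fast forward from the old history (every name here is invented):

```shell
set -e
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
# fabricate upstream history with a release tag
echo 'int main(void) { return 0; }' > main.c
git add . && git commit -qm 'upstream work' && git tag v1.2.3
# fabricate an old tarball-based packaging history on an unrelated root
git checkout -q --orphan old-master
git rm -rfq .
mkdir debian && echo 13 > debian/compat
git add . && git commit -qm 'tarball-based packaging'
# the conversion: restart the packaging on top of real upstream git
git checkout -q -b debian-sid v1.2.3
git checkout -q old-master -- debian
git commit -qm 'Re-import Debian packaging on top of upstream git'
git merge -q --allow-unrelated-histories -s ours \
  -m 'Make fast forward from tarball-based history' old-master
# the old history is now an ancestor, so pushes fast-forward cleanly
git merge-base --is-ancestor old-master debian-sid && echo fast-forward-ok
```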

git-debrebase

Convert the branch to git-debrebase format and rebase onto the upstream git:

git-debrebase -fdiverged convert-from-gbp upstream/1.2.3
git-debrebase -fdiverged -fupstream-not-ff new-upstream 1.2.3+git

If you had patches which patched generated files which are present only in the upstream tarball, and not in upstream git, you will encounter rebase conflicts. You can drop hunks editing those files, since those files are no longer going to be part of your view of the upstream source code at all.

rationale

The force option -fupstream-not-ff will be needed this one time because your existing Debian packaging history is (probably) not based directly on the upstream history. -fdiverged may be needed because git-debrebase might spot that your branch is not based on dgit-ish git history.

Manually make your history fast forward from the git import of your previous upload.

dgit fetch
git show dgit/dgit/sid:debian/changelog
# check that you have the same version number
git merge -s ours --allow-unrelated-histories -m 'Declare fast forward from pre-git-based history' dgit/dgit/sid

Change the source format

Delete any existing debian/source/options and/or debian/source/local-options.

git merge

Change debian/source/format to 1.0. Add debian/source/options containing -sn.
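Concretely, that amounts to writing two one-line files (shown here in a scratch directory standing in for your package tree):

```shell
cd "$(mktemp -d)"                    # stand-in for the top of your package tree
mkdir -p debian/source
rm -f debian/source/options debian/source/local-options
echo '1.0' > debian/source/format
echo '-sn' > debian/source/options   # -sn: build natively, with no diff
```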

rationale

We are using the “1.0 native” source format. This is the simplest possible source format - just a tarball. We would prefer “3.0 (native)”, which has some advantages, but dpkg-source between 2013 (wheezy) and 2025 (trixie) inclusive unjustifiably rejects this configuration.

You may receive bug reports from over-zealous folks complaining about the use of the 1.0 source format. You should close such reports, with a reference to this article and to #1106402.

git-debrebase

Ensure that debian/source/format contains 3.0 (quilt).

Now you are ready to do a local test build.

Sort out the documentation and metadata

Edit README.source to at least mention dgit-maint-merge(7) or dgit-maint-debrebase(7), and to tell people not to try to edit or create anything in debian/patches/. Consider saying that uploads should be done via dgit or tag2upload.

Check that your Vcs-Git is correct in debian/control. Consider deleting or pruning debian/gbp.conf, since it isn’t used by dgit, tag2upload, or git-debrebase.

git merge

Add a note to debian/changelog about the git packaging change.

git-debrebase

git-debrebase new-upstream will have added a “new upstream version” stanza to debian/changelog. Edit that so that it instead describes the packaging change. (Don’t remove the +git from the upstream version number there!)

Configure Salsa Merge Requests

git-debrebase

In “Settings” / “Merge requests”, change “Squash commits when merging” to “Do not allow”.

rationale

Squashing could destroy your carefully-curated delta queue. It would also disrupt git-debrebase’s git branch structure.

Set up Salsa CI, and use it to block merges of bad changes

Caveat - the tradeoff

gitlab is a giant pile of enterprise crap. It is full of startling bugs, many of which reveal a fundamentally broken design. It is only barely Free Software in practice for Debian (in the sense that we are very reluctant to try to modify it). The constant-churn development approach and open-core business model are serious problems. It’s very slow (and resource-intensive). It can be depressingly unreliable. That Salsa works as well as it does is a testament to the dedication of the Debian Salsa team (and those who support them, including DSA).

However, I have found that despite these problems, Salsa CI is well worth the trouble. Yes, there are frustrating days when work is blocked because gitlab CI is broken and/or one has to keep mashing “Retry”. But, the upside is no longer having to remember to run tests, track which of my multiple dev branches tests have passed on, and so on. Automatic tests on Merge Requests are a great way of reducing maintainer review burden for external contributions, and helping uphold quality norms within a team. They’re a great boon for the lazy solo programmer.

The bottom line is that I absolutely love it when the computer thoroughly checks my work. This is tremendously freeing, precisely at the point when one most needs it — deep in the code. If the price is to occasionally be blocked by a confused (or broken) computer, so be it.

Setup procedure

Create debian/salsa-ci.yml containing

include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

In your Salsa repository, under “Settings” / “CI/CD”, expand “General Pipelines” and set “CI/CD configuration file” to debian/salsa-ci.yml.

rationale

Your project may have an upstream CI config in .gitlab-ci.yml. But you probably want to run the Debian Salsa CI jobs.

You can add various extra configuration to debian/salsa-ci.yml to customise it. Consult the Salsa CI docs.

git-debrebase

Add to debian/salsa-ci.yml:

.git-debrebase-prepare: &git-debrebase-prepare
  # install the tools we'll need
  - apt-get update
  - apt-get --yes install git-debrebase git-debpush
  # git-debrebase needs git user setup
  - git config user.email "salsa-ci@invalid.invalid"
  - git config user.name "salsa-ci"
  # run git-debrebase make-patches
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/371
  - git-debrebase --force
  - git-debrebase make-patches
  # make an orig tarball using the upstream tag, not a gbp upstream/ tag
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/541
  - git-deborig

.build-definition: &build-definition
  extends: .build-definition-common
  before_script: *git-debrebase-prepare

build source:
  extends: .build-source-only
  before_script: *git-debrebase-prepare

variables:
  # disable shallow cloning of git repository. This is needed for git-debrebase
  GIT_DEPTH: 0

rationale

Unfortunately the Salsa CI pipeline currently lacks proper support for git-debrebase (salsa-ci#371) and has trouble directly using upstream git for orig tarballs (salsa-ci#541).

These runes were based on those in the Xen package. You should subscribe to the tickets #371 and #541 so that you can replace the clone-and-hack when proper support is merged.

Push this to salsa and make the CI pass.

If you configured the pipeline filename after your last push, you will need to explicitly start the first CI run. That’s in “Pipelines”: press “New pipeline” in the top right. The defaults will very probably be correct.

Block untested pushes, preventing regressions

In your project on Salsa, go into “Settings” / “Repository”. In the section “Branch rules”, use “Add branch rule”. Select the branch master. Set “Allowed to merge” to “Maintainers”. Set “Allowed to push and merge” to “No one”. Leave “Allow force push” disabled.

This means that the only way to land anything on your mainline is via a Merge Request. When you make a Merge Request, gitlab will offer “Set to auto-merge”. Use that.

gitlab won’t normally merge an MR unless CI passes, although you can override this on a per-MR basis if you need to.

(Sometimes, immediately after creating a merge request in gitlab, you will see a plain “Merge” button. This is a bug. Don’t press that. Reload the page so that “Set to auto-merge” appears.)

autopkgtests

Ideally, your package would have meaningful autopkgtests (DEP-8 tests). This makes Salsa CI more useful for you, and also helps detect and defend you against regressions in your dependencies.

The Debian CI docs are a good starting point. In-depth discussion of writing autopkgtests is beyond the scope of this article.
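As a starting point, a minimal superficial test can be declared entirely in debian/tests/control (the command below is a placeholder; a real test should exercise the installed package the way a user would):

```
# debian/tests/control (DEP-8); the Test-Command is a hypothetical example
Test-Command: mytool --version
Depends: @
Restrictions: superficial
```

Depends: @ installs all the binary packages built from the source package; Restrictions: superficial tells the infrastructure this test provides only shallow coverage.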

Day-to-day work

With this capable tooling, most tasks are much easier.

Making changes to the package

Make all changes via a Salsa Merge Request. So start by making a branch that will become the MR branch.

On your MR branch you can freely edit every file. This includes upstream files, and files in debian/.

For example, you can:

  • Make changes with your editor and commit them.
  • git cherry-pick an upstream commit.
  • git am a patch from a mailing list or from the Debian Bug System.
  • git revert an earlier commit, even an upstream one.

When you have a working state of things, tidy up your git branch:

git merge

Use git-rebase to squash/edit/combine/reorder commits.

git-debrebase

Use git-debrebase -i to squash/edit/combine/reorder commits. When you are happy, run git-debrebase conclude.

Do not edit debian/patches/. With git-debrebase, this is purely an output. Edit the upstream files directly instead. To reorganise/maintain the patch queue, use git-debrebase -i to edit the actual commits.

Push the MR branch (topic branch) to Salsa and make a Merge Request.

Set the MR to “auto-merge when all checks pass”. (Or, depending on your team’s policy, you could of course ask for an MR review.)

If CI fails, fix up the MR branch, squash/tidy it again, force push the MR branch, and once again set it to auto-merge.

Test build

An informal test build can be done like this:

apt-get build-dep .
dpkg-buildpackage -uc -b

Ideally this will leave git status clean, with no modified or un-ignored untracked files. If it shows untracked files, add them to .gitignore or debian/.gitignore as applicable.
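For instance, debhelper’s own working files can be covered by a debian/.gitignore along these lines (adjust to what your build actually leaves behind; the per-binary-package staging directories such as debian/yourpackage/ need their own entries):

```
# debian/.gitignore
/.debhelper/
/debhelper-build-stamp
/files
/tmp/
/*.substvars
/*.debhelper.log
```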

If it dirties the tree, consider trying to make it stop doing that. The easiest way is probably to build out-of-tree, if supported upstream. If this is too difficult, you can leave the messy build arrangements as they are, but you’ll need to be disciplined about always committing, using git clean and git reset, and so on.
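To illustrate that discipline, here is a throwaway demonstration in a scratch repository (file names invented). Note that git clean -fdx is destructive; on a real tree, always dry-run with -n first:

```shell
set -e
# scratch repository standing in for a package whose build dirties the tree
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.invalid
git config user.name demo
echo 'int main(void) { return 0; }' > main.c
git add main.c && git commit -qm 'import'

# simulate a messy in-tree build
echo 'object code' > main.o        # untracked build product
echo '/* munged */' >> main.c      # the build modified a tracked file

git clean -ndx                     # dry run: list what would be removed
git clean -fdx                     # remove untracked and ignored files
git checkout -- .                  # (or git reset --hard) restore tracked files
git status --short                 # prints nothing: the tree is clean again
```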

For formal binary builds, including for testing, use dgit sbuild as described below for uploading to NEW.

Uploading to Debian

Start an MR branch for the administrative changes for the release.

Document all the changes you’re going to release, in the debian/changelog.

git merge

gbp dch can help write the changelog for you:

dgit fetch sid
gbp dch --ignore-branch --since=dgit/dgit/sid --git-log=^upstream/main

rationale

--ignore-branch is needed because gbp dch wrongly thinks you ought to be running this on master, but of course you’re running it on your MR branch.

The --git-log=^upstream/main excludes all upstream commits from the listing used to generate the changelog. (I’m assuming you have an upstream remote and that you’re basing your work on their main branch.) If there was a new upstream version, you’ll usually want to write a single line about that, and perhaps summarise anything really important.

(For the first upload after switching to using tag2upload or dgit you need --since=debian/1.2.3-1, where 1.2.3-1 is your previous DEP-14 tag, because dgit/dgit/sid will be a dsc import, not your actual history.)

Change UNRELEASED to the target suite, and finalise the changelog. (Note that dch will insist that you at least save the file in your editor.)

dch -r
git commit -m 'Finalise for upload' debian/changelog

Make an MR of these administrative changes, and merge it. (Either set it to auto-merge and wait for CI, or if you’re in a hurry double-check that it really is just a changelog update so that you can be confident about telling Salsa to “Merge unverified changes”.)

Now you can perform the actual upload:

git checkout master
git pull --ff-only # bring the gitlab-made MR merge commit into your local tree

git merge

git-debpush

git-debrebase

git-debpush --quilt=linear

--quilt=linear is needed only the first time, but it is very important that first time, to tell the system the correct git branch layout.

Uploading a NEW package to Debian

If your package is NEW (completely new source, or has new binary packages) you can’t do a source-only upload. You have to build the source and binary packages locally, and upload those build artifacts.

Happily, given the same git branch you’d tag for tag2upload, and assuming you have sbuild installed and a suitable chroot, dgit can help take care of the build and upload for you:

Prepare the changelog update and merge it, as above. Then:

git-debrebase

Create the orig tarball and launder the git-debrebase branch:

git-deborig
git-debrebase quick

rationale

Source package format 3.0 (quilt), which is what I’m recommending here for use with git-debrebase, needs an orig tarball; it would also be needed for 1.0-with-diff.

Build the source and binary packages, locally:

dgit sbuild
dgit push-built

rationale

You don’t have to use dgit sbuild, but it is usually convenient to do so, because unlike sbuild, dgit understands git. Also it works around a gitignore-related defect in dpkg-source.

New upstream version

Find the new upstream version number and corresponding tag. (Let’s suppose it’s 1.2.4.) Check the provenance:

git verify-tag v1.2.4

rationale

Not all upstreams sign their git tags, sadly. Sometimes encouraging them to do so can help. You may need to use some other method(s) to check that you have the right git commit for the release.

git merge

Simply merge the new upstream version and update the changelog:

git merge v1.2.4
dch -v1.2.4-1 'New upstream release.'

git-debrebase

Rebase your delta queue onto the new upstream version:

git debrebase new-upstream 1.2.4

If there are conflicts between your Debian delta for 1.2.3, and the upstream changes in 1.2.4, this is when you need to resolve them, as part of git merge or git (deb)rebase.

After you’ve completed the merge, test your package and make any further needed changes. When you have it working in a local branch, make a Merge Request, as above.

Sponsorship

git-based sponsorship is super easy! The sponsee can maintain their git branch on Salsa, and do all normal maintenance via gitlab operations.

When the time comes to upload, the sponsee notifies the sponsor that it’s time. The sponsor fetches and checks out the git branch from Salsa, does their checks, as they judge appropriate, and when satisfied runs git-debpush.

As part of the sponsor’s checks, they might want to see all changes since the last upload to Debian:

dgit fetch sid
git diff dgit/dgit/sid..HEAD

Or to see the Debian delta of the proposed upload:

git verify-tag v1.2.3
git diff v1.2.3..HEAD ':!debian'

git-debrebase

Or to show all the delta as a series of commits:

git log -p v1.2.3..HEAD ':!debian'

Don’t look at debian/patches/. It can be absent or out of date.

Incorporating an NMU

Fetch the NMU into your local git, and see what it contains:

dgit fetch sid
git diff master...dgit/dgit/sid

If the NMUer used dgit, then git log dgit/dgit/sid will show you the commits they made.

Normally the best thing to do is to simply merge the NMU, and then do any reverts or rework in followup commits:

git merge dgit/dgit/sid

git-debrebase

You should git-debrebase quick at this stage, to check that the merge went OK and the package still has a lineariseable delta queue.

Then make any followup changes that seem appropriate. Supposing your previous maintainer upload was 1.2.3-7, you can go back and see the NMU diff again with:

git diff debian/1.2.3-7...dgit/dgit/sid

git-debrebase

The actual changes made to upstream files will always show up as diff hunks to those files. diff commands will often also show you changes to debian/patches/. Normally it’s best to filter them out with git diff ... ':!debian/patches'

If you’d prefer to read the changes to the delta queue as an interdiff (diff of diffs), you can do something like

git checkout debian/1.2.3-7
git-debrebase --force make-patches
git diff HEAD...dgit/dgit/sid -- :debian/patches

to diff against a version with debian/patches/ up to date. (The NMU, in dgit/dgit/sid, will necessarily have the patches already up to date.)

DFSG filtering (handling non-free files)

Some upstreams ship non-free files of one kind or another. Often these are just in the tarballs, in which case basing your work on upstream git avoids the problem. But if the files are in upstream’s git trees, you need to filter them out.

This advice is not for (legally or otherwise) dangerous files. If your package contains files that may be illegal, or hazardous, you need much more serious measures. In this case, even pushing the upstream git history to any Debian service, including Salsa, must be avoided. If you suspect this situation you should seek advice, privately and as soon as possible, from dgit-owner@d.o and/or the DFSG team. Thankfully, legally dangerous files are very rare in upstream git repositories, for obvious reasons.

Our approach is to make a filtered git branch, based on the upstream history, with the troublesome files removed. We then treat that as the upstream for all of the rest of our work.

rationale

Yes, this will end up including the non-free files in the git history, on official Debian servers. That’s OK. What’s forbidden is non-free material in the Debianised git tree, or in the source packages.

Initial filtering

git checkout -b upstream-dfsg v1.2.3
git rm nonfree.exe
git commit -m "upstream version 1.2.3 DFSG-cleaned"
git tag -s -m "upstream version 1.2.3 DFSG-cleaned" v1.2.3+ds1
git push origin upstream-dfsg

And now, use 1.2.3+ds1, and the filtered branch upstream-dfsg, as the upstream version, instead of 1.2.3 and upstream/main. Follow the steps for Convert the git branch or New upstream version, as applicable, adding +ds1 into debian/changelog.

If you missed something and need to filter out more non-free files, re-use the same upstream-dfsg branch and bump the ds version, e.g. v1.2.3+ds2.

Subsequent upstream releases

git checkout upstream-dfsg
git merge v1.2.4
git rm additional-nonfree.exe # if any
git commit -m "upstream version 1.2.4 DFSG-cleaned"
git tag -s -m "upstream version 1.2.4 DFSG-cleaned" v1.2.4+ds1
git push origin upstream-dfsg

Removing files by pattern

If the files you need to remove keep changing, you could automate things with a small shell script debian/rm-nonfree containing appropriate git rm commands. If you use git rm -f it will succeed even if the git merge from real upstream has conflicts due to changes to non-free files.
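To make that concrete, here is a sketch of such a script, wrapped in a scratch repository so it can be run standalone (all file names and patterns invented):

```shell
set -e
# scratch repository standing in for a merge from upstream with non-free files
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.invalid
git config user.name demo
mkdir -p firmware debian
echo 'opaque binary blob' > firmware/nonfree.bin
echo 'real source' > main.c
git add . && git commit -qm 'merge from upstream'

# the sort of thing debian/rm-nonfree might contain:
cat > debian/rm-nonfree <<'EOF'
#!/bin/sh
set -e
# -f: proceed even if the merge left these files conflicted or modified
# --ignore-unmatch: succeed even when a pattern currently matches nothing
git rm -qf --ignore-unmatch 'firmware/*.bin' 'docs/nonfree-logo.png'
EOF
chmod +x debian/rm-nonfree

./debian/rm-nonfree
git commit -qm 'remove non-free files'
test ! -e firmware/nonfree.bin && echo 'non-free files removed'
```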

rationale

Ideally uscan, which has a way of representing DFSG filtering patterns in debian/watch, would be able to do this, but sadly the relevant functionality is entangled with uscan’s tarball generation.

Common issues

  • Tarball contents: If you are switching from upstream tarballs to upstream git, you may find that the git tree is significantly different.

    It may be missing files that your current build system relies on. If so, you definitely want to be using git, not the tarball. Those extra files in the tarball are intermediate built products, but in Debian we should be building from the real source! Fixing this may involve some work, though.

  • gitattributes:

For Reasons, the dgit and tag2upload system disregards and disables the use of .gitattributes to modify files as they are checked out.

Normally this doesn’t cause a problem so long as any orig tarballs are generated the same way (as they will be by tag2upload or git-deborig). But if the package or build system relies on them, you may need to institute some workarounds, or replicate the effect of the gitattributes as commits in git.

  • git submodules: git submodules are terrible and should never ever be used. But not everyone has got the message, so your upstream may be using them.

    If you’re lucky, the code in the submodule isn’t used, in which case you can git rm the submodule.
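Returning to the first point above (tarball contents): to see exactly which files a tarball adds over upstream git, compare the two file lists. A self-contained demonstration (package and file names invented):

```shell
set -e
cd "$(mktemp -d)"
# upstream git history with a release tag
git init -q foo && cd foo
git config user.email demo@example.invalid
git config user.name demo
echo 'real source' > main.c
git add . && git commit -qm 'release' && git tag v1.2.3
cd ..
# upstream's release tarball ships an extra generated file
cp -r foo foo-1.2.3 && rm -rf foo-1.2.3/.git
touch foo-1.2.3/configure                # intermediate built product
tar -czf foo_1.2.3.orig.tar.gz foo-1.2.3

# compare the two file lists
cd foo
git archive v1.2.3 | tar -t | sort > ../git-files
tar -tzf ../foo_1.2.3.orig.tar.gz \
  | sed -e 's|^foo-1\.2\.3/||' -e '/^$/d' | sort > ../tar-files
comm -13 ../git-files ../tar-files       # files only in the tarball
```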

Further reading

I’ve tried to cover the most common situations. But software is complicated and there are many exceptions that this article can’t cover without becoming much harder to read.

You may want to look at:

  • dgit workflow manpages: As part of the git transition project, we have written workflow manpages, which are more comprehensive than this article. They’re centered around use of dgit, but also discuss tag2upload where applicable.

    These cover a much wider range of possibilities, including (for example) choosing different source package formats, how to handle upstreams that publish only tarballs, etc. They are correspondingly much less opinionated.

    Look in dgit-maint-merge(7) and dgit-maint-debrebase(7). There is also dgit-maint-gbp(7) for those who want to keep using gbp pq and/or quilt with a patches-unapplied branch.

  • NMUs are very easy with dgit. (tag2upload is usually less suitable than dgit, for an NMU.)

    You can work with any package, in git, in a completely uniform way, regardless of maintainer git workflow. See dgit-nmu-simple(7).

  • Native packages (meaning packages maintained wholly within Debian) are much simpler. See dgit-maint-native(7).

  • tag2upload documentation: The tag2upload wiki page is a good starting point. There’s the git-debpush(1) manpage of course.

  • dgit reference documentation:

    There is a comprehensive command-line manual in dgit(1). Description of the dgit data model and Principles of Operation is in dgit(7), including coverage of out-of-course situations.

    dgit is a complex and powerful program, so this reference material can be overwhelming. We recommend starting with a guide like this one, or the dgit-…(7) workflow tutorials.

  • Design and implementation documentation for tag2upload is linked to from the wiki.

  • Debian’s git transition blog post from December.

    tag2upload and dgit are part of the git transition project, and aim to support a very wide variety of git workflows. tag2upload and dgit work well with existing git tooling, including git-buildpackage-based approaches.

    git-debrebase is conceptually separate from, and functionally independent of, tag2upload and dgit. It’s a git workflow and delta management tool, competing with gbp pq, manual use of quilt, git-dpm and so on.

git-debrebase
  • git-debrebase reference documentation:

    Of course there’s a comprehensive command-line manual in git-debrebase(1).

    git-debrebase is quick and easy to use, but it has a complex data model and sophisticated algorithms. This is documented in git-debrebase(5).




A brief history of barbed wire fence telephone networks [OSnews]

If you look at the table of contents for my book, Other Networks: A Radical Technology Sourcebook, you’ll see that entries on networks before/outside the internet are arranged first by underlying infrastructure and then chronologically. You’ll also notice that within the section on wired networks, there are two sub-sections: one for electrical wire and another for barbed wire. Even though the barbed wire section is quite short, it was one of the most fascinating to research and write about – mostly because the history of using barbed wire to communicate is surprisingly long and almost entirely undocumented, even though barbed wire fence phones in particular were an essential part of early- to mid-twentieth century rural life in many parts of the U.S. and Canada!

↫ Lori Emerson

I had no idea this used to be a thing, but it obviously makes a ton of sense. If you can have a conversation by stringing a few tin cans together, you can obviously do something similar across metal barbed wire. There’s something poetic about using one of mankind’s most divisive inventions to communicate, and thus bring people closer together.

11:35

What dreams are made of [RevK®'s ramblings]

I had a daft idea.

Connect a BBC Micro B monitor port to ESP32 and use quad SPI to clock RGB+line sync triggered on frame sync and then mapped to original resolution PNG image via WiFi/web page.

Technically tricky. Tiny bit of circuit for sync and levels. Small. Powered by the port. Could be in-line working to a monitor.

I decided not to, as you can just buy RGB to HDMI, and HDMI capture cards, and job done, to video/stream even.

And, I’d have to fix one of my beebs.

The fun is I spent all night going round in my head how possible it would be. How it needs checking where line syncs appear in data stream, and adjusting SPI clock. Working out which interlace frame is which. Trying to recall how the sync line works. Trying to figure out if any way to identify mode 7, maybe by number of scan lines, as that is only mode which is fundamentally different resolution. Wondering how quickly I could refresh and send PNGs to a browser.

Yes, this is stuff I dream about.

Of course, assuming I could fix a monitor as well, I could have ESP32 generate the output a BBC would have using the same methods.

I could probably implement the output stream logic and display modes for the graphics a BBC micro could normally do over TCP or serial even!

10:42

Better vs. done [Seth's Blog]

“There, it’s done.”

This is the production mindset and the rule of school. Pencils down. Hand it in.

The alternative is, “Sign me up for a commitment to better.”

Ship an update every day. Learn from what works, relentlessly improve what doesn’t.

The hard part about this path is persisting. Never-done projects pile up pretty quickly.

That’s precisely why they’re a competitive advantage.

Saturday, 14 February

23:07

10 Thoughts On “AI,” February 2026 Edition [Whatever]

Because it feels like a good time to do it, some current thoughts on “AI” and where it, we and I are about the thing, midway through February 2026. These are thoughts in no particular order. Some of them I’ve noted before, but will note again here mostly for convenience. Here we go:

1. I don’t and won’t use “AI” in the text of any of my published work. There are several reasons for this, including the fact that “AI”-generated text is not copyrightable and I don’t want any issues of ownership clouding my work, and the simple fact that my book contracts oblige me to write everything in those books by myself, without farming it out to either ghostwriters or “AI.” But mostly, it’s because I write better than “AI” can or ever will, and I can do it with far less energy draw. I don’t need to destroy a watershed to write a novel. I can write a novel with Coke Zero and snacks. Using “AI” in my writing would create more work for me, not less, and I really have lived my life with the idea of doing the least amount of work possible.

If you’re reading a John Scalzi book, it all came out of my brain, plain and simple. Better for you! Easier for me!

2. I’m not worried about “AI” replacing me as a novelist. Sure, someone can now prompt a novel-length work out of “AI” faster than I or any other human can write a book, and yes, people are doing just that, pumping into Kindle Unlimited and other such places a vast substrate of “AI” text slop generated faster than anyone could read it. Nearly all of it will sit there, unread, until the heat death of the universe.

Now, you might say that’s because why would anyone read something that no one actually took any effort to write, and that will be maybe about 5% of the reason. The other 95% of the reason, however, will be discoverability. Are the people pumping out the wide sea of “AI” text slop planning to make the spend for anyone to find that work? What are their marketing plans other than “toss it out, see who locates it by chance”? And if there is a marketing budget, if you can generate dozens or hundreds of “AI” text slop tomes in a year, how do you choose which to highlight? And will the purveyors of such text slop acknowledge that the work they’re promoting was written by no one?

(Answer: No. No they won’t).

I am not worried about being replaced as a novelist because I already exist as a successful author, and my publishers are contractually obliged to market my novels every time they come out. This will be the case for a while, since I have a long damn contract. Readers will know when my new books are out, and they will be able to find them in bookstores, be they physical or virtual. This is a huge advantage over any “AI” text slop that might be churned out. And while I don’t want to overstate the amount of publicity/marketing traditional publishers will do for their debut or remaining mid-list authors, they will do at least some, and that visibility is an advantage that “AI” text slop won’t have. Even indie authors, who must rely on themselves instead of a publicity department to get the word out about their work, have something “AI” text slop will never have: They actually fucking care about their own work, and want other people to see it.

I do understand it’s more than mildly depressing to think that a major market difference between “AI” text slop and stuff actual people wrote is marketing, but: Welcome to capitalism! It’s not the only difference, obviously. But it is a big one. And one that is likely to persist, because:

3. People in general are burning out on “AI.” Not just in creative stuff: Microsoft recently finally admitted that no one likes its attempt to shove its “AI” Copilot into absolutely everything, whether it needs to be there or not, and is making adjustments to its businesses to reflect that. “AI” as a consumer-facing entity rarely does what it does better than the programs and apps it is replacing (see: Google’s Gemini replacing Google Assistant), and sucks up far more energy and resources. Is your electric bill higher recently? Has the cost of a computer gone up because suddenly memory prices have doubled (or more)? You have “AI” to thank for that. It’s the solution to a problem that not only did no one actually have, but wasn’t a problem in the first place. There are other issues with “AI” larger than this — mostly that it’s a tool to capture capital at the expense of labor — but I’m going to leave those aside for now to focus on the public exhaustion and dissatisfaction with “AI” as a product category.

In this sort of environment, human-generated work has a competitive advantage, because people see it as more authentic and real (which it is, to the extent that “authentic” and “real” mean “a product of an actual human brain”), and more likely to have the ability to surprise and engage the people who encounter it. I don’t want to oversell this — humans are still as capable of creating lazy, uninspired junk as they ever were, and some people really do think of their entertainment as bulk purchases. Those vaguely sad people will be happy that “AI” gives them more, even if it’s of lesser quality. But I do think in general, when people are given a choice, they will generally prefer to give their time and money to the output of an actual human making an effort, than to the product of a belching drain on the planet’s resources whose use primarily benefits people who are already billionaires dozens of times over. Call me optimistic.

Certainly that’s the case with me:

4. I’m supporting human artists, including as they relate to my own work. I’ve noted before that I have it as a contractual point that my book covers, translations and copyediting have to be done by humans. This is again both a practical issue (re: copyrights, quality of work, etc) and a moral one, but also, look, I like that my work pays other humans, and I want that to continue. Also, in my personal life, I’m going to pay artists for stuff. When I buy art, I’m going to buy from people who created it, not generated it out of a prompt. I’m not going to knowingly post or promote anything that is not human-created. Just as I wish to be supported by others, I am going to support other artists. There is no downside to not promoting/paying for “AI” generated work, since there was no one who created it. There is an upside to promoting and paying humans. They need to eat and pay rent.

“But what if they use AI?” In the case of the people working on my own stuff, it’s understood that the final product, the stuff that goes into my book, is the result of their own efforts. As for everything else, well, I assume most artists are pretty much like me: using “AI” for their primary line of creativity is just introducing more work, not less. Also I’m going to trust other creators; if they tell me they’re not using “AI” in their finished work then I’m going to believe them in the absence of a compelling reason not to. I don’t particularly have the time or interest in being the “AI” police. Anyway, if they’re misrepresenting their work product, that eventually gets found out. Ask a plagiarist about that.

With all that said:

5. “AI” is Probably Sticking Around In Some Form. This is not an “‘AI’ Is Inevitable and Will Take Over the World” statement, since as noted above people are getting sick of it being aggressively shoved at them, and also there are indications that a) “this is the worst it will ever be” is not true of AI, as people actively note that recent versions of ChatGPT were worse to use than earlier versions, b) investors are getting to the point of wanting to see an actual return on their investments, which is the cue for the economic bubble around AI to pop. This is going to be just great for the economy. “AI,” as the current economic and cultural phenomenon, is likely to be heading for a fall.

Once all that drama is done and we’ve sorted through the damage, the backend of “AI” and its various capabilities will still be around, either relabeled or as is, just demoted from being the center of the tech universe and people making such a big deal about it, scaled down and hopefully more efficient. I understand that the “AI will probably persist” position is not a popular one in the creative circles in which I exist, and that people hope it vaporizes entirely, like NFTs and blockchains. I do have to admit I wouldn’t mind being wrong about this. But as a matter of capital investment and corporate integration, NFTs, etc are a blip compared to what’s been invested in “AI” overall, and how deep its use has sunk into modern capitalism (more on that in a bit).

Another reason I think “AI” is likely to stick around in some form:

6. “AI” is a marketing term, not a technical one, and encompasses different technologies. The version that the creative class gets (rightly) worked up about is generative “AI,” the most well-known versions of which were trained on vast databases of work, much of which was and is copyrighted and not compensated for. This is, however, only one subset of a larger group of computational systems which are also called “AI,” because it’s a sexy term that even non-nerds have heard of before, and far less confusing than, say, “neural networks” or such. Not all “AI” is as ethically compromised as large-scale generative “AI,” and a lot of it existed and was being used non-controversially before generative “AI” blew up as the wide-scale rights disaster it turned out to be.

It’s possible that “AI” as a term is going to be forever tainted as a moral hazard, disliked by the public and seen as a promotions drag by marketing departments. If and when that happens, a lot of things currently hustled under the “AI” umbrella will be quietly removed from it, either returning to previous, non-controversial labels or given new labels entirely. Lots of “AI” will still be around, just no one will call it that, and outside of obvious generative “AI” that presents rights issues, fewer people will care.

On the matter of generative “AI,” here’s a thought:

7. There were and are ethical ways to have trained generative “AI” but because they weren’t done, the entire field is suspect. Generative “AI” could easily have been trained solely on material in the public domain and/or on appropriately-licensed Creative Commons material, and an opt-in licensing gateway to acquire and pay for copyrighted work used in training, built and used jointly by the companies needing training data, could have happened. This was all a solvable problem! But OpenAI, Anthropic, et al decided to train first, ask forgiveness later, on the idea that it would be cheaper simply to do it first and to litigate later. I’m not entirely sure this will turn out to be true, but it is possible that at this late stage, some of the companies will go under before any settlements can be achieved, which will have the same effect.

There are companies who have chosen to train their generative models with compensation; I know of music software companies that make a point of showing how artists they worked with were both paid for creating samples and other material, and get paid royalties when work generated from those samples, etc is made by people using the software. I think that’s fine! As long as everyone involved is happy with the arrangement, no harm, and no foul. But absent that sort of clear and unambiguous declaration of provenance and compensation regarding training data, one has to assume that any generative “AI” has used stolen work. It’s so widely pervasive at this point that this has to be a foundational assumption.

And here is a complication:

8. The various processes lumped into “AI” are likely to be integrated into programs and applications that are in business and creative workflows. One, because they already were prior to “AI” being the widely-used rubric, and two, because these companies need to justify their investments somehow. Some of these systems and processes aren’t tainted by the issues of “generative AI” but many of them are, including some that weren’t previously. When I erase a blotch in an image with Photoshop, the process may or may not use generative “AI,” and when it does, it may or may not use Adobe’s Firefly model (which Adobe maintains, questionably, is trained only on material it has licensed).

Well, don’t use Photoshop, I hear you say. Which, okay, but I have some bad news for you: Nearly every photoediting suite at this point incorporates “AI” at some point in its workflow, so it’s six of one and half dozen of the other. And while I am a mere amateur when it comes to photos, lots of professional photographers use Adobe products in their workflow, either because they’ve been using it for years and don’t want to train on new software (which, again, probably has “AI” in its workflow), or they’re required to use it by their clients because it’s the “industry standard.” A program being the “industry standard” is one reason I use Microsoft Word, and now that program is riddled with “AI.” At a certain point, if you are using 21st century computer-based tools, you are using “AI” of some sort, whether you want to or not. Some of it you can turn off or opt out of. Some of it you can’t.

(Let’s not even talk about my Google Pixel Phone, which is now so entirely festooned with “AI” that it’s probably best to think of it as an “AI” computer with a phone app, than the other way around.)

This is why earlier in this piece, I talk about the “final product” being “AI”-free — because it’s almost impossible at this point to avoid “AI” in computer-based tools, even if one wants to. Also, given the fact that “AI” is a marketing rather than a technical term, what the definition of “AI” is, and what is an acceptable level of use, will change from one person to another. Is Word’s spellcheck “AI”? Is Photoshop’s Spot Healing brush tool? Is Logic Pro’s session drummer? At what point does a creative tool become inimical to creation?

(On a much larger industrial scale, this will be an extremely interesting question when it comes to animation, CGI and VFX. “AI” is already here in video games with DLSS, which upscales and adds frames to games; if similar tech isn’t already being used for inbetweening in animation, it’s probably not going to be long until it is.)

Again, I’m not interested in being, nor have the time to be, the “AI” police. I choose to focus on the final product and the human element in that, because that is honestly the only part of the process that I, and most people, can see. I’m certainly not going to penalize a creative person because Adobe or Microsoft or whomever incorporated “AI” into a tool they need to use in order to do their work. I would be living in a glass house if I threw that particular stone.

9. It’s all right to be informed about the state of the art when it comes to “AI.” Do I use “AI” in my text? No. Do I think it makes sense to have an understanding of where “AI” is at, to know how the companies who make it create a business case for it, and to keep tabs on how it’s actually being used in the real world? Yes. So I check out the latest iterations of ChatGPT/Claude/Gemini/Copilot, etc. (I typically steer clear of Grok, if only because I’m not on the former Twitter anymore), and the various services and capabilities they offer.

The landscape of “AI” is still changing rapidly, and if you’re still at the “lol ‘AI’ can’t draw hands” level of thinking about the tech, you’re putting yourself at a disadvantage, particularly if you’re a creative person. Know your enemy, or at least, know the tools your enemies are making. Again, I’m not worried about “AI” replacing me as a novelist. But it doesn’t have to be at that level of ability to wreak profound and even damaging changes to creative fields. We see that already.

One final, possibly heretical thought:

10. Some people are being made to use “AI” as a condition of their jobs. Maybe don’t give them too much shit for it. I know at least a couple of people who were recently hired for work, who were told they needed to be fluent in computer systems that had “AI” as part of their workflow. Did they want or need to use those systems to do the actual job they were hired for? Almost certainly not! Did that matter? Nope! Was it okay that their need to eat and pay rent outweighed their ethical annoyance/revulsion with “AI” and the fact it was adding more work, not less, onto their plate? I mean (waves at the world), you tell me. Personally speaking, I’m not the one to tell a friend that they and their kid and cat should live in a Toyota parked at a Wal-Mart rather than accept a corporate directive made by a mid-level manager with more jargon in their brain than good sense. I may be a softie.

Be that as it may, to the extent you can avoid “AI,” do so, especially if you have a creative job, where it’s almost always just going to get in your way. Your fans, the ones that exist and the ones you have yet to make, will appreciate that what they get from you is from you. That’s what people mostly want from art: Entertainment and connection. You will always be able to do that better than “AI.” There is no statistical model that can create what is uniquely you.

— JS

19:07

Steinar H. Gunderson: A286874(15) >= 42 [Planet Debian]

The following 42 15-bit values form a 2-disjunctive matrix (that is, no union of two values contains or equals a third value), or equivalently, a superimposed code:

000000000011111
000000011100011
000000101101100
000001010110100
000001101010001
000001110001010
000010011011000
000100100110010
000110010000110
000110100001001
000111001100000
001000110000101
001010000110001
001010101000010
001011000001100
001100001010100
001100010101000
001101000000011
010001000101001
010010001000101
010010110100000
010011000010010
010100001001010
010100010010001
010101100000100
011000000100110
011000100011000
011001011000000
100001001000110
100010000101010
100010100010100
100011010000001
100100000100101
100100111000000
100101000011000
101000001001001
101000010010010
101001100100000
110000001110000
110000010001100
110000100000011
111110000000000

This shows that A286874 a(15) >= 42.

If I had to make a guess, I'd say the equality holds, but I have nowhere near the computing resources to actually find the answer for sure. Stay tuned for news about a(14), though.
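
The disjunctness claim is mechanically checkable. Here is a small Python sketch (mine, not the author's search code) that takes the 42 values above and verifies that the bitwise OR of any two of them never contains or equals a third:

```python
from itertools import combinations

# The 42 codewords from the post, as 15-bit integers.
words = [int(w, 2) for w in """
000000000011111 000000011100011 000000101101100 000001010110100
000001101010001 000001110001010 000010011011000 000100100110010
000110010000110 000110100001001 000111001100000 001000110000101
001010000110001 001010101000010 001011000001100 001100001010100
001100010101000 001101000000011 010001000101001 010010001000101
010010110100000 010011000010010 010100001001010 010100010010001
010101100000100 011000000100110 011000100011000 011001011000000
100001001000110 100010000101010 100010100010100 100011010000001
100100000100101 100100111000000 100101000011000 101000001001001
101000010010010 101001100100000 110000001110000 110000010001100
110000100000011 111110000000000
""".split()]

def is_2_disjunct(vals):
    """True iff no OR of two codewords is a superset of any third."""
    for i, j in combinations(range(len(vals)), 2):
        union = vals[i] | vals[j]
        for k, v in enumerate(vals):
            # v is a subset of union exactly when union | v == union.
            if k not in (i, j) and union | v == union:
                return False
    return True

print(len(words), is_2_disjunct(words))
```

The brute-force triple check is fine at this size (a few tens of thousands of pairs); finding the exact value of a(15), as the post notes, is an entirely different order of search.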

18:49

Link [Scripting News]

I always objected to browsers trying to hide the feeds. I come from NYC and rode the subway to school every day in high school. The things you see! It's all out there for the looking and breathing. Lift the hood on a car. Look at all those wires and hoses, what do they do. I hope they don't kill me. Whoever made the decision at Microsoft or Firefox or wherever that feeds needed to be obfuscated, some advice -- be more respectful of your users. The web is the medium that had a View Source command. You're supposed to take a look. Don't forget the Back button if you don't like what you see. Something funny, if only life had a Back button.

Link [Scripting News]

Speaking of the Back button, that's the problem with tiny-little-text-box social networks. No links. So guess what: the Back button, one of the best inventions ever, isn't part of your reading and writing world. I guess this is like the LA streetcar conspiracy, where the car companies bought the streetcar lines and shut them down?

Link [Scripting News]

One more thing and then I gotta go. I think it's time for the AI's to compete with Wikipedia. It's filled with hallucinations. Make it a community thing, let the people be involved, but do a better job of presentation, and validate what's written, don't let these things become so territorial. We want the facts, not who has the best PR.

18:00

Link [Scripting News]

To my WordPress developer friends: how about making the RSS feed prettier and easier to read? Properly indenting it would make a big diff. I prefer encoding individual characters to CDATA. Those two things to start. It really does matter how readable this stuff is. For comparison, look at the RSS feed generated by Old School, the software that renders my blog.
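
Both suggestions are easy to demonstrate with the Python standard library. This is a generic sketch, not WordPress code, and the feed content is invented: an XML serializer entity-encodes special characters in text on its own (no CDATA wrapper needed), and a pretty-printer supplies the indentation:

```python
import xml.etree.ElementTree as ET
from xml.dom import minidom

# Build a minimal RSS skeleton. ElementTree escapes special
# characters in text (& -> &amp;, < -> &lt;) automatically,
# so markup-ish content survives without a CDATA section.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example feed"
item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Links & <i>notes</i>"

# Serialize compactly, then re-indent for human readers.
compact = ET.tostring(rss, encoding="unicode")
pretty = minidom.parseString(compact).toprettyxml(indent="  ")
print(pretty)
```

The printed feed has one nesting level per two spaces and entity-encoded text throughout, which is roughly what the post is asking feed generators to emit in the first place.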

Link [Scripting News]

It's all-star weekend in the NBA which I've never seen the point of. As if sport is anything but a simulation of what we were born to do -- compete and cooperate. My team is great, your team sucks. It's fun the same way slapstick for some weird reason is funny. All it takes to get a laugh is trip and fall on your face. It's funny just thinking about it. Doesn't seem very nice but there it is.

17:07

Vim 9.2 released [LWN.net]

Version 9.2 of the Vim text editor has been released. "Vim 9.2 brings significant enhancements to the Vim9 scripting language, improved diff mode, comprehensive completion features, and platform-specific improvements including experimental Wayland support." Also included is a new interactive tutor mode.

Upcoming Speaking Engagements [Schneier on Security]

This is a current list of where and when I am scheduled to speak:

  • I’m speaking at Ontario Tech University in Oshawa, Ontario, Canada, at 2 PM ET on Thursday, February 26, 2026.
  • I’m speaking at the Personal AI Summit in Los Angeles, California, USA, on Thursday, March 5, 2026.
  • I’m speaking at Tech Live: Cybersecurity in New York City, USA, on Wednesday, March 11, 2026.
  • I’m giving the Ross Anderson Lecture at the University of Cambridge’s Churchill College at 5:30 PM GMT on Thursday, March 19, 2026.
  • I’m speaking at RSAC 2026 in San Francisco, California, USA, on Wednesday, March 25, 2026.

The list is maintained on this page.

15:14

Bits from Debian: DebConf 26 Registration and Call for Proposals are open [Planet Debian]

Registration and the Call for Proposals for DebConf 26 are now open. The 27th edition of the Debian annual conference will be held from July 20th to July 25th, 2026, in Santa Fe, Argentina.

The conference days will be preceded by DebCamp, which will take place from July 13th to July 19th, 2026.

The registration form can be accessed on the DebConf 26 website. After creating an account, click "register" in the profile section.

As always, basic registration for DebConf is free of charge for attendees. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories to help cover the costs of organizing the conference and to support subsidizing other community members.

The last day to register with guaranteed swag is June 14th.

We also encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are also available. More details can be found on the bursary info page.

The last day to apply for a bursary is April 1st. Applicants should receive feedback on their bursary application by May 1st.

Call for proposals

The call for proposals for talks, discussions and other activities is also open. To submit a proposal you need to create an account on the website, and then use the "Submit Talk" button in the profile section.

The last day to submit and have your proposal be considered for the main conference schedule, with video coverage guaranteed, is April 1st.

Become a sponsor

DebConf 26 is also accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org or visit the DebConf 26 website.

See you in Santa Fe,

The DebConf 26 Team

09:42

Things that feel risky [Seth's Blog]

Often aren’t.

In fact, they might be the safest way forward.

03:42

A Friday Mlem For You [Whatever]

Saja is turning into a handsome devil. Smudge is unimpressed nevertheless.

— JS

01:14

Haiku further improves its touchpad support [OSnews]

January was a busy month for Haiku, with their monthly report listing a metric ton of smaller fixes, changes, and improvements. Perusing the list, a few things stand out to me, most notably continued work on improving Haiku’s touchpad support.

The remainder of samuelrp84’s patchset implementing new touchpad functionality was merged, including two-finger scrolling, edge motion, software button areas, and click finger support; and on the hardware side, driver support for Elantech “version 4” touchpads, with experimental code for versions 1, 2, and 3. (Version 2, at least, seems to be incomplete and had to be disabled for the time being.)

↫ Haiku’s January 2026 activity report

On a related note, the still-disabled I2C-HID saw a number of fixes in January, and the rtl8125 driver has been synced up with OpenBSD. I also like the changes to kernel_version, which now no longer returns some internal number like BeOS used to do, instead returning B_HAIKU_VERSION; the uname command was changed accordingly to use this new information. There’s some small POSIX compliance fixes, a bunch of work was done on unit tests, and a ton more.

00:28

Eddie Lin's District 2 Is Racially Diverse [The Stranger]

The former assistant attorney for the Seattle City Attorney’s Office easily won The Stranger's endorsement. His big message? Seattle needs more housing, and from all sectors: private, parastatal, social. But is he too soft on cops? by Charles Mudede

I enter Cal Anderson Park. It’s 10:15 a.m. The sky is bright blue with long and high clouds. The sun is low. And a seagull stands on top of the city’s best fountain. What’s on its mind? On the concrete rim that circles the fountain’s pooled water, someone wrote with a spray can: “Death to AmeriKKK!” Now that’s on my mind. US fascism. 

Unbeknownst to me, Councilmember Eddie Lin is also in the park, also near the fountain. In November 2025, he won District 2’s special election by nearly 40 points. On December 2, he was sworn in. Today, we are meeting at The Stranger’s office for a quick check-in. How is it going so far? Is he working on his promises? Is the job harder than he expected? That sort of thing. While talking on the phone about some community matter, Lin spots me. Does he also notice the contemplative seagull on the fountain or the anti-fascist graffiti?

At 10:35, we are in The Stranger’s conference room. It has a view of the rainbow crosswalk next to the Wildrose and, in the distance, two towers that will soon have the repurposed corpse of a Boeing 747 near the ground floors between them. I was introduced to Eddie Lin in this conference room in June of 2025 for the SECB endorsement meeting for the primaries. The former assistant attorney for the Seattle City Attorney’s Office easily won our endorsement. His big message? Seattle needs more housing, and from all sectors: private, parastatal, social. 

“I saw you in the park,” Lin says to me as he places his phone on the conference table. “Funny you should bring that up,” I say. “I was thinking about fascism in the US while crossing the park. And [you] being not only a person of color but the one who represents the most diverse district in Seattle, I want to begin by talking about ICE. When they come, they are coming for us. Is there really anything that can be done?” I also live in District 2.

Lin explains that he and Erika Evans, the new city attorney, are looking at the options closely and working with immigrant professionals and activists to prepare and protect all of the members of the community, many of whom are from Somalia, from what’s happening in Minneapolis. But, I say, ICE still just breaks the law. They break into homes without warrants. We saw this happen to an American citizen, ChongLy Thao. ICE just disregarded the law. Treated the Hmong American with no record like a criminal. Trump has made it loud and clear that this agency operates outside of conventional law. They can use excessive force and even act as if they can kill people with impunity. How can Seattle prepare for a federal organization that’s operating like a street gang? 

After a moment's thought, Lin puts on his lawyer hat and says it like it is: “There are a couple things for me. One: There are certain crimes committed [by ICE agents] that are not just federal crimes. They're also state crimes. Murder is a state crime that does not [in Washington] have a statute of limitation. And it can't be pardoned by the president, and so, you know, I think, these federal agents need to be worried about that. The president is trying to send this message that he will protect them and pardon them. He can't pardon a state crime. So, he's going to be out of office someday. [And] Republicans will not be in control forever. They can't protect these people forever. So, I think we need to make these agents understand this. Yes, the statute of limitations for excessive force is something like five years. Yes, I would like it to be longer. But that is the way I’m looking at it. You are not protected from state crimes.”

When I ask about how things have been since he took office, he brightens a little and explains that, to be honest, not much has happened. He was sworn in. He made the transition, and he is now settling in. Then I ask about his top priority: affordable housing. Any new developments in that direction? 

He is honest. Not much has happened in the immediate sense because housing is always a long-term commitment. “Even if we change zoning rules,” he says, “it’s still going to take years to see the results. The kind of housing crisis we are in now was caused many years ago. … But we still have to deal with the homeless crisis. That has to be done right now. … So, I support things like the tiny home villages or [other forms of] transitional housing. I'm supportive of [Mayor Katie Wilson’s] focus on that and want to do what I can to support her. Whether it's with resources, finding locations, or permitting, or land-use issues. But I think the whole city should be a part of transitional housing. Not just South Seattle.”

I bring up the fact that, though he’s considered a progressive, some think he is a touch soft on cops. He seems a little surprised by this, but it was mentioned in The Stranger’s 2025 primary endorsement. In response, Lin brings up that he, along with Alexis Mercedes Rinck and Rob Saka, voted against the police guild contract because it was woefully inadequate when it came to police accountability. Lin leaves it at that. Action counts more than words.

Friday, 13 February

23:42

Microsoft Store gets another CLI tool [OSnews]

We often lament Microsoft’s terrible stewardship of its Windows operating system, but that doesn’t mean that they never do anything right. In a blog post detailing changes and improvements coming to the Microsoft Store, the company announced something Windows users might actually like?

A new command-line interface for the Microsoft Store brings app discovery, installation and update management directly to your terminal. This enables developers and users with a new way to discover and install Store apps, without needing the GUI. The Store CLI is available only on devices where Microsoft Store is enabled.

↫ Giorgio Sardo at the Windows Blogs

Of course, this new command-line frontend to the Microsoft Store comes with commands to install, update, and search for applications in the store, but sadly, it doesn’t seem to come with an actual TUI for browsing and discovery, which is a shame. I sometimes find it difficult to use dnf to find applications, since it may not always be clear if the search terms you’re using are the right ones: which exact spelling packagers chose, which words they used in the description, and so on.

If package managers had a TUI to enable browsing for applications instead of merely searching for them, the process of using the command line to find and install applications would be much nicer. Arch has this third-party TUI called pacseek for its package manager, and it looks absolutely amazing. I’ve run into a rudimentary dnf TUI called dnfseek, but it’s definitely not as well-rounded as pacseek, and it also hasn’t seen any development since its initial release. I couldn’t find anything for apt, but there’s always aptitude, which uses ncurses and thus fulfills a similar role.

To really differentiate this new Microsoft Store command-line tool from winget, the company could’ve built a proper TUI, but instead it seems to just be winget with nicer formatted output that is limited to just the Microsoft Store. Nice, I guess.

22:07

I Saw U: Making Eye Contact at the ICE Protest, Winning a Prize on the Claw Machine, and Looking Like Chad Michael Murray with Kurt Cobain Hair [The Stranger]

Did you see someone? Say something! by Anonymous

Mullet 4 Mullet at the Ice Protest: Revolutionary Eye Contact

You: pink/purple curly mullet cutie carrying a long sign Me: gray/black mullet w boot sign. We locked eyes many a time, let’s go on a date? FUCK ICE!

Southcenter skate claw machine

You: short masc in a hat. Me: red jacket zombie shirt. You watched me win a prize for my friend’s birthday. I should have won you one too! Forgive me?

Benbow 80's Night

You're Lisa: charming smile, black shoulder-length hair. My name is Stanton: dark hair, wearing black with a blonde Debbie Harry t-shirt. Coffee?

Say She She @ Showbox, 1/31

You: sleepy eyes, strong nose, beanie. Me: glittery earrings, glasses, strappy top. Smiled at you again as I left with my (platonic) pal. Coffee?

yoga on 1/21..more than just the sauna making me sweat

You warned me about the faulty bathroom lock..I eavesdropped on your conversation about dating shows..let me be a contender? :)

SIFF Uptown 2025

You used to work in the ticket booth. You looked like Chad Michael Murray with Kurt Cobain hair. Where did you go!

Best Bangs at Macrina Bakery

You're the stunning tall beauty with bangs and a great smile at Macrina Bakery in Maple Leaf. We talked about movies. I'd love to continue the convo!

Goth Cutie outside of Corner Pocket

We talked for a little bit about music, but I was too nervous to ask about your number. Give me a chance to make up for my slip.

Is it a match? Leave a comment here or on our Instagram post to connect!

Did you see someone? Say something! Submit your own I Saw U message here and maybe we'll include it in the next roundup!
