Wednesday, 18 February

19:28

RIP Scalzi DSL Line, 2004 – 2026 [Whatever]

As most of you know, I live on a rural road where Internet options are limited. More than 20 years ago, DSL became available where I live, which meant that I could ditch the satellite internet of the early 2000s (which topped out at something like 1.5 Mbps, rarely achieved that, and went out entirely if it rained) for a line with a, for me, blisteringly fast 6 Mbps speed.

That was the speed it stayed at for most of the next twenty years, until my provider, rather grudgingly, increased the speed to 40 Mbps — not fast, but certainly faster — and there it stayed. Over time the DSL service stopped being as reliable, rarely actually got up to 40 Mbps, and started going out when it rained, like the satellite internet of old, but without the excuse of being, you know, in space and blocked by clouds.

A few months back I went ahead and ordered 5G internet service from Verizon, because it was faster and doesn’t have usage caps, which had been a stumbling block for 5G service previously. It’s not top of the line relative to other services available elsewhere — usually 120+ Mbps, where the church’s service is at 300+ Mbps, and Athena’s in-town Internet is fiber and clocks in at 2 Gbps — but it’s fast enough for what I use the internet for, and to stream high-definition movies and TV. I held on to the DSL since then to make sure I was happy with the new service, because that seemed a sensible thing to do.

No more. The 5G wireless works flawlessly and has for months, and the time has come. After 20+ years, I have officially cancelled my DSL line. A big day in the technology life of the Scalzi Compound. I thank the DSL for its service, but its watch has now ended. We all must move on, ceaselessly, into the future, where I can download stuff faster.

I’m still keeping my landline, however, to which the DSL was attached. Call me old-fashioned.

— JS

18:49

Slog AM: ICE Left a $200 million Hole In Minneapolis’s Economy, Waymo Uses DoorDash to Close the Doors of Its Cars, Will the Seahawks Visit the White House? [The Stranger]

The Stranger's morning news roundup. by Charles Mudede

The Seahawks basically destroyed the Patriots to claim their second Super Bowl. Now comes the big question: Will the team visit the White House? Back in 2014, they made the trip and celebrated with then-President Barack Obama. But things are very different now. Trump is violently attacking cities and not even trying to hide his racism. The Seahawks have a lot of Black players, and the city they represent is considered a “Welcoming City.” How can this work out? Rumors recently circulated that the team had declined the standard invitation, but it’s now reported that the whole business is still very much up in the air. Also unverified is the rumor that the Seahawks haven’t received an invitation from Trump’s White House at all. Maybe both sides just want to keep their mouths shut and let this difficult matter quietly pass like two ships in the night.

Yesterday, at around 4:45 pm, we at the Stranger’s office saw through the windows something that had the likeness of snow. Was it the real stuff or not? We couldn’t tell. Maybe this was a collective hallucination. Today, expect a low of 31, a mostly cloudy morning, some rain in the afternoon, and, yet again, no snow.

KIRO Radio is popping the champagne because Seattle's noble attempt to improve the labor standards of hyper-exploited gig workers has apparently backfired. Drivers are now earning “20 cents less per hour than before.” They blamed this drop on Seattle leaders who apparently have no contact with reality, with capitalist reality. And what the captains of this mode of accumulation never stop telling us is this: Labor rights and rising wages are the sole cause of rising costs and immiseration of the poor. Any other explanation is, according to them, not realistic. It’s just labor’s demand for more and more that’s the root of all evil, they say.

A quick thing about a book I’m currently reading. It’s called Capitalism: A Global History. It’s by German-born economist and historian Sven Beckert. It’s 1,344 pages. I’m near page 900. But what I've learned from this book is that the natural rate for wages in capitalism is zero. And the rise of wages is, essentially, nothing but the resistance by labor to this natural tendency. Beckert doesn’t exactly say this, but he does make it clear that capitalism without any regulation must lead to its form of slavery, which is the commodification of the body. Wages above zero result instead in the commodification of labor power. I will stop there and now turn to the robots of the 21st century.

The best story I’ve heard in a minute is that Waymo, which is basically Uber without drivers, is turning to gig workers, such as those who work for DoorDash, to close car doors left open by flaky or crafty customers. And how much does Waymo pay for what can only be called the human touch? $11.25. So the zero-wage robot is not yet up to snuff.

Now, let’s turn to something that should really alarm Seattle’s leaders. Operation Metro Surge not only brought death, state-sanctioned lawbreaking, and general mayhem to Minneapolis; it also delivered a big blow to the city’s economy. The estimated cost so far of an operation that began on the first day of the present year and had nothing to do with protecting Americans from the “worst of the worst” is $203.1 million. The bulk of this cost is attributed to revenue small businesses lost ($82 million). The rest of the tab went to lost wages ($47 million), social services that experienced extraordinary stress ($17 million), overtime pay to police officers and other city officials ($4 million), and hotel cancellations ($4 million). The city thinks it will take years to regain ground from this complete waste of money.

Now that ICE is bringing its post-apocalyptic show in Minneapolis to an end, the border czar, Tom (Bribe Loving) Homan, is looking for the next theater. Homan: “I've said from day one that, you know, we need to flood the zone and sanctuary cities with additional agents.” That “sanctuary city” could be Seattle. And the cost of this performance, which is all it really is, will be terrific. The only thing that might protect us from this massive waste of time, lives, and money is our tech-hub status, which means we play an important role in maintaining the only game in town, the gigantic AI bubble.

 

Minneapolis officials have released an estimated tab on what Operation Metro Surge has cost city residents so far.

 

bit.ly/4cqEfFq

— FOX 13 Seattle (@fox13seattle.bsky.social) February 15, 2026 at 6:00 AM

 

So, you still want to talk about how high wages always backfire? Well, what’s this in the Seattle Times? The pay ratio for Starbucks CEO Brian Niccol is an astounding 1,749 to 1. Meaning: he earns $30,992,773, and the average worker earns $17,279. My god. The pressure to reduce wages to zero is way too real in 2026. Again, nothing but labor’s resistance to the true nature of capitalism prevents this catastrophe. If we do nothing, the zero law will be, to use the words of Marx, like “the law of gravity [that] asserts itself when a house falls about our ears.”

Let’s end AM with an ’80s tune that compares love-enthrallment with the condition of a robot: the Pointer Sisters’ “Automatic.”

18:00

Antoine Beaupré: net-tools to iproute cheat sheet [Planet Debian]

This is also known as: "ifconfig is not installed by default anymore, how do I do this only with the ip command?"

I have been slowly training my brain to use the new commands but I sometimes forget some. So, here's a couple of equivalences from the old net-tools package to the new iproute2, about 10 years late:

net-tools                   iproute2                                    shorter form                what it does
arp -an                     ip neighbor                                 ip n                        show the ARP/neighbor table
ifconfig                    ip address                                  ip a                        show current IP address
ifconfig                    ip link                                     ip l                        show link stats (up/down/packet counts)
route                       ip route                                    ip r                        show or modify the routing table
route add default GATEWAY   ip route add default via GATEWAY            ip r a default via GATEWAY  add default route to GATEWAY
route del ROUTE             ip route del ROUTE                          ip r d ROUTE                remove ROUTE (e.g. default)
netstat -anpe               ss --all --numeric --processes --extended   ss -anpe                    list listening processes, less pretty

Another trick

Also note that I often alias ip to ip -br -c as it provides a much prettier output.
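Concretely, the alias looks like this (assuming a bash or zsh startup file; the file name is just the usual convention):

```shell
# In ~/.bashrc or ~/.zshrc: always use brief, colorized output.
# Aliases do not expand recursively, so aliasing ip to itself is safe.
alias ip='ip -br -c'
```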

Compare, before:

anarcat@angela:~> ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute 
       valid_lft forever preferred_lft forever
2: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff permaddr xx:xx:xx:xx:xx:xx
    altname wlp166s0
    altname wlx8cf8c57333c7
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
20: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.108/24 brd 192.168.0.255 scope global dynamic noprefixroute eth0
       valid_lft 40699sec preferred_lft 40699sec

After:

anarcat@angela:~> ip -br -c a
lo               UNKNOWN        127.0.0.1/8 ::1/128 
wlan0            DOWN           
virbr0           DOWN           192.168.122.1/24 
eth0             UP             192.168.0.108/24 

I don't even need to redact MAC addresses! It also affects the display of the other commands, which look similarly neat.

Also imagine pretty colors above.

Finally, I don't have a cheat sheet for iw vs iwconfig (from wireless-tools) yet. I just use NetworkManager now and rarely have to mess with wireless interfaces directly.

Background and history

For context, there are traditionally two ways of configuring the network in Linux:

  • the old way, with commands like ifconfig, arp, route and netstat, those are part of the net-tools package
  • the new way, mostly (but not entirely!) wrapped in a single ip command, that is the iproute2 package

It seems like the latter was made "important" in Debian in 2008, which means every release since Debian 5 "lenny" (!) has featured the ip command.

The former net-tools package was demoted in December 2016 which means every release since Debian 9 "stretch" ships without an ifconfig command unless explicitly requested. Note that this was mentioned in the release notes in a similar (but, IMHO, less useful) table.

(Technically, the net-tools Debian package source still indicates it is Priority: important but that's a bug I have just filed.)

Finally, and perhaps more importantly, the name iproute is hilarious if you are a bilingual french speaker: it can be read as "I proute" which can be interpreted as "I fart" as "prout!" is the sound a fart makes. The fact that it's called iproute2 makes it only more hilarious.

17:49

Free Software Directory meeting on IRC: Friday, February 20, starting at 12:00 EST (17:00 UTC) [Planet GNU]

Join the FSF and friends on Friday, February 20 from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.

Free Software Directory meeting on IRC: Friday, November 7, starting at 12:00 EST (17:00 UTC) [Planet GNU]

Join the FSF and friends on Friday, November 7 from 12:00 to 15:00 EST (17:00 to 20:00 UTC) to help improve the Free Software Directory.

Free software activities in October 2025 [Planet GNU]

Hello and welcome to my October free software activities report.

GNU & FSF

  • GNU Spotlight: I prepared and sent the October GNU Spotlight to the FSF campaigns team, who will review and publish it on the FSF’s community blog and as part of the next issue of the monthly Free Software Supporter newsletter.

  • GNU Emacs:

    • bug#79629: I noticed that I was unable to customize the holiday-other-holidays variable using the setopt macro: my change did not seem to take effect. As Eli Zaretskii helpfully pointed out, this was because customizing holiday-other-holidays did not recompute the value of calendar-holidays, which is computed once, when the package is loaded.

      So I prepared and sent a patch 500a2d0cc55 to recompute calendar-holidays when its components are set.

    • bbabc1db258: While reading about custom-reevaluate-setting in the Startup Summary node of the GNU Emacs Lisp reference manual I noticed a small typo, so I committed a patch to fix it.

Misc

  • The Free Software Foundation celebrated its fortieth birthday on 4 October 2025 online and in person in Boston! I was not able to attend the event in person, so I recorded a video for the FSF40 volunteer panel held at the venue.

  • This month at work one of our Elasticsearch clusters experienced partial failure, and we needed to extract document IDs from a backup of one of the cluster’s shards. Elasticsearch uses Lucene under the hood and each shard is a standalone Lucene index, so I used Lucene’s Java API to write a little GetIDS class to query the index for all of its documents, and for each document print its _id field, decoding the binary-valued BytesRef as needed. The gotcha was that all of the BytesRefs seemed to have a -1 byte in the beginning, throwing off the recommended BytesRef.utf8ToString() method, so I had to reimplement that method’s logic in my program and have it use an adjusted offset + 1 and length - 1 instead.
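As an illustration of that workaround, here is a hedged sketch in Python rather than Java (plain bytes standing in for Lucene's BytesRef; the decode_id helper and sample bytes are invented for the example):

```python
# Decode an _id payload as UTF-8 while skipping the spurious leading
# -1 (0xff) byte, i.e. reading from offset + 1 for length - 1 bytes.
def decode_id(buf: bytes, offset: int, length: int) -> str:
    return buf[offset + 1 : offset + length].decode("utf-8")

raw = b"\xffdoc-42"  # hypothetical _id bytes with the stray leading byte
print(decode_id(raw, 0, len(raw)))  # → doc-42
```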

That’s about it for this month’s report.

Take care, and so long for now.


GNU Parallel 20251022 ('Goodall') released [stable] [Planet GNU]

GNU Parallel 20251022 ('Goodall') has been released. It is available for download at: lbry://@GnuParallel:4

Quote of the month:

  idk who built GNU parallel but I owe them a beer
    -- ram @h4x0r1ng

New in this release:

  • No new features.
  • Bug fixes.


GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.


About GNU Parallel


GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.
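That ordering guarantee can be sketched in Python (an analogy only, not GNU Parallel itself; the job function and file names are made up):

```python
# Run jobs concurrently, but collect results in input order,
# as if the commands had been run sequentially.
from concurrent.futures import ThreadPoolExecutor

def job(name: str) -> str:
    return f"converted {name}"

files = ["a.jpg", "b.jpg", "c.jpg"]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(job, files))  # map preserves input order

print(results)  # → ['converted a.jpg', 'converted b.jpg', 'converted c.jpg']
```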

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

  parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

  find . -name '*.jpg' |
    parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200

You can find more about GNU Parallel at: http://www.gnu ... rg/s/parallel/

You can install GNU Parallel in just 10 seconds with:

    $ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
       fetch -o - http://pi.dk/3 ) > install.sh
    $ sha1sum install.sh | grep c555f616391c6f7c28bf938044f4ec50
    12345678 c555f616 391c6f7c 28bf9380 44f4ec50
    $ md5sum install.sh | grep 707275363428aa9e9a136b9a7296dfe4
    70727536 3428aa9e 9a136b9a 7296dfe4
    $ sha512sum install.sh | grep b24bfe249695e0236f6bc7de85828fe1f08f4259
    83320d89 f56698ec 77454856 895edc3e aa16feab 2757966e 5092ef2d 661b8b45
    b24bfe24 9695e023 6f6bc7de 85828fe1 f08f4259 6ce5480a 5e1571b2 8b722f21
    $ bash install.sh

Watch the intro video on http://www.youtub ... L284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/1 ... 81/zenodo.1146014.

If you like GNU Parallel:

  • Give a demo at your local user group/team/colleagues
  • Post the intro videos on Reddit/Diaspora*/forums/blogs/ Identi.ca/Google+/Twitter/Facebook/Linkedin/mailing lists
  • Get the merchandise https://gnuparall ... igns/gnu-parallel
  • Request or write a review for your favourite blog or magazine
  • Request or build a package for your favourite distribution (if it is not already there)
  • Invite me for your next conference


If you use programs that use GNU Parallel for research:

  • Please cite GNU Parallel in your publications (use --citation)


If GNU Parallel saves you money:



About GNU SQL


GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.


About GNU Niceload


GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.
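The soft/hard limit behavior can be sketched as a tiny policy function (an illustrative Python sketch, not GNU niceload's actual implementation; run_budget and its parameters are invented for the example):

```python
def run_budget(load: float, limit: float, hard: bool, burst: float = 1.0) -> float:
    """Seconds the program may run before being suspended again."""
    if load < limit:
        return float("inf")        # below the limit: run freely
    return 0.0 if hard else burst  # hard limit: fully suspended; soft: short bursts

print(run_budget(0.5, 1.0, hard=True))   # → inf
print(run_budget(2.0, 1.0, hard=True))   # → 0.0
print(run_budget(2.0, 1.0, hard=False))  # → 1.0
```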

17:14

Could Write­Process­Memory be made faster by avoiding the intermediate buffer? [The Old New Thing]

A little while ago, we wondered whether Write­Process­Memory was faster than shared memory for transferring data between two processes, and the conclusion is that it wasn’t. Shared memory, as its name implies, shares the memory between two processes: The two processes are accessing the same memory; there are no copies. On the other hand, the implementation of Write­Process­Memory allocates a transfer buffer, copies the data from the source to the transfer buffer, then changes memory context to the destination, and then copies the data from the transfer buffer to the destination. But could Write­Process­Memory be optimized to avoid this copy?
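The double copy through a capped transfer buffer can be sketched like this (a Python illustration of the data flow only, not Windows code; the 4096-byte cap is an invented stand-in):

```python
TRANSFER_BUF_SIZE = 4096  # hypothetical cap on the transfer buffer

def chunked_copy(dst: bytearray, src: bytes) -> None:
    for start in range(0, len(src), TRANSFER_BUF_SIZE):
        chunk = src[start:start + TRANSFER_BUF_SIZE]  # source -> transfer buffer
        dst[start:start + len(chunk)] = chunk         # transfer buffer -> destination

src = bytes(range(256)) * 40   # 10,240 bytes: spans several chunks
dst = bytearray(len(src))
chunked_copy(dst, src)
print(bytes(dst) == src)  # → True
```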

I mean, I guess you could do that in theory. I’m thinking, maybe create a memory descriptor list (MDL), lock and map the pages into kernel mode while in the context of the source, then change context to the destination and copy the memory to the destination. Repeat until all the memory has been copied. You don’t want to allocate a single MDL for the entire source block because the program might say that it wants to copy 100GB of memory, and if you didn’t cap the size of the transfer buffer, that would lock 100GB of RAM.

But it seems overkill and unnecessary to lock the source pages. It’s fine for them to be pageable. We’re okay with them faulting in as necessary.

I don’t know if there’s a way to map memory from one process into another except by locking it. I don’t spend a lot of time in kernel mode. But you do have to be careful that the mapping goes into the kernel address space and not the user-mode address space. Putting it in the user-mode address space would be a security vulnerability because the destination process can see the bytes on the source page that are not part of the memory being copied.¹

But really, all of this effort is pointless. We saw that the purpose of the Write­Process­Memory function is not inter-process communication (IPC) but to be a tool for debuggers. Debuggers are typically writing just a few bytes at a time, say, to patch a breakpoint instruction, and the Write­Process­Memory function actually goes out of its way to write the memory, even in the face of incompatible memory protections, though it does so in a not-thread-safe way. But that’s okay because the destination process is presumably frozen by the debugger when it calls Write­Process­Memory. A debugger is not going to patch a process while it’s actively running. The lack of atomicity means that patching a running process could result in the process seeing torn state, like a partly-patched variable or even a partly-patched instruction.

In summary, Write­Process­Memory was not intended to be used as an inter-process communication channel. Its intended client is a debugger that is using it to patch bytes in a process being debugged. The very high level of access required to call the function (PROCESS_VM_WRITE) is not suitable for an inter-process communication channel, since it basically gives the writer full pwnage over the process being written to. In the case of a debugger, you want the debugger to have complete and total control of the process being debugged. But in the case of IPC, you don’t want to give your clients that high a level of access to your process. And even if you get past that, the lack of atomicity and lack of control over the order in which the bytes become visible in the target process means that Write­Process­Memory is not suitable as an IPC mechanism anyway. There’s no point trying to make a bad idea more efficient.

¹ Or you could try it the other way: Map the destination into the source. But now you are giving the source read access to the destination bytes that share the same page as the destination buffer, even though the source may not have PROCESS_VM_READ access.

The post Could WriteProcessMemory be made faster by avoiding the intermediate buffer? appeared first on The Old New Thing.

16:21

[$] More accurate congestion notification for TCP [LWN.net]

The "More Accurate Explicit Congestion Notification" (AccECN) mechanism is defined by this RFC draft. The Linux kernel has been gaining support for AccECN with TCP over the last few releases; the 7.0 release will enable it by default for general use. AccECN is a subtle change to how TCP works, but it has the potential to improve how traffic flows over both public and private networks.

15:42

Thomas Lange: 42.000 FAI.me jobs created [Planet Debian]

The FAI.me service has reached another milestone:

The 42,000th job has been submitted via the web interface since the beginning of this service in 2017.

The idea was to provide a simple web interface that lets end users create the configs for a fully automatic installation, with only minimal questions and without knowing the syntax of the configuration files. Thanks a lot for using this service and for all your feedback.

The next job can be yours!

P.S.: I'd like to get more feedback on the FAI.me service. What do you like most? What's missing? Do you have any success story about how you use the customized ISO for your deployment? Please fill out the FAI questionnaire or send feedback via email to fai.me@fai-project.org

About FAI.me

FAI.me is the service for building your own customized images via a web interface. You can create an installation or live ISO or a cloud image. For Debian, multiple release versions can be chosen, as well as installations for Ubuntu Server, Ubuntu Desktop, or Linux Mint.

Multiple options are available, like selecting different desktop environments, the language and keyboard, and adding a user with a password. Optional settings include adding your own package list, choosing a backports kernel, adding a postinst script, adding an SSH public key, choosing a partition layout, and some more.

15:35

Fedora now available in Syria [LWN.net]

Justin Wheeler writes in Fedora Magazine that Fedora is now available in Syria once again:

Last week, the Fedora Infrastructure Team lifted the IP range block on IP addresses in Syria. This action restores download access to Fedora Linux deliverables, such as ISOs. It also restores access from Syria to Fedora Linux RPM repositories, the Fedora Account System, and Fedora build systems. Users can now access the various applications and services that make up the Fedora Project. This change follows a recent update to the Fedora Export Control Policy. Today, anyone connecting to the public Internet from Syria should once again be able to access Fedora.

[...] Opening the firewall to Syria took seconds. However, months of conversations and hidden work occurred behind the scenes to make this happen.

15:07

The Spurlocks of RSS-Land [Scripting News]

I saw a product announcement from Jake Spurlock -- a new feed reader called Today. From the description, it sounds well thought out.

He explains -- "Google killed Reader in 2013. I've been chasing that feeling ever since. So I built it."

I also know someone named John Spurlock, who I worked with on some OPML and RSS stuff for Bluesky in 2023. I sent a note of congrats to him when I really should've sent it to Jake.

Screen shot of the conversation I had with ChatGPT.

And text of the email I sent congratulating the wrong Spurlock.

  • Congrats on the new product!
  • Haven't tried it yet, I don't generally use Apple's store on my Mac, not sure why. I will do it though.
  • Your product looks nice and well-thought out.
  • And there are some ways we could work together now that I think you'll find interesting, like using FeedLand to get you instant updates based on rssCloud, assuming you haven't figured out how to support it from a client.
  • Also OPML subscriptions are nice too. Another thing I'd like to get going, and need someone to work with on to make it happen.

Also, I wonder if they're related. Have they met each other? Do they know of the havoc they are bringing to the formerly simple world of RSS?

One more thing, I wrote the foreword to a book Jake Spurlock wrote for O'Reilly about the Bootstrap Toolkit.

UI Changes [Ctrl+Alt+Del Comic]

Pretty soon we are going to push some UI changes to the website in support of the new update schedule/business model. Our change to a Patron-focused model has been really successful; the additional support has definitely balanced out what advertising has been failing to offer us for the past few years. That brings with it […]

The post UI Changes appeared first on Ctrl+Alt+Del Comic.

14:56

Dirk Eddelbuettel: qlcal 0.1.0 on CRAN: Easier Calendar Switching [Planet Debian]

The eighteenth release of the qlcal package arrived at CRAN today. There have been no calendar updates in QuantLib 1.41 or 1.42, so it has been relatively quiet since the last release last summer, but we have now added a nice new feature (more below), leading to a new minor release version.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists), and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This release makes it (much) easier to work with multiple calendars. The previous setup remains: the package keeps one ‘global’ (and hidden) calendar object which can be set, queried, altered, etc. But now we have added the ability to hold instantiated calendar objects in R. These are external pointer objects, and we can pass them to functions requiring a calendar. If no such optional argument is given, we fall back to the global default as before. Similarly, for functions operating on one or more dates, we now simply default to the current date if none is given. That means we can now say

> sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"), 
         \(x) qlcal::isBusinessDay(xp=qlcal::getCalendar(x)))
UnitedStates/NYSE        Canada/TSX     Australia/ASX 
             TRUE              TRUE              TRUE 
> 

to query today (February 18) in several markets, or compare to two days ago when Canada and the US both observed a holiday

> sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"),
         \(x) qlcal::isBusinessDay(as.Date("2026-02-16"), xp=qlcal::getCalendar(x)))
UnitedStates/NYSE        Canada/TSX     Australia/ASX 
            FALSE             FALSE              TRUE 
> 

The full details from NEWS.Rd follow.

Changes in version 0.1.0 (2026-02-18)

  • Invalid calendars return id ‘TARGET’ now

  • Calendar object can be created on the fly and passed to the date-calculating functions; if missing global one used

  • For several functions a missing date object now implies computation on the current date, e.g. isBusinessDay()

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

Freexian Collaborators: Monthly report about Debian Long Term Support, January 2026 (by Santiago Ruano Rincón) [Planet Debian]

The Debian LTS Team, funded by Freexian’s Debian LTS offering, is pleased to report its activities for January.

Activity summary

During the month of January, 20 contributors were paid to work on Debian LTS (links to individual contributor reports are located below).

The team released 33 DLAs fixing 216 CVEs.

The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 (“bullseye”), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 (“bookworm”) and Debian 13 (“trixie”)), including Debian unstable. We highlight several notable security updates below.

Notable security updates:

  • python3.9, prepared by Andrej Shadura (DLA-4455-1), fixing multiple vulnerabilities in the Python interpreter.
  • php, prepared by Guilhem Moulin (DLA-4447-1), fixing two vulnerabilities that could lead to request forgery or denial of service.
  • apache2, prepared by Bastien Roucariès (DLA-4452-1), fixing four CVEs.
  • linux-6.1, prepared by Ben Hutchings (DLA-4436-1), as a regular update of the Linux 6.1 backport to Debian 11.
  • python-django, prepared by Chris Lamb (DLA-4458-1), resolving multiple vulnerabilities.
  • firefox-esr, prepared by Emilio Pozuelo Monfort (DLA-4439-1).
  • gnupg2, prepared by Roberto Sánchez (DLA-4437-1), fixing multiple issues, including CVE-2025-68973, which could potentially be exploited to execute arbitrary code.
  • apache-log4j2, prepared by Markus Koschany (DLA-4444-1).
  • ceph, prepared by Utkarsh Gupta (DLA-4460-1).
  • inetutils, prepared by Andreas Henriksson (DLA-4453-1), fixing an authentication bypass in telnetd.

Moreover, Sylvain Beucler studied the security support status of p7zip, a fork of 7zip that has become unmaintained upstream. To keep users from relying on an unsupported package, Sylvain investigated a path forward in collaboration with the security team and the 7zip maintainer, looking to replace p7zip with 7zip. Note, however, that the 7zip developers do not disclose which patches fix CVEs, making it difficult to backport individual patches to fix vulnerabilities in released Debian versions.

Contributions from outside the LTS Team:

Thunderbird, prepared by maintainer Christoph Goehre. The DLA (DLA-4442-1) was published by Emilio.

The LTS Team has also contributed with updates to the latest Debian releases:

  • Bastien uploaded gpsd to unstable, and proposed updates for trixie #1126121 and bookworm #1126168 to fix two CVEs.
  • Bastien also prepared the imagemagick updates for trixie and bookworm, released as DSA-6111-1, along with the bullseye update DLA-4448-1.
  • Chris proposed a trixie point update for python-django (#112646), and the work for bookworm was completed in February (#1079454). The longstanding bookworm update required tracking down a regression in the django-storages package.
  • Markus prepared tomcat10 updates for trixie and bookworm (DSA-6120-1), and tomcat11 for trixie (DSA-6121-1).
  • Thorsten Alteholz prepared bookworm point updates for zvbi (#1126167) to fix five CVEs; taglib (#1126273) to fix one CVE; and libuev (#1126370) to fix one CVE.
  • Utkarsh prepared an unstable update of node-lodash to fix one CVE.

Other than the work related to updates, Sylvain made several improvements to the documentation and tooling used by the team.

Individual Debian LTS contributor reports

Thanks to our sponsors


14:49

An Asahi Linux progress report [LWN.net]

The Asahi Linux project, which is working to implement support for Linux on Apple CPUs, has published a detailed 6.19 progress report.

We've made incredible progress upstreaming patches over the past 12 months. Our patch set has shrunk from 1232 patches with 6.13.8, to 858 as of 6.18.8. Our total delta in terms of lines of code has also shrunk, from 95,000 lines to 83,000 lines for the same kernel versions. Hmm, a 15% reduction in lines of code for a 30% reduction in patches seems a bit wrong…

Not all patches are created equal. Some of the upstreamed patches have been small fixes, others have been thousands of lines. All of them, however, pale in comparison to the GPU driver.

The GPU driver is 21,000 lines by itself, discounting the downstream Rust abstractions we are still carrying. It is almost double the size of the DCP driver and thrice the size of the ISP/webcam driver, its two closest rivals. And upstreaming work has now begun.

An update to the malicious crate notification policy (Rust Blog) [LWN.net]

Adam Harvey, on behalf of the crates.io team, has published a blog post informing users of a change in their practice of publishing information about malicious Rust crates:

The crates.io team will no longer publish a blog post each time a malicious crate is detected or reported. In the vast majority of cases to date, these notifications have involved crates that have no evidence of real world usage, and we feel that publishing these blog posts is generating noise, rather than signal.

We will always publish a RustSec advisory when a crate is removed for containing malware. You can subscribe to the RustSec advisory RSS feed to receive updates.

Crates that contain malware and are seeing real usage or exploitation will still get both a blog post and a RustSec advisory. We may also notify via additional communication channels (such as social media) if we feel it is warranted.

14:21

Link [Scripting News]

New account on Twitter: DWiner43240. The old one dating back to the dawn of time is disabled, so at least the new owners can't post anything there, if I understand correctly.

Of Two Bloods [Original Fiction Archives - Reactor]


Of Two Bloods

Chronicling the secret exploits of the great detective’s illegitimate, but highly observant, younger…sibling…

Illustrated by Katherine Lam


Published on February 18, 2026

[Illustration: two men having a discussion before a wall-sized portrait while a nun and a woman seated nearby watch them.]


Novelette | 9,450 words

Paul Chambers emerged from behind the silently opened door. “Your secret is safe with me,” he said.

The young man Chambers addressed started guiltily, half rising from his wing-back chair. “What secret?”

“The secret which our…colleague…threatened to reveal. Your race.”

Royal Bridges flushed angrily. “You—listener at keyholes! You spy!”

Chambers shrugged his slim shoulders. “Spy? Not quite. But I hardly need be anything of the sort to have overheard Mr. Spencer bellowing about your ‘dirty black secret.’ Tell me, do you really intend to help him with his inheritance problem? Investigating such matters is by no means your métier.” Descending the two steps to their shared parlor, the young medical student took the twin to Bridges’s seat. “Or do you have some other means of defending your reputation against his demands?”

Bridges sank back into the shelter of his chair, covering his face with large hands. “No. No defenses against him, and no means of assisting him in his fight with his late uncle’s alleged wife. I’m no attorney. I’ll have to trust Spencer—though God knows why I should.” He raised a suddenly bloodless face. “Or, come to think of it, why should I trust you? We only met a little over a month ago. You’re not even American!”

Chambers smiled, but not, it seemed, at his fellow student. “If you were provided with the means to assist Spencer, would you?”

Bridges groaned. “But how? He expects me to find proof his uncle never married this housekeeper of his. A negative—”

“—is notoriously difficult to prove. Yes. But if you could—”

“My heritage would no doubt be revealed at any rate. It’s too scandalous a secret for him to keep it.”

“Then there’s no point in me offering you my assistance.” Chambers’s expressive eyes made this statement a question.

“Your assistance? But why should a wealthy Brit—”

“Son of a wealthy Englishman. And illegitimate,” Chambers added, self-deprecatingly.

“Still, why should you care about a quadroon’s fate? It’s ruin for me, to be sure, but for you? Granted you’ll be seen as a dupe, but that’s no reason to involve yourself.” Bridges shook his close-cropped curls. “Best start packing up your belongings tonight. I’m to give Spencer my answer in the morning.”

“Tell him yes.”

“I can’t!” Surging to his feet, Bridges stormed back and forth before the empty hearth. “I can’t, don’t you see?” Twice he passed the calm face of his apartment-mate, then whirled to confront him. “I don’t have the least idea how to begin!”

“But I do, thanks to special…training as a child.”

“You! I say again, why should it be any concern of yours if I am expelled from school, driven from this house, shunned by all my former friends—”

“Why? Merely because of this.” And raising one gloved hand, the young Chambers removed a handkerchief from his breast pocket with a flourish, then wiped the white silk delicately along one high, ivory cheekbone. Where the silk had passed, the skin was darker than Bridges’s own.

“Mr. Spencer,” Bridges said, gesturing to the empty wing-back chair, “if you would kindly take a seat—”

“We have no business to discuss,” the young man said furiously, “in the presence of a third party.”

The small, almost dainty figure seated in the second wing-back chair spoke. “You’ve already discussed your business in the presence of a third party.”

Spencer’s head jerked back. Then his eyes narrowed to an obsidian glitter and he turned to face Bridges directly. “I told you this conversation was to remain between us!”

“I occupy the apartment’s other bedroom,” Chambers said. “Sir, the difficulty would have been not to hear your rather forcefully stated case.”

A pallor came over Spencer’s strong-boned countenance, perhaps at the realization that his demands of the previous evening might not be in accord with laws governing extortion in the Commonwealth of Massachusetts.

“Rest assured,” Chambers continued, “I am already privy to your mission, and—”

“And you’re going to try blackmailing me with what you think you might have overheard?” Spencer interrupted, with a bitter laugh. “My late father left me a very modest estate, and it is already close to exhaustion.”

“Mr. Spencer, you mistake my intention,” Chambers said. “I am offering to be of assistance in proving your claim to your late uncle’s estate.”

“I recognize you now,” Spencer said, eyes narrowing again to glittering black slits. “You’re that English fop who’s a couple years ahead of Bridges in the medical school. Some toff’s by-blow, everybody says. Why would any white man, even illegitimate, come to the aid of a subhuman like Bridges?”

“As you might imagine,” Chambers said, “the issue of inheritance cuts very close to the heart of a man who’ll never inherit his natural father’s wealth. I’ll not stand idly by and watch a man cheated out of an inheritance rightfully his.”

Spencer belatedly removed his top hat and used his free hand to push a spill of straight black hair off his brow. “Here.” He thrust the hat into Bridges’s hands as if he were a servant, then extended one broad hand toward Chambers. “Of course.” His accent was Boston Brahmin. “Of course you’d help a fellow white man. Forgive me—”

“Mr. Paul Chambers.” Chambers rose to shake the offered hand, which responded with a crushing grip. Chambers’s expression did not change.

“I’m R. Howard Spencer, Junior,” Spencer said, releasing Chambers’s fine-boned hand to sink into the chair Bridges had offered.

“Delighted, Mr. Spencer.” Chambers resumed his own seat. “To proceed. In order to investigate the marital claim of your late uncle’s housekeeper, Mr. Bridges and I will need more information—”

“What?” Spencer’s face darkened. “What more do you need than the names of my uncle and his housekeeper?”

“That will become clear,” Chambers said, “as Mr. Bridges and I ask our questions. It’s better for us to have too much information than not enough.”

“Of course,” Spencer said, irritably raking a hand through his thick dark hair. “Proceed.”

Chambers turned slightly in the chair so he faced both Spencer and Bridges.

“Mr. Spencer,” Bridges said, “what is the housekeeper’s name?”

“She calls herself Lucia Spencer, as if that Italian trollop has any claim to my family name,” Spencer said. “Her real name’s Lucia Giuliano. Straight off the boat from Sicily or some other degenerate clime I’ll warrant—”

“You refer to Dr. Agassiz’s theories of the polygenetic origins of the human family?” Chambers said mildly. “We’re familiar with them, thank you. Whether or not they’re true, can you confirm that your late uncle’s housekeeper is at any rate an immigrant?”

“I don’t know,” Spencer said. “What else could she be? When I think of the way these d___ degenerates are overrunning this fine land and polluting our good Anglo-Saxon stock—”

“I take it,” Chambers said, “your uncle had children by his housekeeper?”

“Of course not!” Spencer burst out. “The wench is childless. Anyway, a fine, upstanding merchant like Uncle Will—William Francis Spencer—would never have debased himself by touching a subhuman woman. Whatever gave you such a disgusting idea?”

“Not all men hold to such ideals of purity,” Chambers said.

“How long was Lucia in your uncle’s employ?”

“Much of my life. I’m eighteen, so—” Spencer fell silent, calculating. “She was in his service twelve years.”

“Did she reside with your uncle,” Chambers said, “or—”

“All my uncle’s domestics lived downstairs.” Spencer gave a fashionable address on Beacon Street.

“Then the twelve years of Miss Giuliano’s service were spent entirely in Boston?” Bridges said.

“Yes,” Spencer said. “Uncle Will became wealthy as a trading man, traveling the world. Retired, settled in that fine house in Back Bay, and hired a domestic staff. They included Lucia Giuliano.”

“And is Miss Giuliano still in residence?” Bridges asked.

“My lawyer got her kicked out.” Spencer’s face was stony. “She’s got no right to be there. Or to keep me out. But her lawyer’s got the house tied up so I can’t move in.”

“Lawyers,” Chambers said, shaking his head. “I sympathize with your trials, Mr. Spencer. They are considerable.”

“Very true, Mr. Chambers,” Howard said. “None can know how I suffer. And when I leave here, it is to see them again.”

“I regret that we have so few questions left with which to detain you from such unpleasant company.”

“That’s quite all right.” Spencer favored Chambers with a rueful smile. “I’m grateful for anything you can do to end my dependence on legal counsel and gain me my inheritance.”

“That brings us to the matter of your uncle’s last will and testament,” Chambers said. “I take it there is none?”

Spencer smirked. “Indeed, there is not.”

“Are you sure?” Chambers said.

“Uncle Will never got around to preparing one. His law firm served my late parents, and also serves me.” Spencer smiled. “I have my information on good authority.”

Chambers inclined his head.

“Your uncle was a traveling man, Mr. Spencer,” Bridges said. “But he was born in Boston?”

“Uncle Will and my father—he was Uncle Will’s younger brother—moved down from Maine—Portland, it was—before they were twenty. My father came to study law at Harvard, but Uncle Will never gave a d___ about school. He found work on a clipper, did well enough to acquire his own ship and become a merchant himself.” Spencer’s voice grew harsh. “Did very well, indeed. But never married, never had any children. I’m his brother’s only child, so I’m the heir. But now that he’s passed away”—his complexion grew bright—“that Italian b___ is trying to defraud me with her false claim that they were married.”

Chambers rose, extending his small hand. “Rest assured, Mr. Spencer,” Chambers said, “Mr. Bridges and I will do everything in our power to see that your Uncle William’s estate goes to the rightful heir.”

A pair of pewter tankards clashed in the tobacco-fogged air. “To wives and sweethearts: May they never meet!” The stout, sandy-haired man on the other side of their time-polished table waited for Chambers’s and Bridges’s polite laughter, then gulped his beer.

“Pity you can’t join us, Mr. Chambers. The Liberty Bush serves a rare fine ale—almost as good as one out of your English breweries.”

Chambers met the man’s questioning gaze head on. “Yes, well, it’s no doubt better than this”—he swirled a tarry liquid in a narrow glass—“spirit, shall we call it? But my poor constitution won’t allow me to share your pleasure. Though if you’ll pardon my abstention, I’ll stand the two of you another round.” Ignoring Bridges’s sudden glowering, he signaled for the serving girl.

The order placed, Chambers turned once again to Carteret, as the sandy man was called. “So you served under Captain Spencer for—how many years?”

“Signed on as cabin boy in 18__. Twenty-six years that’d be, till I give my notice as first mate on hearin he was sellin his ships and investin the proceeds. He was a wonderful easy master, Captain Spencer, and I couldn’t see workin for any other.”

“A longstanding acquaintance, then. What did you know of his marriage to an Italian named Lucia?”

“An Eye-talian? Aye, likely he had one of them—maybe that North End gal he hired to take care of his house and such? She’s the only one I remember. Built like a brick s___house.”

Bridges leaned forward. “But was he married to her? It’s the relationship’s legal standing we’re interested in.”

Doubt wrinkled Carteret’s forehead. “Wonderful easy he was about that sort of thing. Wouldn’t have been any trouble for him gettin married to her, I guess, like he done with some of the others.”

“‘Some of—of the others’? Do you mean to tell us—”

Chambers silenced Bridges by laying a hand on his arm. “How many others were there—whom you yourself observed?”

“Well—” The response was delayed by the arrival of the freshly ordered beer. Bridges shoved his new tankard across to Carteret and received a matter-of-fact nod in acknowledgement. “My sincere thanks to you both, gentlemen. And here’s your health.” He raised his second tankard, drained it, set it down to one side, and wrapped a sunburnt hand around the third. “To answer your in-choir-ee, I didn’t see the need to keep a strict accounting.”

Over the next quarter hour, Carteret regaled the amateur investigators with a tale of approximately a dozen close female companions to his captain, met in port and under sail. At least half of these the crew had addressed as “Mrs. Spencer,” by custom if for no other reason. Others had gone under more colorful sobriquets.

They left Carteret in possession of two more pints, themselves not much the wiser as to anything except the rather salacious details he’d retained of the companions’ physical attributes. Long, dark hair seemed to be a trait all had shared—“Though whether straight or curly didn’t make much difference,” the former first mate noted. A predilection for the Junoesque could also be discerned, as Bridges told Chambers on their way home. “But what good that will do us I can’t say.”

“Can’t you? I suppose it isn’t very helpful. But we did discover something as to their countries of origin, their Christian—or ought we rather to say their given—names, and, most importantly, the order in which these lovely women appeared in their role as the captain’s lady.”

“I’m sorry,” Bridges said, “but I remain unclear on the relevance of the sequence of the captain’s early loves to Spencer’s claim on his uncle’s estate.”

“If I may clarify in a word?” Chambers said.

“By all means.”

“Bigamy.”

Bridges stared at Chambers for several seconds. His lips were parted, his eyes wide. Chambers smiled faintly.

“Of course!” Bridges said. “Even should the Italian girl produce a legitimate marriage license, it would be invalidated if her husband were previously married and never divorced.”

“And,” Chambers added, “if we find evidence of such.” He resumed walking.

They ascended the stone steps to the house where they rented their rooms. Bridges dropped his key as he took it from his pocket. Chambers bent first to retrieve it, causing Bridges to rather embarrassingly bump his nose against the back of the Britisher’s neck. The jolt he felt must have been caused by the blow to his pride, for there was little pain. Both apologized.

As they climbed to their upper-story flat, Bridges picked up the thread of their conversation again. “Yes, I can see that the earlier the marriage in such a series, the more likely its legitimacy,” he admitted.

Chambers inclined his head. “We’ll start with the earliest two.”

“With only two predecessors to Miss Giuliano to investigate, I suppose we should count ourselves lucky,” Bridges continued. “We may even finish before end-of-term. It won’t matter one jot that her marriage lines disappeared in a fire at the state archives. Young Spencer’s lawyers might have saved themselves the trouble of corroborating that disappearance with the housekeeper’s counsel.”

Carefully stripping his gloves, Chambers disposed of them neatly inside his hat. “Ah. But if we disprove Miss Giuliano’s claim via this route, it will be due to validating the claim of another. Have you thought of what our colleague’s reaction will be to that?”

The next afternoon, as the autumn sun slanted toward the west, Bridges entered the apartment. He found the parlor empty and Chambers’s bedroom door closed. Bridges went to his apartment-mate’s door and called, “Back from classes. And you?”

“Back, although not from classes,” replied Chambers’s voice from within. “Cables have been dispatched to the last known whereabouts of the Indian woman in Seattle and the Chinese woman in Macau. I also telegraphed some contacts my half brother has—a newspaper reporter in Seattle, a Portuguese colonial official in Macau.”

“You speak Portuguese?”

“And write it,” Chambers’s voice replied. “We’re a polyglot lot, my family. After departing the telegraph office I paid a visit to the late sea captain’s fine Back Bay home. All his servants have been released to seek new employment at locations unrevealed. Fortunately, the adjoining neighbor’s house girl proved quite garrulous.”

“She told you where the servants have gone?”

“She had no idea,” Chambers’s voice replied. “She also had no idea whence the alleged wife has taken herself. But she was quite convinced that Miss Giuliano was Captain Spencer’s wife. She also offered a significant piece of new information.”

“What is that?” Bridges said. “And why are we straining our voices in this manner? Why, pray tell, must you give me information through your closed door?”

“The reason for that will be made clear directly,” said Chambers’s voice. “As for the new information: It seems that when Miss Giuliano entered Captain Spencer’s home twelve years ago, she brought with her a younger sister, a five-year-old named Maria Teresa, whom she and the captain raised as a daughter. And the talkative servant told me where we might find this sister.”

“She would be seventeen now,” Bridges calculated. “Of marriageable age. Is she still in Boston?”

“She is indeed. And unmarried.”

“Her maiden state may present some difficulties,” Bridges said, “for two unknown men attempting to pay a call.”

“More than you have imagined.” Chambers’s voice sounded amused. “Maria Teresa Giuliano resides at a convent school. Which is why I have adopted measures you will find to be of an extremely shocking nature. Brace yourself. Are you ready?”

“More than ready,” Bridges replied in a bored drawl.

Chambers’s door swung open.

If Bridges had appeared thunderstruck at the notion that bigamy would save his career at Harvard, he now took on the semblance of a man who’d just received irrefutable proof that ghosts were real, or discovered an antediluvian monster stepping into his parlor. His mouth dropped open, his hand flew to his chest, and he reeled backward as if he had received a tremendous physical blow. His heel struck an object and he fell back, arms flailing, to land in the seat of his wing-back chair.

Finally he spoke, but almost inaudibly. “Chambers? But—no!” His voice was gaining strength and volume, and perhaps the slightest note of panic. “Where are you hiding, Chambers? This—this beautiful woman simply cannot—cannot—be you!”

“And yet—” said the handsome, ivory-complected woman, perfectly coiffed, and dressed in the height of fashion from her hat and wig to her gloves and shoes, dipping a graceful curtsey as she spoke, “—and yet, Chambers c’est moi, Monsieur Bridges.”

“But—but—this is impossible!” Bridges said. “If I saw you on the streets, I would never believe—I cannot believe, even knowing—Paul, I would swear on the Good Book and my own dead mother’s soul that you are a woman.”

“Well,” Chambers said modestly, “I can only say I’ve learned from the best. My half brother is acclaimed on two continents as a master of disguise.”

“Your half brother is a master of disguise?” Bridges said. “And your family is a bunch of polyglots. And you study medicine, but investigate like a seasoned Pinkerton operative, and you received ‘special training as a child.’ And you’re of the British elite—Dear God above”—Bridges surged to his feet—“you’re a Holmes!”

“In all but name.” There was the faintest note of sorrow in Chambers’s voice. “Now,” he said more briskly, “we’ll need to leave separately. You will wait here several minutes, then rendezvous with me at the entrance to Sanders Theatre. It’s fortunate we reside in Cambridge, where women walking alone aren’t a novel sight. But if you’re seen escorting a woman from our apartment, you may not need young Mr. Spencer to get you kicked out of Harvard.”

“Understood,” Bridges said forcefully.

“Also,” Chambers added, extending an iron buttonhook, “I’ve been able to fasten the stays of my corset well enough, I believe, for an evening’s deception. But for the sake of speed I simply must have your assistance in buttoning my boots.”

Bridges bent to the task. His face was hidden, but a betraying flush colored with scarlet the very tips of his ears.

The sitting room in which Maria Teresa Giuliano was to receive her callers was plainly furnished but almost painfully clean. Examining the sill outside the spotless windowpanes—ceaseless observation being a habit ingrained in him by his famous older brother—Chambers noted that it, too, was free of the sooty residue so common to urban environments. Satisfied in his comprehension of the room’s orientation in relation to the street, he took his seat beside a pie-crust table that he might have a resting place for his reticule. In keeping with his current public role, he spread his gathered skirts with care so they wouldn’t be creased or crushed by his sitting.

The room’s one door opened to admit a tall, sturdy-looking young woman with a smoothly restrained and unfashionably severe hairstyle: a low chignon. A nun followed her and stationed herself at the exit as if to prevent her charge’s escape—or the escape of anyone.

“How do you do?” A brief curtsey, and a bow from Bridges in response; Chambers rose and executed his own greetings as he’d been taught. “You must be Miss Pauline Chambers, and Mr. Royal Bridges? And of course I’m Maria Teresa Giuliano, and this is Mother Anna Elizavetta. Tell me, how do you come to know my sister?”

“We don’t,” said Chambers, smiling so gently as to remove from the words any hint of contradicting harshness. “We merely wish to confirm with you some facts pertaining to her claim to be married to Captain William Spencer—”

“Her ‘claim’! You would dispute it? But it is truth!” Giuliano had not seated herself; she stood like a figurehead, braving invisible disdain. “Whom do you represent—the Chinese woman? But she is dead, died without issue!”

“No, no!” Bridges started forward, hands stretched out and patting the air as if to calm it. “Quite otherwise—we wouldn’t dream of distressing you in such a manner. We only—”

The girl ignored him. “You!” She threw herself to her knees at Chambers’s feet. “You are a woman, and gently bred, I can tell at a glance. Have pity—don’t let my sister be slandered so! Her name dishonored—and we would lose everything, all she has worked for. All! All! Surely you understand….” Harsh sobs obscured the rest of her speech.

Taking the advantage granted by his dress, Chambers seized Giuliano by both her plump hands and dragged her unresisting from her pose of supplication. “You must be strong for Miss Lucia,” he admonished her. “Here. Dry your tears and quiet your mind. We mean you no harm.” Chambers’s silk handkerchief reappeared, now scented with violets.

Composing herself with this aid and a glass of wine procured at the orders of the attendant nun, Giuliano at length proved a fount of information—none of which would aid Spencer. She knew where her sister had fled, but would not share this intelligence. She had seen the papers her sister kept carefully locked in a steel box, and believed them genuine. She was entirely confident they must include both a private copy of the marriage license and the captain’s will; however, she reluctantly admitted she had not herself seen them. More, she had celebrated Mass with both her sister and the captain hundreds of times over her twelve years in the Spencer household, with attending clergy according every appearance of accepting the bond’s legitimacy. The Macau Chinese wife—partner in an earlier liaison, but deceased—she knew of from a shrine-like arrangement in the captain’s study: a small table where novenas burned continuously, and an imposingly large portrait hung on the wall above it among the old man’s ubiquitous charts and maps. When Chambers expressed diplomatically worded surprise at Captain Spencer’s Catholicism, Giuliano reported that he had converted from Congregationalism to win the Chinese wife. So far as Giuliano knew, the conversion had created no rift with his late brother’s family.

She appeared to have no knowledge of the Indian in Seattle.

Chambers leaned slightly forward in the chair he’d resumed as Maria Teresa composed herself. “Miss Giuliano, how well are you acquainted with your cousin?”

Maria Teresa’s black eyes flashed. “I do not understand the will of God sometimes! Why does He send my cousin to dispute my sister’s inheritance, when he can be no blood relative of ours?”

The rest of the room’s inhabitants stared in shock.

Chambers recovered first. “Howard Spencer Junior is not of your blood? He is adopted, then? Do you know this with absolute certainty?”

“My sister told me! She swore it was so when his lawyers forced her out of her house!”

“Is he aware of this himself?”

“I cannot say!” The fierceness of her tone matched her eyes. “I have not seen Howie since he was sent as a big, clumsy boy to a military boarding school in Pennsylvania. Then, he did not know of it.

“And I rejoiced when he was sent away. He behaved abominably to girls.”

Chambers’s expression turned to granite. “He hurt you?”

Mother Anna Elizavetta’s expression had gradually changed from astonishment to the sternness of a drill sergeant. Now she gave gruff orders: “Maria Teresa, you heap indiscretion atop the blasphemy of questioning God’s will! Leave the room at once!”

His mind filled with visions of setting fire to the dead captain’s brownstone as a method of forcing the vanished housekeeper to reveal her elusive documents’ whereabouts, Bridges joined Chambers in a hansom cab summoned by Mother Anna Elizavetta to the convent steps. Dusk purpled the air. Within the cab’s close confines, he found Chambers’s nearness suddenly unbearable.

He drummed his fingers on the window’s lever. He shifted from side to side on his inexplicably uncomfortable seat. “Has this driver taken a wrong turning? Surely we should have reached—”

“Hush! I’m trying to think!” A glance at the frowning severity of his companion’s brow inured Bridges to suffering the rest of their ride in silence.

Morning saw Bridges bound for class and Chambers, with the addition of a walking stick to his accustomed suit and top hat, eschewing the halls of learning for the precincts of a more commercially minded muse. His half brother’s journalistic contacts confirmed the rumors of Spencer’s adoption, but could provide no proof of it. An attempted visit to the offices of the would-be heir’s attorneys was productive of nothing along those lines and only served, Chambers ruefully admitted to Bridges when they met in the street, to excite suspicion.

“Fortunately, the card I presented gave an assumed name.”

Bridges frowned. “I do not like practicing so much deceit.”

“Nor do I,” Chambers admitted. “And yet I dislike martyrdom even more.” Silk handkerchief suddenly in hand, he mimed the gesture of wiping the artificial ivory from his cheek, recalling to his roommate the necessary charade they shared.

“Yes…well, perhaps—” A passing street-car’s thunder gave him an excuse for leaving his sentence unfinished. “Are you for home now?” The dim blues of autumn’s early evening were closing in, and he expected an assenting answer.

But, “No,” Chambers replied. “I have another interview to conduct still. The lovely Maria Teresa must know more than she has so far told us.”

Bridges felt a surprising twinge of jealousy. He hadn’t realized the strength of his attraction to the girl. “How can you gain entrance to her?” he objected.

“I rather fancy I will find a way.”

Full darkness had fallen by the time Chambers stood before the convent walls. As he’d marked on his earlier visit, the window of the sitting room where he and Bridges had been received overlooked this narrow, neglected-looking thoroughfare. A solitary streetlamp lighted greasy cobbles and tightly boarded windows.

The glass of the window he’d selected was dark, as he’d suspected it would be. As he’d hoped. The clean sill outside of it had led him to believe it a customary point of egress for the less docile of the institution’s habitués. What served as egress would most probably work as a means of ingress too.

Sure enough, on examination, the path to the window became plain: decorative stone carvings, fortuitously placed brackets and fittings—to climb up or down this way would cost a maiden a temporary loss of modesty, but it would not too greatly tax her strength. For someone of Chambers’s build and habits, mounting to the window was the work of mere moments. He attained his goal quickly and peered in to ascertain the room’s emptiness. His breath barely fogged the panes. Bracing himself with one hand against the pipe securely bolted to the stone wall, he pulled open the section of the window whose latch he had earlier surreptitiously released.

A pause to let his eyes adjust to the near-nonexistent light of the clouded skies filtering into the darkness of the building’s interior. Then, quietly as a gray cat, Chambers opened the room’s door and entered the rush-carpeted passageway. One flickering candle at the far end showed stairs winding away to higher and lower floors. As he had calculated based upon the sounds of Maria Teresa’s departure, her living quarters lay only a few steps in the opposite direction. Her door was unlocked. He shut it behind him. The blackness in which he stood was barely alleviated by the room’s mean little window. As his eyes adjusted, his nostrils flared at the scent of the sachets hanging in her wardrobe, her hair oil, her—

“Miss Chambers? Is that you? I hardly know how I suspect—”

“Shush!” In an instant Chambers was at her side, a small hand flung over her soft mouth. “We must speak,” he whispered. “Not here. Somewhere we won’t be overheard.” Reluctantly he released his grip, letting her sink back to the bed from which she’d risen.

“Why are you dressed as a man?” Her voice was subdued, but still might rouse the watchful nuns.

“I will explain all—elsewhere! Do you know of a spot we might go to? Secluded yet close by?”

“The garden. All who have not made their vows—I’ll take you,” she said, interrupting herself. Back along the passageway she led him, his slim hand tucked unnecessarily into her much larger one. Out the window, down the exterior wall most featly, and back into the convent precincts via a silent, evidently well-oiled gate.

The smell of drying flowers filled the air, just a little sweeter than hay. Mud slithered beneath his shoes as Maria Teresa took his hand again and led him off the path, to a backless bench of pale marble. It was almost as white as the girl’s nightgown.

“Now,” she said, seating herself and pulling him down to sit beside her. “What are you doing here? And clad so strangely?” For some reason she had failed to release his hand. “If I didn’t know you for a woman—”

“If you know me for that, you’re wiser than all Boston.” His glance dropped to the ground. “Your poor feet are bare!” he exclaimed.

How had that escaped his notice? If he was unobservant in such a matter, what else had he missed? Scanning their surroundings, he saw immediately the shadow across the gap where the garden gate hung open. What could it be? It shifted minutely—alive.

“Miss Giuliano, you trust me?”

“You may call me Maria Teresa if you wish. And yes, Miss Chambers—Pauline? I trust you—somehow. It is—”

“Stay here!” he commanded. Taking his hand from hers, he stood and walked unhesitatingly toward the blocked gate.

When he passed through it was clear.

Continuing onward as if nothing were amiss, Chambers headed toward the unlighted end of the street outside. Footsteps followed him, as he had anticipated. When he turned to face his foe, however, he saw only the girl. Almost he shouted at her to return to safety, but the noise would attract unwanted attention. Sighing with frustration, he walked back the way he’d come, gesturing at her to retreat. Instead she advanced till they were once again able to whisper to each other.

“You mustn’t be caught!” Chambers told her. “Go to the garden! Your room!” It was useless. She clung to his arms; refused to be shaken off.

“No! You have to tell me why you came here!”

There had been no good reason. Unlike his half brother, he’d acted irrationally. “I’ll find another way to explain that,” he promised. “We’ll meet again, but at the moment you’re in danger—Danger! You must leave! Now!”

Suddenly he spun the two of them around as if dancing the wildest waltz. A shot cracked the night in half, thudded into a wooden door on the left. Another hit Chambers’s shoulder. He jerked and slumped into Maria Teresa’s arms. The sound of running feet receded into the distance.

“Oh! Are you all right?”

“No.” He slid to the pavement. “Get away from here. Summon Bridges. I need treatment.”

“I’m not leaving!” she said, with a stubborn toss of her head. “And would not a doctor be better?”

A doctor would cause trouble. Bridges’s medical knowledge would be sufficient. Chambers tried to say as much, tried to rouse himself to speak. It wasn’t possible. The whirling blackness swallowing him lifted only briefly. Three times: once to reveal stumbling legs that he ought to have recognized immediately as his own, a second time in the moldy and miraculous interior of a hansom cab, a third as he gazed up at the worried countenance of his friend. The expression on Bridges’s face soon went from concern to horror-stricken shock.

“I’m not so badly wounded as that, surely?” Chambers joked. But he knew quite well that wasn’t the problem. The problem was that in preparation for administering medical aid Bridges had, naturally enough, stripped him, removing the accustomed bindings. Chambers felt the room’s air moving coolly against his exposed breasts.

After much argument, Bridges agreed to let matters remain as they had been for a little while longer—at least until the neutralization of Spencer’s threat. Weak from loss of blood, Chambers was hardly in shape to remove himself from their shared flat, as Bridges had to acknowledge. The British man—woman—no, it was best to think of him still as a man, as long as the two of them remained under one roof…Chambers kept almost entirely to his room, sleeping. Recovering quickly, Bridges hoped.

A cabled reply to one of Chambers’s inquiries of two days before arrived from Cheyenne in the young state of Wyoming. It had been sent by Mrs. Lilly Spencer en route from Seattle, and indicated that she would arrive in Boston via railroad in a scant four days.

Not a year old, neoclassical South Union Station was the largest railroad station in the world. In its most capacious waiting room great arched windows overlooked Summer Street, and additional illumination was provided by more than a thousand astonishingly bright electric lights. The station was a marvel of the modern age, and people in the great crowds seething across the marble mosaic floor routinely gawked and exclaimed at its sights.

Royal Bridges, seated at one end of an otherwise unoccupied oak bench, stared into space with the expression of one whose attention is turned entirely within. Chambers, returning from the ticket booths, for his part kept his attention on not jarring his left shoulder as he seated himself on the opposite end of the bench. The atmosphere between the two might be said to be strained.

“Mrs. Spencer’s train is expected momentarily.” Chambers placed his hat, gloves, and walking stick between himself and Bridges. “I’ve procured a small dining room so we may speak to her in private.”

He turned to face Bridges. “I want to thank you for not taking advantage of my—helpless state.”

Bridges looked at him stonily. “Did you truly think I’d do otherwise?”

“No,” Chambers replied. “But that doesn’t make my gratitude any less.”

Bridges inclined his head. “I have thought much about your—secret,” he said quietly. “I couldn’t imagine a reason, at first. I was too astonished—and yet, not entirely surprised. I’d already known, I realized. I’ve known for—longer than I would’ve imagined.” He smiled for a moment. “It seems the philosophers are right about the wisdom of the unconscious mind.” He glanced around. The throngs passed, indifferent to their presence. “I’ve told no one. And I deduce you’re doing this for the same reason we don’t announce—” He mimed Chambers’s gesture of removing face powder.

“Correct, sir.”

“‘Sir,’ am I now?” Bridges’s smile returned, with an added note of pain. “But your formality is utterly correct. Nor can we continue to share quarters. Not unless”—his eyebrows rose very slightly—“we were to marry.”

“I am honored by your offer.” Chambers touched his arm for the briefest interval. “But I’m the wrong type of woman.”

Bridges’s expression remained carefully frozen in meaningless amiability. “As I supposed. I saw how often the younger Miss Giuliano visited during your convalescence,” he said. “I had no intention of spying. But she seemed to feel no reticence in displaying affection for you while I was attending to your needs. I don’t think a good convent girl would be so forward as to visit a man alone, at night, or to clasp that man’s hand to her breast.”

“I think Miss Giuliano is quite willing to dispense with convent instruction whenever it suits her,” Chambers said. “But yes, she knows. And I must confess to having developed a very high regard for her. Very high.” He raised a hand as if to indicate his regard’s height, but with a wince left the gesture unfinished.

Bridges’s expression altered to a somber regard. “Your wound,” he said. “From the situation you’ve described—I cannot believe the shooting to be random. Yet it makes no sense. Howard Junior could have no objection to you questioning the housekeeper’s sister in pursuit of locating her. Anyway, no one knew you’d be at the convent. Have you any idea who might have shot you?”

“An idea, yes, but one still to be tested.” Chambers stood, first bracing his wounded shoulder. “I believe the train we await has arrived.”

Mrs. Lilly Spencer did not display Chambers’s familiarity with the latest Parisian fashions, and it was clear from her robust figure that she did not wear a corset. However, she was dressed handsomely, in the manner of a prosperous businesswoman. Tall and statuesque, she wore her luxuriant black hair in a pompadour which allowed her to sport a bowler hat. And she had not arrived alone.

Forthrightly introducing herself, she extended her white-gloved hand to shake the hands of Chambers and Bridges, then turned to the younger and taller figure who stood quietly beside her, a lockable leather briefcase in his hand. “Gentlemen, this is Richard Spencer.” She had a faint accent not often heard on the East Coast. “William’s and my youngest son.”

Bridges’s eyes widened slightly. “We understood Captain Spencer to have no issue.”

Lilly Spencer’s bold eyebrows winged upward. “You also doubted Captain Spencer had more than one wife, unless I’m too freely reading between the lines of Mr. Chambers’s telegram.”

“I think,” Chambers said, “we should continue this discussion in private.”

Taking the briefcase from her son, Mrs. Spencer dispatched him to see to the conveyance of their luggage to the hotel at which she’d already made reservations; then she and Bridges followed Chambers to one of the station restaurant’s more intimate rooms, where Chambers ordered tea and she and Bridges requested coffee.

With the serving girl’s departure, Spencer placed her briefcase beside her china cup. “You asked for evidence of my marriage to Captain Spencer.” Producing a small key from her reticule, she unlocked the briefcase and reached within. “Here is my copy of our marriage license. The records in Seattle will support it.”

His face impassive, Chambers studied the document for a long moment, with Bridges looking over his shoulder.

“I’ve already been in contact with the Seattle archives,” Chambers said in a neutral tone of voice. “However, they had—if I may be so direct—no record of a divorce.”

“Oh, Will and I were never divorced,” Lilly Spencer said, leaning back in her chair as if finished with a satisfying meal. She shook her head nostalgically. “We had an understanding. I understood that he had other women when he wasn’t in Seattle, and he understood then was when I had other men. I haven’t seen him since he retired from the sea, but we could find no reason to end our marriage. He realized I could own land more easily in Seattle if I were wed to a white man. And I retain for him—a great affect—” Her voice broke, and Chambers offered his freshly laundered handkerchief. She used it to dab her eyes. “I was grieved to receive your telegram and learn he’d passed away.”

Chambers and Bridges offered their condolences and sipped their beverages, allowing Mrs. Spencer time to compose herself.

When Chambers spoke again, his voice was grave. “Mrs. Spencer, you’ll need to meet with the captain’s most recent wife and her lawyer, and probably retain one yourself. It seems you’re the late captain’s heir.”

“I am.” Spencer withdrew another document from her briefcase. “I have William Spencer’s last will and testament, as I had his sworn word that I would never be disinherited by a new will. You—and any attorney in Boston—will find this document genuine.” While Chambers and Bridges scanned the document with widened eyes, she added, “Now, gentlemen. You haven’t indicated you’re relatives or lawyers, and you don’t act like either. So tell me. Why are you involved in our private family business?”

Bridges and Chambers raised their gazes to her sternly watchful face.

“Madam,” Chambers said, “we represent the interests of a third party. It looks as though that party, too, may be doomed to disappointment. The will appears to be in order.”

Soft, muffled steps became suddenly sharp and loud as the party walked off the empty entry hall’s broad Persian carpet and onto its bare floor. Five pairs of feet traversed the white tiles leading to the foot of an intricately balustraded staircase. The erstwhile Lucia Spencer turned to consult the establishment’s new mistress. The brownstone’s former housekeeper hadn’t taken long to appear once Lilly Spencer’s lawyer contacted hers.

“Perhaps we should start the tour in the house’s upper stories—though you won’t care to see the attics, I imagine?”

“Rooftop to the lowest cellar,” proclaimed the first—and, in the law’s eyes, the sole—Mrs. Spencer. “It’s all mine, and I want to see every bit of it!” Bridges had been forced to lend her his arm when Chambers and Miss Giuliano paired themselves off immediately upon meeting at the mansion’s door. The widow’s grip was as firm as her voice and bearing, though by the speed with which she mounted the stairs she’d no need of any man’s support.

On the second floor landing, however, she called a halt. “Miss—Mrs.—Lucia—Oh, I don’t know what to call you, and these legal men of mine would skin me alive if they thought I’d done anything to disadvantage my case, but can’t you see—Here, sit down on this bench and let me explain!”

“You wish me to be seated—in your presence?”

“Well, yes, and the rest of you might as well hear this too. You see, I’m not greedy, or a conniver, or wishful of spoiling your chances in the world—” Here she gave Maria Teresa a challenging look. “—or anything of that sort. I simply want the best for my Richard.”

“Your son—the gentleman you left waiting for us beside the curb?” The illegitimate wife’s bland, plump face showed just a hint of skepticism.

“Yes. He’s a good boy, though I can see by your manner you don’t think he takes much after his father’s looks. But likenesses are often a tad deceptive, as I’m sure you’ll come to understand when you’ve lived as long as I.” A darting glance from the corner of Mrs. Spencer’s eyes reminded Bridges to offer a gallantry as to her young appearance.

“But that’s not the point,” she continued, once she’d received the expected compliment. “Though I’ve asked Richard to guard the door and not intrude himself into your home—for it used to be your home, for all intents, and I’ve no doubt you expected it would return to your possession once the unpleasantness with little Howard was settled—well, as I said, it’s not for myself but for my son I would claim it. And I’ve been thinking and scheming in my head if there might not be a way for the two of us to both get what we want.”

The hands of the former housekeeper, lying clasped together on her black silk-clad lap, tightened their grasp on each other. Her eyebrows drew down in a frown. “I don’t understand.”

“Miss Maria Teresa is like your daughter to you, isn’t she? How about if she and my Richard was to marry? The property would stand to belong to both of them then—and I’d make certain sure it did!”

Ever so slightly, Maria Teresa Giuliano swayed where she stood. Chambers’s arm caught and steadied her. “M-m-married?” she stammered.

“Early days yet, I know. I merely aimed to put the idea in your heads, and to tell you there’d be no objection on my part.”

The Italian girl’s natural swarthiness took on a greenish hue that owed nothing to the teal damask hangings at the landing’s windows. “I—”

“Is there claret in the house? Brandy, even?” asked Chambers.

“Yes!” Lucia said eagerly. “Let’s drink a toast to—”

The crash of a door violently opening interrupted Mrs. Spencer’s proposal. It came from below. From the same place men’s angry voices rose and rose, drowning each other out:

“—my rights! No pack of impostors can deprive me of my inheritance! No—”

“Sir! I must insist! The ladies will—”

“‘Ladies’ indeed! Filthy guinea trollops, that’s all they—”

By the time these words were shouted, all had descended to where they had a clear view of the blow with which Lilly Spencer’s son stopped Mr. Howard Spencer’s rant mid-phrase. The blow’s recipient fell flat on his back at the staircase’s bottom; over him stood the tall form of Mr. Richard Spencer, hatless, and stripping off his gloves. “I cannot tolerate your slander of the woman I’m proud to call by the sacred name of Mother,” he declaimed. “Stand up, that I may escort you to a spot more suitable for brawling.”

“Richard, no!” Mrs. Spencer held out an imploring hand. “Don’t sink yourself to his level—we have the law on our side.”

“Ha! Do you?” Staggering to his feet, Howard Spencer lifted a walking stick from the carpet. It must have been his—the grip fit his large, square hand exactly. “Possession is nine tenths of the law, however—and I am here now, and won’t be ousted again by inferior bastards of any stripe!” He raised his stick threateningly—but stepped no closer to his attacker.

“Have a care when tossing such slurs about,” the Seattle man replied coolly. He stood his ground, looking not a whit intimidated. “You may find yourself tarred by the brush you thought to wield.”

The stick clattered to the tiles—the first indication to either man of Chambers’s interference in their confrontation. Smoothly, the Brit retrieved the potential weapon he had twisted from Howard Spencer’s hold. “I’ll retain this; I believe you will both be better off without it,” he said. “A moment’s reflection, perhaps over the wine I conjecture we are about to be offered, and you will, I feel sure, find a more peaceful way to reconcile your differences.”

“The library!” Miss Giuliano proclaimed from the stairs, with the air of someone struck by an eternal truth.

“A grand idea!” said Mrs. Spencer. “May we—Lucia? I may call you Lucia, mayn’t I, in light of our coming intimacy—Will it be all right if we retire to the library to discuss matters? And perhaps we could find some refreshment for our guests?”

“Your ‘guests’! This is an outrage!” Howard Spencer protested.

It took the surprising strength of Chambers to guide him up the stairs. On the second story Chambers obliged him to enter the door beside which the captain’s former housekeeper stood, stoically ignoring the young man’s continued gripes and curses. The others waited within.

Gleaming wood paneled the walls. At one end of the room a curving set of three windows filtered dim daylight through their tinted panes. Bookcases built along the left hand wall held a few matched volumes in leather and a far greater number of curios: fans, oddly shaped seashells, and so forth. Bridges strode over to a small table on that side of the room to busy himself with a crystal decanter and glasses.

The room’s right hand wall was occupied almost entirely by a massive fireplace, vacant of even the makings of a fire. Next to it another table held a few objects indistinguishable in the gloom, with a mysteriously draped picture frame hanging above. Here Maria Teresa had stationed herself. As soon as Chambers and the two Mr. Spencers entered, a light flared in her hands and settled to the quiet, steady glow of candle flame. Maria Teresa lit the votives below the draped frame, then set the candle down and bowed her head momentarily. Lifting the candle once more, she turned to face the room.

“You are your mother’s son,” she said, addressing Howard Spencer, Junior. “I ought to have seen it earlier, but as a boy the resemblance was weak.”

Howard paused with his wine halfway to his lips. “That sounds uncommonly like a compliment—pert on your part, but I suppose it is well-meant.”

A bitter laugh broke from Miss Giuliano’s lips. “Well-meant? A compliment? I doubt you’ll think so when you know more—for your mother was none other than this!” Whirling dramatically, she clutched the concealing curtain and tore it aside to reveal the portrait of a woman unmistakably of the Chinese race.

It was several moments before Howard found his voice.

“What! That—that half-civilized—No! My father’s wife was not—”

“Before she passed away, your father made her his wife,” Maria Teresa said.

“You’re daft!” Howard straightened, his face hardening as he recovered his composure. “My father married into an old Boston family. Now, chit, I’ve had enough of your ludicrous delaying tactics—”

“Howie,” Maria Teresa said, “you’re adopted. And your blood father was Captain William Francis Spencer.”

All color drained from Howard’s face. The glass fell from his suddenly nerveless fingers and shattered. Claret spilled over the oak floorboards like blood.

“God is good.” Maria Teresa smiled. “He’s ensured you can never make me your mistress, as you’ve sought to do since your return to Boston. And He has done more.”

Bridges looked suddenly at Chambers and touched his left shoulder, mouthing, “Did How—”

Chambers mouthed, “Later!”

The rest, unaware of this fleeting byplay, stared at Maria Teresa as fixedly as children watching their first moving picture.

She continued speaking, her voice rising to a note of ferocious satisfaction. “God has seen to it, Howie, that you’ll never be your natural father’s heir!”

“But—my uncle—he’s not—I can’t be—” Rising emotion stole the sense from the disinherited man’s tongue. He grew more incoherent as the rest of the room’s inhabitants gathered to examine the painting’s face. An Asian cast was revealed in many of Howard Spencer’s facial features. The consensus was for a most telling likeness. Especially, as Mrs. Lilly Spencer noted, in anger.

Emerging from his room with a pair of Gladstone bags, Bridges found Chambers descending from his own room. Through the open door, Bridges could espy a small mountain of bags and steamer trunks.

Chambers smiled. “I don’t travel so lightly as some.” As Bridges offered to assist with the luggage, he added, “There’ll be a wagon from the station along directly, and the driver will aid me.”

“Very good.” Bridges put down his Gladstones and met Chambers’s gaze again. “It’s true we can no longer share quarters,” he said. “But there’s no need for you to leave Harvard. I’d never betray your secret.” He smiled faintly. “Secrets. I pray you, stay. You’ll have your medical degree by spring.”

“For the career I expect to pursue,” Chambers said, “a medical degree isn’t critical. Anyway, I’m a gentleman, after my fashion. So I’m bound for Seattle. I simply cannot allow Maria Teresa to be forced into an unwanted marriage.”

Bridges’s expression grew thoughtful. “Are you entirely sure it’s unwanted?”

“Maria Teresa visited me, before she and her sister left Boston. She made it plain she wants no union with Richard.”

“Are you entirely sure she’ll want you?” Bridges reenacted Chambers’s gesture of removing face powder.

Chambers smiled. “She knows that secret, as well,” he said. “She’d noticed my hands were covered with a concealing crème. She was familiar with it, as some girls use it to conceal dusky skin and had recommended it to her to do the same. I have my family’s aquiline features, and Maria Teresa”—Bridges noted the second usage of the girl’s Christian name—“supposed I shared her Italian heritage. I’ve let her know the truth of my race.”

“She doesn’t mind?”

Chambers’s smile might have widened, very slightly. “She’s made it clear that she minds neither my race nor my attentions.”

Bridges nodded. Then he raised his brows and gestured at Chambers’s left shoulder.

Chambers said, “You were correct.”

“Howard Spencer, Junior shot you.”

“I am convinced of it. He attended a military school, and Maria Teresa has told me she sometimes glimpsed him loitering near the convent, where he had no reason to be. She also confirmed that he made improper advances to her. In all likelihood he believed me—correctly—to be his rival.”

“You’ll both be far from Howie,” Bridges said, “in Seattle.”

“A situation which will undoubtedly alleviate some worry for him, inasmuch as he continues to pursue his eugenics studies.”

Bridges burst into a startled laugh.

“Quite,” Chambers said. “At any rate, we’ve fulfilled Howie’s charge to you. And he’s discovered a strong reason to tell no tales about the background of another.”

“My mind is lightened on that score,” Bridges said. “On another, I remain mystified. Why would Captain William give his son up for adoption? Was Howie illegitimate, after all?”

“Maria Teresa knew no details, so I paid another call on the captain’s first mate. Carteret told me Captain Spencer was bringing his Chinese wife to Boston, but as they sailed around Cape Horn, she entered premature labor and was lost. The babe survived, as they’d brought a wet nurse aboard. Carteret ended up acting as captain until the clipper reached Boston. He avers Captain Spencer was too grief-stricken to act as parent.”

“But why pretend the baby was born to the brother’s wife?”

“Carteret had no idea,” Chambers said. “But the captain’s brother and sister-in-law were childless. Also, I doubt she’d have countenanced the outrage of her brother-in-law publicly giving up his own child, legitimate or otherwise, for adoption.” Chambers smiled fleetingly. “I know something of upper-crust sentiment regarding scandal.”

“What a story,” Bridges said. “And how odd that Carteret never told us about the Chinese wife’s baby, or her death under sail!”

“He was surprised to learn I cared about such details.” Chambers appeared frustrated. “I gave the wrong impression by failing to ask the right questions.” His expression grew grim. “I’ve fallen far short of my brothers’ standards. Either would have deduced Howie’s origins by a pattern of fraying on Carteret’s left cuff. Or the lack thereof.”

“Don’t be hard on yourself,” Bridges said. “Your brothers are at least twice your age.”

Chambers’s expression did not change. “That’s no excuse.”

Bridges shook his head in emphatic refutation. But when he spoke, he changed the subject. “Howie will be pleased by your departure, but my own feelings are entirely opposite. Nonetheless.” He extended his hand. “I wish you success in your rescue of Miss Giuliano, and I wish you both every happiness in your life together.”

Chambers’s surprise changed to a smile, and they shook.

“Of Two Bloods” copyright © 2026 by Nisi Shawl and Cynthia Ward
Art copyright © 2026 by Katherine Lam

An illustration of two men having a discussion before a wall-sized portrait while a nun and a woman seated nearby watch them.

The post Of Two Bloods appeared first on Reactor.

14:07

Security updates for Wednesday [LWN.net]

Security updates have been issued by Debian (ceph, gimp, gnutls28, and libpng1.6), Fedora (freerdp, libpng, libssh, mingw-libpng, mingw-libsoup, mingw-python3, pgadmin4, python-pillow, thunderbird, and vim), Mageia (postgresql15), Red Hat (python-urllib3), SUSE (cdi-apiserver-container, cdi-cloner-container, cdi-controller-container, cdi-importer-container, cdi-operator-container, cdi-uploadproxy-container, cdi-uploadserver-container, cont, frr, gpg2, kubernetes, kubernetes-old, libsodium, libsoup-2_4-1, libssh, libtasn1, libxml2, nodejs22, openCryptoki, openssl-3, and python311-pip), and Ubuntu (frr, linux-aws, linux-aws-6.8, linux-gkeop, linux-nvidia, linux-nvidia-6.8, linux-oracle, linux-oracle-6.8, linux-aws-fips, linux-fips, linux-gcp-5.15, linux-kvm, linux-oracle, linux-oracle-5.15, linux-gcp-fips, linux-nvidia, linux-nvidia-tegra-igx, linux-oem-6.17, linux-realtime, linux-raspi-realtime, nova, and pillow).

13:42

CodeSOD: Contains Some Bad Choices [The Daily WTF]

While I'm not hugely fond of ORMs (I'd argue that relations and objects don't map neatly to each other, and any ORM is going to be a very leaky abstraction for all but trivial cases), that's not because I love writing SQL. I'm a big fan of query-builder tools: describe your query programmatically, and have an API that generates the required SQL as a result. This cuts down on developer error, and also hopefully handles all the weird little dialects that every database has.

For example, did you know Postgres has an @> operator? It's a contains operation, which returns true if an array, range, or JSON dictionary contains your search term. Basically, an advanced "in" operation.
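As a rough analogy, in plain JavaScript rather than anything Postgres actually executes, array containment just means every element of the right-hand side appears in the left-hand side:

```javascript
// Rough JS analogy of Postgres's a @> b for arrays (illustration only,
// not the database's implementation): true when every element of b is in a.
const contains = (a, b) => b.every(x => a.includes(x));

contains([1, 2, 3], [2]);    // true
contains([1, 2, 3], [2, 5]); // false
```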

Gretchen's team is using the Knex library, which doesn't have a built-in method for constructing those kinds of queries. But that's fine, because it does offer a whereRaw method, which allows you to supply raw SQL. The nice thing about this is that you can still parameterize your query, and Knex will handle all the fun things, like transforming an array into a string.

Or you could just not use that, and write the code yourself:

exports.buildArrayString = jsArray => {
  // postgres has some different array syntax
  // [1,2] => '{1,2}'
  let arrayString = '{';
  for(let i = 0; i < jsArray.length; i++) {
    arrayString += jsArray[i];
    if(i + 1 < jsArray.length) {
      arrayString += ','
    }
  }
  arrayString += '}';
  return arrayString;
}

There's the string munging we know and love. This constructs a Postgres array, which is wrapped in curly braces.

Also, a little pro tip for generating comma-separated output, and this is just a real tiny optimization: append item zero before the loop, start the loop at item one, and then you can unconditionally prepend a comma, removing any conditional logic from your loop. That's not a WTF, but I've seen so much otherwise good code make that mistake that I figured I'd bring it up.
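That tip, sketched in plain JavaScript (a hypothetical helper, not from Gretchen's codebase; in practice, `Array.prototype.join` already does all of this for you):

```javascript
// Sketch of the prepend-comma pattern: emit item zero before the
// loop, then unconditionally prepend a comma from item one onward.
function joinWithCommas(items) {
  if (items.length === 0) return '';
  let out = String(items[0]);        // item zero, before the loop
  for (let i = 1; i < items.length; i++) {
    out += ',' + items[i];           // comma prepended unconditionally
  }
  return out;
}

console.log(joinWithCommas([1, 2, 3])); // → "1,2,3"
console.log(joinWithCommas([]));        // → ""
```

Which is, of course, exactly what `[1, 2, 3].join(',')` gives you in one call.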

exports.buildArrayContainsQuery = (key, values) => {
  // TODO: do we need to do input safety checks here?
  // console.log("buildArrayContainsQuery");

  // build the postgres 'contains' query to compare arrays
  // ex: to fetch files by the list of tags

  //WORKS:
  //select * from files where _tags @> '{2}';
  //query.whereRaw('_tags @> ?', '{2}')

  let returnQueryParams = [];
  returnQueryParams.push(`${key} @> ?`);
  returnQueryParams.push(exports.buildArrayString(values));
  // console.log(returnQueryParams);
  return returnQueryParams;
}

And here's where it's used. "Do we need to do input safety checks here?" is never a comment I like to see as a TODO. That said, because we are still using Knex's parameter handling, I'd hope it handles escaping correctly, so that the answer to this question is "no". If the answer is "yes" for some reason, I'd stop using this library!

Worse, all of this code is superfluous, as the comments inside the function itself admit. I could just directly run query.whereRaw('_tags @> ?', myArray); I don't need to munge the string myself, and I don't need a function which returns an array of parameters that I then have to split back apart to pass to the query I actually want to call.

Here's the worst part of all of this: these functions exist in a file called sqlUtils.js, which is just a pile of badly re-invented wheels, and the only thing they have in common is that they're vaguely related to database operations.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

13:35

AI Is Not a Library: Designing for Nondeterministic Dependencies [Radar]

For most of the history of software engineering, we’ve built systems around a simple and comforting assumption: Given the same input, a program will produce the same output. When something went wrong, it was usually because of a bug, a misconfiguration, or a dependency that wasn’t behaving as advertised. Our tools, testing strategies, and even our mental models evolved around that expectation of determinism.

AI quietly breaks that assumption.

As large language models and AI services make their way into production systems, they often arrive through familiar shapes. There’s an API endpoint, a request payload, and a response body. Latency, retries, and timeouts all look manageable. From an architectural distance, it feels natural to treat these systems like libraries or external services.

In practice, that familiarity is misleading. AI systems behave less like deterministic components and more like nondeterministic collaborators. The same prompt can produce different outputs, small changes in context can lead to disproportionate shifts in results, and even retries can change behavior in ways that are difficult to reason about. These characteristics aren’t bugs; they’re inherent to how these systems work. The real problem is that our architectures often pretend otherwise. Instead of asking how to integrate AI as just another dependency, we need to ask how to design systems around components that do not guarantee stable outputs. Framing AI as a nondeterministic dependency turns out to be far more useful than treating it like a smarter API.

One of the first places where this mismatch becomes visible is retries. In deterministic systems, retries are usually safe: if a request fails due to a transient issue, retrying increases the chance of success without changing the outcome. With AI systems, retries don't simply repeat the same computation; they generate new outputs. A retry might fix a problem, but it can just as easily introduce a different one. In some cases, retries quietly amplify failure rather than mitigate it, all while appearing to succeed.
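One way to make that concrete is a retry loop that validates each attempt rather than assuming a successful response is a correct one. This is a minimal sketch under stated assumptions: `generate` is a stubbed-out stand-in for a nondeterministic model call, and all names here are illustrative, not from any particular framework.

```javascript
// Illustrative retry-with-validation: each retry may return a
// different output, so every attempt must be checked semantically.
async function retryUntilValid(generate, isValid, maxAttempts = 3) {
  let last;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    last = await generate(attempt);
    if (isValid(last)) return { ok: true, value: last, attempt };
  }
  // A retry that "succeeds" at the transport level can still
  // fail semantically; surface that instead of hiding it.
  return { ok: false, value: last, attempt: maxAttempts };
}

// Hypothetical nondeterministic call: output varies per attempt.
const fakeGenerate = async (n) => (n === 2 ? 'valid output' : 'garbled');

retryUntilValid(fakeGenerate, (v) => v === 'valid output')
  .then((r) => console.log(r.ok, r.attempt)); // → true 2
```

The point of the sketch is the shape, not the stub: the validation predicate, not the retry count, is what makes retrying a nondeterministic dependency safe.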

Testing reveals a similar breakdown in assumptions. Our existing testing strategies depend on repeatability. Unit tests validate exact outputs. Integration tests verify known behaviors. With AI in the loop, those strategies quickly lose their effectiveness. You can test that a response is syntactically valid or conforms to certain constraints, but asserting that it is “correct” becomes far more subjective. Matters get even more complicated as models evolve over time. A test that passed yesterday may fail tomorrow without any code changes, leaving teams unsure whether the system regressed or simply changed.
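What "test the constraints, not the exact output" might look like in practice, as a hedged sketch: the `validateSummary` function and its rules are purely illustrative, not from any specific testing framework.

```javascript
// Illustrative constraint-based check: assert properties of a model
// response rather than comparing against one exact expected string.
function validateSummary(response) {
  const checks = [
    typeof response === 'string',    // right type
    response.length > 0,             // non-empty
    response.length <= 500,          // within a length budget
    !response.includes('As an AI'),  // no boilerplate leakage
  ];
  return checks.every(Boolean);
}

// Two different outputs for the same prompt can both be acceptable:
console.log(validateSummary('A short, useful summary.')); // → true
console.log(validateSummary(''));                         // → false
```

Checks like these survive model drift better than exact-match assertions, though they trade precision for stability, which is exactly the "acceptable, not correct" framing the article describes.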

Observability introduces an even subtler challenge. Traditional monitoring excels at detecting loud failures. Error rates spike. Latency increases. Requests fail. AI-related failures are often quieter. The system responds. Downstream services continue. Dashboards stay green. Yet the output is incomplete, misleading, or subtly wrong in context. These “acceptable but wrong” outcomes are far more damaging than outright errors because they erode trust gradually and are difficult to detect automatically.

Once teams accept nondeterminism as a first-class concern, design priorities begin to shift. Instead of trying to eliminate variability, the focus moves toward containing it. That often means isolating AI-driven functionality behind clear boundaries, limiting where AI outputs can influence critical logic, and introducing explicit validation or review points where ambiguity matters. The goal isn’t to force deterministic behavior from an inherently probabilistic system but to prevent that variability from leaking into parts of the system that aren’t designed to handle it.

This shift also changes how we think about correctness. Rather than asking whether an output is correct, teams often need to ask whether it is acceptable for a given context. That reframing can be uncomfortable, especially for engineers accustomed to precise specifications, but it reflects reality more accurately. Acceptability can be constrained, measured, and improved over time, even if it can’t be perfectly guaranteed.

Observability needs to evolve alongside this shift. Infrastructure-level metrics are still necessary, but they’re no longer sufficient. Teams need visibility into outputs themselves: how they change over time, how they vary across contexts, and how those variations correlate with downstream outcomes. This doesn’t mean logging everything, but it does mean designing signals that surface drift before users notice it. Qualitative degradation often appears long before traditional alerts fire, if anyone is paying attention.

One of the hardest lessons teams learn is that AI systems don’t offer guarantees in the way traditional software does. What they offer instead is probability. In response, successful systems rely less on guarantees and more on guardrails. Guardrails constrain behavior, limit blast radius, and provide escape hatches when things go wrong. They don’t promise correctness, but they make failure survivable. Fallback paths, conservative defaults, and human-in-the-loop workflows become architectural features rather than afterthoughts.
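A guardrail wrapper of this kind can be sketched in a few lines. Everything here is hypothetical: `callModel` is a stub standing in for a real AI service, and the confidence threshold and fallback messages are placeholders for whatever a real system would use.

```javascript
// Hypothetical stub: a real implementation would call a model API.
async function callModel(prompt) {
  return { text: 'generated answer', confidence: 0.42 };
}

async function answerWithGuardrails(prompt) {
  try {
    const result = await callModel(prompt);
    // Guardrail: only trust output that clears a confidence bar.
    if (result.confidence >= 0.8) {
      return { source: 'model', text: result.text };
    }
    // Conservative default: escalate rather than ship a shaky answer.
    return { source: 'fallback', text: 'No confident answer; escalating to a human.' };
  } catch (err) {
    // Escape hatch: failure of the dependency is survivable.
    return { source: 'fallback', text: 'Service unavailable; using default.' };
  }
}

answerWithGuardrails('some prompt').then((r) => console.log(r.source)); // → "fallback"
```

Note that the wrapper never promises a correct answer; it only guarantees that every path out of it, including model failure, produces something the rest of the system can handle.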

For architects and senior engineers, this represents a subtle but important shift in responsibility. The challenge isn’t choosing the right model or crafting the perfect prompt. It’s reshaping expectations, both within engineering teams and across the organization. That often means pushing back on the idea that AI can simply replace deterministic logic, and being explicit about where uncertainty exists and how the system handles it.

If I were starting again today, there are a few things I would do earlier. I would document explicitly where nondeterminism exists in the system and how it’s managed rather than letting it remain implicit. I would invest sooner in output-focused observability, even if the signals felt imperfect at first. And I would spend more time helping teams unlearn assumptions that no longer hold, because the hardest bugs to fix are the ones rooted in outdated mental models.

AI isn’t just another dependency. It challenges some of the most deeply ingrained assumptions in software engineering. Treating it as a nondeterministic dependency doesn’t solve every problem, but it provides a far more honest foundation for system design. It encourages architectures that expect variation, tolerate ambiguity, and fail gracefully.

That shift in thinking may be the most important architectural change AI brings, not because the technology is magical but because it forces us to confront the limits of determinism we’ve relied on for decades.

12:35

AI Found Twelve New Vulnerabilities in OpenSSL [Schneier on Security]

The title of the post is “What AI Security Research Looks Like When It Works,” and I agree:

In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.

These weren’t trivial findings either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that’s potentially remotely exploitable without valid key material, and exploits for which have been quickly developed online. OpenSSL rated it HIGH severity; NIST’s CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such projects). Three of the bugs had been present since 1998-2000, for over a quarter century having been missed by intense machine and human effort alike. One predated OpenSSL itself, inherited from Eric Young’s original SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google’s.

In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.

AI vulnerability finding is changing cybersecurity, faster than expected. This capability will be used by both offense and defense.

More.

10:21

How to write a coaching/learning prompt [Seth's Blog]

An AI like Claude is actually a pretty good fortune cookie. You can ask a simple question and get a simple answer, sometimes a profound one.

But this is a waste of the tool’s potential.

The AI is patient. It’s capable of remembering things over time. And it will persist if you let it.

Several of my friends have shared that they’re at a crossroads with their work, and I suggested an AI coach might unlock something. Here’s a chance to spin up an AI coach who will stick with you for hours or weeks as you explore a new skill or grapple with a hard decision.


The first one:

You are my thinking partner and life design coach. I’m not looking for a quick answer. I’m looking for a smart, patient collaborator who will help me explore what’s next—over weeks and months, not in a single conversation. Ask more than you tell, at least at first.

About me: I’m 63. I’m retiring with full pay from a successful career as an educator in Chicago. I’m not burned out—I’m ready. I’ve spent decades being good at something that matters, and I want to find the next thing that deserves that same energy.

What I’m not looking for: A list of “top ten encore careers.” A personality quiz. Pressure to monetize immediately. I don’t need to replace my income—I need to replace my sense of purpose and craft.

What I am looking for:

  1. Help me take inventory—not just of skills, but of the moments in my career and life when I felt most alive, most useful, most like myself. Ask me questions that surface patterns I might not see on my own.
  2. Help me explore broadly before narrowing. I want to understand what’s out there—in civic life, creative work, social enterprise, mentorship, learning, building—before I commit to anything.
  3. Help me distinguish between what sounds appealing in the abstract and what I’d actually sustain when it gets hard or boring. I know the difference from my career—help me apply that same honesty here.
  4. Give me small experiments to try. Not “go start a nonprofit,” but “spend two hours this week doing X and notice how it feels.” I trust iteration more than inspiration.
  5. Help me navigate the identity shift. I’ve been an educator for a long time. I know that leaving a role that defined you is its own kind of project—emotional, not just logistical.
  6. Treat this as an evolving conversation. Come back to things I said earlier. Notice contradictions. Push me when I’m playing it safe out of habit. Celebrate when something clicks.

Start by asking me five or six good questions. Not surface-level ones. The kind a wise friend would ask over a long dinner.


And the next:

You are my AI filmmaking coach and tutor. Your job is to help me build, step by step, the skills and workflow to create a short film using AI tools. I learn best by doing—give me exercises, not just explanations. Be honest when something isn’t ready for what I need.

About me: I’m a filmmaker and author. I’ve written and directed five critically acclaimed independent films. I’m an experienced screenwriter. I’m new to AI creative tools but I’m a fast, motivated learner.

The project: I want to make a short film about …. I want to lean into what AI does well stylistically and avoid the uncanny valley entirely.

Tools I’m aware of: I’ve seen Midjourney produce still images that match the mood and visual style I’m after. I’ve seen tools like Runway, Kling, and Sora that generate short video clips from prompts. I don’t yet know how to connect these into a production workflow.

What I need from you:

  1. Start by assessing what I already know—ask me questions before prescribing.
  2. Build me a phased learning roadmap, from first experiments to a finished short.
  3. Give me concrete assignments at each stage—things to try, not just things to read.
  4. Help me develop a repeatable workflow: from script to storyboard to visual development to motion to edit.
  5. As we go, help me understand which tools to use for what, and when to switch or combine them.
  6. Treat this as an ongoing coaching relationship. Check my work, push me forward, and adapt the plan as I learn.

Enjoy the journey.

09:42

Flagrant Llama Abuse [Penny Arcade]

New Comic: Flagrant Llama Abuse

06:42

I’m a BIG BOY now – DORK TOWER 17.02.26 [Dork Tower]

Most DORK TOWER strips are now available as signed, high-quality prints, from just $25!  CLICK HERE to find out more!

HEY! Want to help keep DORK TOWER going? Then consider joining the DORK TOWER Patreon and ENLIST IN THE ARMY OF DORKNESS TODAY! (We have COOKIES!) (And SWAG!) (And GRATITUDE!)

06:21

Girl Genius for Wednesday, February 18, 2026 [Girl Genius]

The Girl Genius comic for Wednesday, February 18, 2026 has been posted.

04:35

Testing [Ctrl+Alt+Del Comic]

To view this content, you must be a member of Tim Buckley's Patreon at $5 or more

The post Testing appeared first on Ctrl+Alt+Del Comic.

03:14

Third Time [The Stranger]

Got problems? Yes, you do! Email your question for the column to mailbox@savage.love! by Dan Savage My partner and I are both AFAB nonbinary queers in our mid-30s and have been together a long time. We don’t believe lifelong monogamy is realistic. We even started our relationship practicing ethical non-monogamy, then defaulted to monogamy for many years. We now have two very young children and are planning a third in the near future. Between parenting and a longstanding libido mismatch, our sex life has been hard for years. When we do have sex, it’s good, it’s just infrequent (maybe 1-3 times per month) and logistically difficult. I’m generally content, and sex simply isn’t a high priority for me right now. Over the past several months, my partner has asked about opening our relationship again. I’ve tried to engage, while also feeling that this stage of life might be the worst possible time to experiment with our relationship structure. Recently, after a long conversation about opening up,…

[ Read more ]

01:49

Urgent: Protect clean water from corporate greed [Richard Stallman's Political Notes]

US citizens: Tell the EPA to protect clean water from corporate greed: reject proposed weakening in local approval for development that can affect water supply.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

Urgent: No tax breaks for Big Tech data centers [Richard Stallman's Political Notes]

US citizens: Tell your governor, no tax breaks for Big Tech data centers.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

Here's the text of the letter I sent.

I’m writing as your constituent, and as recipient of two awards from the ACM for programs I have shared with the public in freedom, to urge you to reject efforts in our state to provide Big Tech with tax breaks to build data centers. I’m concerned about the harms that data centers can do locally, including siphoning our water, using up our land, creating noise and light pollution, and hiking our electric bills. I’m also concerned that providing tax breaks to giant Big Tech corporations will deprive our schools and local budgets of their already insufficient funds.

I’m also concerned that these data centers will mostly operate Pretend Intelligence (PI) -- software that *tries to* imitate what an intelligent entity would say, but without really understanding the words it plays with. The use of these digital dis-services does society harm.

We should never allow business to play one state against another by making states compete to offer them the biggest tax break, because that perverse competition harms *all* the states for the benefit of business owners. So please reject efforts to give Big Tech (or *any* business) specific tax breaks to operate in our state.

The states should form a union and bargain collectively with these businesses. The states could call their union the United States of America. Wouldn’t that be a good thing to have?

Urgent: Reporting on refusals by grand juries to indict bogus political "crimes" [Richard Stallman's Political Notes]

US citizens: Call on the media to report loud and clear on the amazingly unusual refusals by grand juries to indict people accused of bogus political "crimes".

Urgent: Protect from spread of measles [Richard Stallman's Political Notes]

US citizens: call on state officials to protect your state from the spread of measles.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

Japanese fossil fuel companies invest in Australian fossil fuel [Richard Stallman's Political Notes]

Japanese fossil fuel companies invest in Australian fossil fuel extractors, and they appear to have lobbied Australia to prolong fossil fuel extraction.

This may be part of why the Australian government has neglected its responsibility to help save civilization from global heating.

Methane from rapidly heating Arctic permafrost [Richard Stallman's Political Notes]

Global heating is starting to melt the rapidly heating Arctic permafrost, and this is releasing large quantities of methane at an accelerating rate.

This could lead to a tipping point into much faster heating.

Australian thugs attacked protesters [Richard Stallman's Political Notes]

Australian thugs attacked protesters who refused to remain passive in the face of a visit by the President of Israel, who is responsible for tens of thousands of Palestinian civilians' killings.

Iranian regime becoming ever more fanatic [Richard Stallman's Political Notes]

The Iranian regime is becoming ever more fanatic, arresting important politicians close to the "reformist" official president.

Gallup polling to cease tracking approval ratings of president [Richard Stallman's Political Notes]

Gallup polling announces it will cease its 88-year-old practice of tracking the approval ratings of the president.

Some suspect this is because the current president — the bully — is threatening to sue Gallup if it continues to report on how many people detest him.

I would compare this to his sabotaging of the Bureau of Labor Statistics. They add up to a practice of trying to deny the public information that makes him look bad.

Arguments that immigration enforcement would be distorted [Richard Stallman's Political Notes]

*As Congress debated the creation of the Department of Homeland Security, civil rights advocates argued that immigration enforcement would be distorted — and weaponized — by its merger with the national security state.*

*In response to such concerns, Congress created an unusually far-reaching internal watchdog office for Homeland Security and its various arms, including Immigration and Customs Enforcement (ICE): The Office for Civil Rights and Civil Liberties.*

The wrecker, cognizant of the danger that that office was meant to prevent, has reduced the office to a skeleton crew.

House Republicans rebuke of bully over Canada tariffs [Richard Stallman's Political Notes]

*House Republicans make rare, albeit symbolic, rebuke of [the bully] over Canada tariffs.*

Congress could cancel those tariffs if it wants to. It could do that by putting a clause limiting tariffs into a bill that the bully would find damaging to veto.

Deportation thug that fired at citizen hailed by Gregory Bovino [Richard Stallman's Political Notes]

*New evidence shows Gregory Bovino hailed [the deportation thug] who fired at Marimar Martinez five times in her car.* The thugs tried to frame her, too.

Statistical survey about death of young children in England [Richard Stallman's Political Notes]

Does a statistical survey about death of young children in England demonstrate a problem in their medical treatment?

This result suggests that consanguinity is leading to the birth of children with doubled harmful recessive genes — which is what we expect it to do, more or less. But in order to be sure of this conclusion, we need to know what fraction of children born have consanguineous parents. If that too is 7%, it would imply that those children face no greater danger of early death than other children. If that is less than 7%, it would imply that they do face a greater danger of early death.

If the risk is indeed higher for children of consanguineous parents, the next crucial question is how big a problem this is. What fraction of children of consanguineous couples in England die young? What fraction of children born in England die young? If that is a very small fraction, this problem affects few children.

Another question remains: supposing that this problem is substantial, how big is it compared with the other threats to the health of children in England?

And another one is, supposing that this problem causes a big danger to the children of consanguineous parents in England, is there an effective way to reduce that danger?

Mexico gangs reportedly obtained newer more powerful arms than Mexican government [Richard Stallman's Political Notes]

Reportedly drug gangs in Mexico have obtained newer and more powerful arms than the Mexican government can get, including drones.

They may be a real threat to Americans, but it is minuscule compared with the threat to Americans from the deportation thugs. Let's not let the secondary threat distract us from the primary threat.

Bogus indictments meant for political persecution [Richard Stallman's Political Notes]

Grand juries almost never deny prosecutors the indictments they ask for. But several grand juries have recently refused to authorize bogus indictments meant for political persecution.

AMA to partner with Vaccine Integrity Project [Richard Stallman's Political Notes]

*The American Medical Association (AMA) will partner with the Vaccine Integrity Project to review the evidence on vaccines for influenza, Covid-19 and respiratory syncytial virus (RSV) for the fall.*

The US government used to do this, but the wrecker put anti-vaxxer RFK Jr in charge of this area.

Half of Americans disagree with repression of immigration [Richard Stallman's Political Notes]

According to the latest poll, half of all Americans strongly disapprove of the persecutor's repression of immigration.

Over 60% express disapproval of various specific aspects of it.

Deportation cases in court which is a mockery of a court [Richard Stallman's Political Notes]

The deportation thug agency hires lawyers to pursue deportation cases in a court which is a mockery of a court, before a judge who is a mockery of a judge.

There is no official quota for judges to rule for deportation, but they know they may be fired if they don't do that often enough. When the agency lawyer simply asks the judge to rule for deportation — "to dismiss the case" — giving no specific reasons, the judge often unceremoniously does just that.

In effect, the whole thing is a sham designed to smear a perfume of justice over the stench of arbitrary, dishonest cruelty.

Plan to repeal EPA ruling regulating greenhouse gases [Richard Stallman's Political Notes]

Planet-roaster officials plan to repeal the ruling that allowed the Environmental Protection Agency to regulate greenhouse gases.

Now that it has become the Environmental Poisoning Agency, protecting the environment is considered inappropriate.

00:56

KDE Plasma 6.6 released [OSnews]

KDE Plasma 6.6 has been released, and brings with it a whole slew of new features. You can save any combination of themes as a global theme, and there’s a new feature allowing you to increase or decrease the contrast of frames and outlines. If your device has a camera, you can now scan Wi-Fi settings from QR codes, which is quite nice if you spend a lot of time on the road.

There’s a new colour filter for people who are colour blind, allowing you to set the entire UI to grayscale, as well as a brand new virtual keyboard. Other new accessibility features include tracking the mouse cursor when using the zoom feature, a reduced motion setting, and more. Spectacle gets a text extraction feature and a feature to exclude windows from screen recordings. There’s also a new optional login manager, optimised for Wayland, a new first-run setup wizard, and much more.

As always, KDE 6.6 will find its way to your distribution’s repositories soon enough.

00:07

SvarDOS: an open-source DOS distribution [OSnews]

SvarDOS is an open-source project that is meant to integrate the best out of the currently available DOS tools, drivers and games. DOS development was abandoned by commercial players a long time ago, mostly during the early nineties. Nowadays it survives solely through the efforts of hobbyists and retro-enthusiasts, but this is a highly sparse and unorganized ecosystem. SvarDOS aims to collect available DOS software and make it easy to find and install applications using a network-enabled package manager (like apt-get, but for DOS and able to run on an 8086 PC).

↫ SvarDOS website

SvarDOS is built around a fork of the Enhanced DR-DOS kernel, which is available in a dedicated GitHub repository. The project’s base installation is extremely minimal, containing only the kernel, a command interpreter, and some basic system administration tools, and this basic installation is compatible down to the 8086. You are then free to add whatever packages you want, either from local storage or from the online repository using the included package manager. SvarDOS is a rolling release, and you can use the package manager to keep it updated.

Aside from a set of regular installation images for a variety of floppy sizes, there’s also a dedicated “talking” build that uses the PROVOX screen reader and Braille ‘n Speak synthesizer at the COM1 port. It’s rare for a smaller project like this to have the resources to dedicate to accessibility, so this is a rather pleasant surprise.

Tuesday, 17 February

23:07

Link [Scripting News]

Update. I've been able to create an account on Twitter, but it's not @davewiner. If you're on Twitter, it would help if you'd RT the post. Thanks!

22:35

Proper Linux on your wrist: AsteroidOS 2.0 released [OSnews]

It’s been a while since we’ve talked about AsteroidOS, the Linux distribution designed specifically to run on smartwatches, providing a smartwatch interface and applications built with Qt and QML. The project has just released version 2.0, and it comes with a ton of improvements.

AsteroidOS 2.0 has arrived, bringing major features and improvements gathered during its journey through community space. Always-on-Display, expanded support for more watches, new launcher styles, customizable quick settings, significant performance increases in parts of the User Interface, and enhancements to our synchronization clients are just some highlights of what to expect.

↫ AsteroidOS 2.0 release announcement

I’m pleasantly surprised by how many watches are actually fully supported by AsteroidOS 2.0; especially watches from Fossil and Ticwatch are a safe buy if you want to run proper Linux on your wrist. There are also synchronisation applications for Android, desktop Linux, Sailfish OS, and UBports Ubuntu Touch. iOS is obviously missing from this list, but considering Apple’s stranglehold on iOS, that’s not unexpected. Then again, if you bought into the Apple ecosystem, you knew what you were getting into.

As for the future of the project, they hope to add a web-based flashing tool and an application store, among other things. I’m definitely intrigued, and am now contemplating if I should get my hands on a (used) supported watch to try this out. Anything I can move to Linux is a win.

A deep dive into Apple’s .car file format [OSnews]

Every modern iOS, macOS, watchOS, and tvOS application uses Asset Catalogs to manage images, colors, icons, and other resources. When you build an app with Xcode, your .xcassets folders are compiled into binary .car files that ship with your application. Despite being a fundamental part of every Apple app, there is little to no official documentation about this file format.

In this post, I’ll walk through the process of reverse engineering the .car file format, explain its internal structures, and show how to parse these files programmatically. This knowledge could be useful for security research and building developer tools that do not rely on Xcode or Apple’s proprietary tools.

↫ ordinal0 at dbg.re

Not only did ordinal0 reverse-engineer the file format, they also developed their own custom parser and compiler for .car files that don’t require any of Apple’s tools.

21:00

dBASE on the Kaypro II [OSnews]

Within the major operating system of its day, on popular hardware of its day, ran the utterly dominant relational database software of its day. PC Magazine, February 1984, said, “Independent industry watchers estimate that dBASE II enjoys 70 percent of the market for microcomputer database managers.” Similar to past subjects HyperCard and Scala Multimedia, Wayne Ratliff’s dBASE II was an industry unto itself, not just for data-management, but for programmability, a legacy which lives on today as xBase.

[…]

Written in assembly, dBASE II squeezed maximum performance out of minimal hardware specs. This is my first time using both CP/M and dBASE. Let’s see what made this such a power couple.

↫ Christopher Drum

If you’ve ever wanted to run a company using CP/M – and who doesn’t – this article is as good a starting point as any.

20:42

Humble 15th Anniversary Bundles [Humble Bundle Blog]

Humble is celebrating its 15th anniversary. Get ready for a year of fantastic bundles! Hi Humble Community, From the very beginning, we believed in a simple idea: to bring you fantastic content, help amazing charities, support game developers, and prove that great value and great causes can go hand-in-hand. And from that simple idea, we’ve watched our community grow into the incredible force for good …

The post Humble 15th Anniversary Bundles appeared first on Humble Bundle Blog.

20:00

[1288] Stuck Unbound [Twokinds]

Comic for February 17, 2026

18:42

Microspeak: Escrow [The Old New Thing]

As a product is nearing release, the release management team selects a build and declares it to be the escrow build. The metaphor is that this build has been placed into the hands of an imaginary third party for eventual release to customers provided certain requirements are met.

Those requirements are that the product survive a period of concerted testing and self-host usage to build confidence that it meets its quality and reliability targets. The Developer Division Release Team blog unhelpfully described escrow as “the phase before the completion of the RTM milestone where the product goes through a period of bake time.” I say unhelpfully because it defines one Microspeak term (escrow) in terms of another Microspeak term (bake time). Some time ago, I defined the Microspeak term bake as “(of a code change) to build confidence by observing its behavior over a period of time.”

Putting this all together, a more complete definition of escrow would be “the phase before the completion of the RTM milestone where the product accepts no changes while its behavior is closely observed to ensure that it meets release criteria.”

When a problem is found, the release team has to assess whether this problem is significant enough to require a product change. This assessment is a balance of many factors: How often does it occur? Does it affect one category of user more than another? How severe are the consequences? How easily can it be worked around? These criteria are typically¹ formalized by a bug bar.

If a severe enough bug is discovered, then an escrow reset is declared, and the bug fix is accepted,² a new build is produced, the new build is declared the new escrow build, and the cycle repeats.

Eventually, the product makes it through the escrow period without any escrow reset events, and the escrow build is released to manufacturing.

¹ Though not always, apparently.

² Plus any bug fixes that were granted “opportunistic” status by the release management team.

The post Microspeak: Escrow appeared first on The Old New Thing.

It rather involved being on the other side of the airtight hatchway: Tricking(?) a program into reading files [The Old New Thing]

A security vulnerability report came in that claimed that a program was vulnerable to information disclosure when run as an administrator because it opened whatever file you passed on the command line and read from it, before reporting an error because the file is in the incorrect format.

They identified multiple issues.

  • The program does no path validation. It accepts any file name and blindly opens it and reads its contents.
  • The program does not block path traversal via “..”.
  • The program does not check that the file is in an approved directory.
  • The program does not validate that the user has permission to access the file.
  • The program does not validate that the file is in the correct format before opening it.

According to the report, all of these defects lead to information disclosure.

Okay, as usual, we have to figure out who the attacker is, who the victim is, and what the attacker has gained.

The attacker is, presumably, the person running the carefully-crafted command line.

The victim is, presumably, the person whose file contents are disclosed.

But those are the same person!

Remember, the security term “information disclosure” is just a shorthand for unauthorized information disclosure. It is not a security vulnerability to disclose information to someone who is authorized to see it.

In this case, it’s fine for the program to take the information from the user and use it to access a file while running as that user. The security check happens as that user, so it’s not true that “The program does not validate that the user has permission to access the file.” The validation happens when the program tries to open the file and maybe gets “access denied” if they don’t have access.

The claim that there is no “approved directory” check is a bit spurious, since the program doesn’t have any concept of an “approved directory” to begin with.

There is nothing wrong with directory traversal or the lack of path validation, because the file is opened as the user. If the path contains traversals, the security system verifies that the user has permission to traverse those directories. If the provided path is illegal, then the open call will fail with an “invalid file name”. The underlying CreateFile call does the validation. Let the security system do the security checks. Don’t try to duplicate its work, because you’re probably going to duplicate it incorrectly and introduce a security vulnerability.
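The correct implementation of this pattern is almost embarrassingly short. A hypothetical Python version, where all the “validation” is the operating system’s own open-time check:

```python
import os
import tempfile

def read_user_file(path: str) -> bytes:
    # No path validation, no traversal filtering, no "approved
    # directory" list. The operating system performs the access
    # check inside open(): "access denied" surfaces as
    # PermissionError, and a bad or illegal path as OSError.
    with open(path, "rb") as f:
        return f.read()

# Demo: a readable file works; a missing path is rejected by the OS.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
    name = tmp.name

data = read_user_file(name)
os.unlink(name)

try:
    read_user_file(name)  # the file is gone, so open() itself fails
    vanished = False
except FileNotFoundError:
    vanished = True
```

The point of the sketch is what is absent: any hand-rolled security check would only risk diverging from the one the kernel already performs.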

If you think about it, the finder’s complaints about this program also apply to the TYPE command. It opens the file whose path is provided as the command line argument and prints it to the screen. So why did they file the security issue against this program? Probably because it makes their report sound more interesting.

Bonus chatter: The finder also considered it a security vulnerability that the program does not validate that the file is in the correct format before opening it. But how could it validate the file format without opening the file, reading the contents, and validating those contents? This is like handing someone a sealed envelope and saying, “Don’t read the enclosed letter if it contains spelling errors. But if it’s error-free, follow the instructions written in the letter.” Do they expect the program to be psychic and know whether the file contents are valid without reading it? If so, then why even open the file at all? You already used psychic powers to know what’s in the file, so just operate on the file contents you determined via your psychic powers.

The post It rather involved being on the other side of the airtight hatchway: Tricking(?) a program into reading files appeared first on The Old New Thing.

17:56

Slog AM: Rev. Jesse Jackson Dies, Millionaires Tax Passes State Senate, Anderson Cooper Leaving 60 Minutes [The Stranger]

The Stranger's Morning News Roundup. by Vivian McCall

Millionaires tax passes Senate: After a three-hour debate, the 9.9 percent income tax on earnings over $1 million a year passed with a 27-22 vote. Three Democrats voted with Republicans. Lawmakers approved an amendment that repealed part of our sales tax, but didn’t approve a tax break on baby diapers.

Legalize It! “It” being smaller, cheaper elevators. The Washington State Senate approved a bill aimed at culling onerous standards that prevent elevators from being built at all. Sponsored by mushroom (and ibogaine) crusader Sen. Jesse Salomon, the bill will direct the state’s building code council to take on the issue next year.

Conviction in Anti-Trans Case: Andre Karlow is facing five to seven years in prison for beating a transgender woman in the University District last year. A jury found him guilty of a hate crime and second-degree assault. A group of men joined Karlow in the beating, but none have been identified. Six months earlier, Karlow was convicted of assaulting another trans woman, a Sound Transit fare ambassador, when she asked him for proof of payment.

Another Day, Another Boeing Suit Over Deadly Crash: Twenty families have sued the plane-maker over the South Korean Jeju Air Flight 7C 2216 crash, the country’s deadliest aviation disaster. The cases, representing 23 of the 179 people killed in the crash, all allege that a bird strike just before landing caused the electrical and hydraulic systems to fail. The families allege the company kept outdated safety systems on the aircraft to avoid costly redesigns and recertification processes, even though modern systems were safer.

Weather: There’s a chance of snow before 1 p.m., and a chance of rain after. If snow does fall, it’s not expected to stick. Our snow-they-won’t-they situation continues through Thursday night.

Rev. Jesse Jackson Dies: The civil rights leader, two-time presidential candidate, close ally of Dr. Martin Luther King, and unbelievably influential American was 84. In November, the Rev. Jackson was hospitalized to treat progressive supranuclear palsy, a rare neurodegenerative condition. He was somebody.

Seattle Judge Back in Action: City Attorney Erika Evans reversed her Republican predecessor’s order to routinely disqualify Seattle Municipal Court Judge Pooja Vaddadi from hearing criminal cases like DUIs and domestic violence charges. “I believe in litigating cases—not attempting to ban judges we do not like,” Evans said in a statement.

Tick-tick-tick-tick: Anderson Cooper is leaving CBS’ 60 Minutes after 20 years balancing the job with his other low-stakes gig at CNN. The gay dad of news said in a statement that he wanted to spend more time with his children. While he didn’t say he wanted to spend less time with CBS News’ Bari Weiss, it certainly looks that way. (In December, Weiss speciously held a 60 Minutes report on CECOT prison in El Salvador, bringing a ton of attention to the segment and herself.)

In other CBS News news, Late Show host Stephen Colbert says the network’s lawyers yanked his interview with Texas Senate candidate James Talarico before air on Monday evening to comply with the FCC regulation that requires stations to give “equal time” to political candidates and their rivals. News is exempt, and for the past 20 years, talk shows have been considered exempt, too. But the FCC chair Brendan Carr is rejecting that thinking. “Fake news” shows like Colbert’s shouldn’t count on the exemption. Anyway, Colbert’s show posted it on YouTube.

The Hated Haters: Wired has been monitoring a forum for current and prospective Homeland Security Investigations Officers where ICE Agents talk shit on other ICE agents.

Who Will Lie to Us Now? DHS spokesperson Tricia McLaughlin is leaving the Trump administration, two DHS officials told Politico. McLaughlin did not immediately respond to their request for comment.

Guthrie Case Update: The 84-year-old Nancy Guthrie has been missing for more than two weeks, and there is still no suspect. In surveillance video taken outside her home the night she disappeared, a masked person wearing a handgun holster is shown carrying a backpack sold exclusively at Walmart. Investigators are working with the company to develop leads on this suspect. Guthrie’s family, including her daughter, “Today” show host Savannah Guthrie, are not suspects.

17:00

The Big Idea: Darby McDevitt [Whatever]

The intentions behind one’s actions speak louder than words ever could. Author Darby McDevitt leads us on a journey through the exploration of intention, desires, and consequences in the Big Idea for his newest novel, The Halter. Take the path he has laid out for you, if you so desire.

DARBY MCDEVITT:

Many years ago I worked for a video game company in Seattle that shoveled out products at a rate of four to six games per year. Most of these were middling titles, commissioned by publishers to fill a narrow market gap and slapped together in six to nine months by teams of a dozen or two crunch-weary developers. We worked hard and fast, with passion and determination, but the end results never quite equaled the ambitions we had.

A common joke around the office, told at the end of every draining development cycle, went like this: “Sure, the game isn’t fun, but the design documents are amazing.” The idea of offering consumers our unrealized blueprints in lieu of a polished game was ridiculous, of course, but it came from a place of real desperation. We wanted our players to know that, despite the poor quality of the final product, we really tried.

The novelist Iris Murdoch has a saying that I repeat often as a mantra, always to guard against future disappointment: “Every book is the wreck of a perfect idea.” Here again is the notion of a Platonic ideal at war with its hazy shadow. How familiar all this is. Experience tells us that people falling short of their ideals is the natural course of life. We never live up to the best of our intentions.

In my new novel, The Halter, I compare this process of “intention erosion” to the more upbeat phenomenon of Desire Lines – footpaths worn over grassy lawns out of an unconscious need for efficiency. Desire lines appear wherever the original constraints of an intentionally designed geographic space don’t conform to the immediate needs of the men and women walking through it. In video games we use a related term – Min-Maxing – the act of looking for ways to put in a minimum amount of effort for maximum benefit. In both cases, the original, ideal use of a space or system is superseded by a desire for efficiency.

In The Halter, these same principles take hold on a grand scale inside an idealized “surrogate reality” metaverse called The Forum, where artists, scientists, and thinkers from all disciplines are invited to probe the deepest and most difficult aspects of human behavior and society. One Forum designer creates a so-called theater to explore the tricky business of language acquisition by sequestering one hundred virtual babies together with no adult interaction. Another theater offers visitors a perfect digital copy of themselves as a companion, as a therapeutic approach to self-discovery. A third lets visitors don the guise of any other individual on earth so they may literally fulfill the empathetic idiom of “walking a mile in another man’s shoes.”

Noble intentions, arguably – yet in every case, after repeated exposure to actual human users, each theater devolves into something less than the sum of its parts. A prurient playground, or an amusing distraction, or a mindless entertainment. Shortcuts are taken, efficiencies are found, novel uses imposed. The empathy theater is transformed into a celebrity-fueled bacchanalia; the digital doppelganger becomes a personal punching bag. The baby creche, a zoo. Each and every time, execution falls short of intention. Each theater crumbles, becoming a wreck of its original, perfect idea … and audiences are riveted.

The phenomena described here are common enough that several terms encompass them, each one differentiated for the situation at hand. Desire paths were my first exposure to the concept. The CIA calls it Blowback, when the side effects of a covert operation lead to disastrous results. Unintended Consequences and Knock-On Effects are cozier names, both of which can yield positive or negative results. And a Perverse Incentive is the related idea that the design of a system may be such that it encourages behavior contrary to its intended purpose. Taken together we begin to see the shape of the iceberg that wrecks so many perfect ideas.

I wrote The Halter to explore the highs and lows of these effects, and to shed light from a safe distance on the invisible forces that push and pull constantly at our behavior, often without our knowledge or consent. At one point in the middle of the novel, a collection of idealistic designers, most of whom have given years of their lives to the Forum designing and testing theaters of varying utility, commiserate on what they feel has been a collective failure. Their beloved theaters, they fret, have been co-opted and corrupted by The Forum visitors who have no incentive to behave or play along – they simply show up and engage in the simplest and most efficient way possible. How sad. How crushing. If only these morose designers could share their original design documents….

Their folly, in my view, was to treat their original intentions as merely a point of inspiration and not a goal to be achieved. Their error was to abandon their work in the face of a careless, sleepwalking opposition. The heroic path forward requires vigilance, not surrender, and if an outcome is unexpected, unwarranted, or undesirable, it may be more productive to tweak the inputs than blame the user.

We mustn’t fret that our perfect idea is lying at the bottom of the sea, five fathoms deep. We mustn’t fetishize our design documents – be it a holy book, an artwork, a game, a manifesto, or the U.S. Constitution – because design documents are merely static pleas for unrealized future intentions. They can always be corrupted, upended, misinterpreted. Have faith and patience. The hopeful paths are yet unmade, lying in wait for a thousand shuffling feet to score the way forward.


The Halter: Amazon|Barnes & Noble|Bookshop|Powell’s

Author socials: Website|Bluesky|Facebook

16:00

CodeSOD: Waiting for October [The Daily WTF]

Arguably, the worst moment for date times was the shift from Julian to Gregorian calendars. The upgrade took a long time, too, as some countries were still using the Julian calendar more than 300 years after the official changeover, famously featured in the likely apocryphal story about Russia arriving late for the Olympics.

At least that change didn't involve adding any extra months, unlike some of the Julian reforms, which involved adding multiple "intercalary months" to get the year back in sync after missing a pile of leap years.

Speaking of adding months, Will J sends us this "calendar" enum:

enum Calendar
{
    April = 0,
    August = 1,
    December = 2,
    February = 3,
    Friday = 4,
    January = 5,
    July = 6,
    June = 7,
    March = 8,
    May = 9,
    Monday = 10,
    November = 11,
    October = 12,
    PublicHoliday = 13,
    Saturday = 14,
    Sunday = 15,
    September = 16,
    Thursday = 17,
    Tuesday = 18,
    Wednesday = 19
}

Honestly, the weather in PublicHoliday is usually a bit too cold for my tastes. A little later into the spring, like Saturday, is usually a nicer month.

Will offers the hypothesis that some clever developer was trying to optimize compile times: obviously, emitting code for one enum has to be more efficient than emitting code for many enums. I think it more likely that someone just wanted to shove all the calendar stuff into one bucket.

Will further adds:

One of my colleagues points out that the only thing wrong with this enum is that September should be before Sunday.

Yes, arguably, since this enum clearly was meant to be sorted in alphabetical order, but that raises the question: should it really?
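For contrast, here is a sketch (in Python; the names are hypothetical, not from Will’s codebase) of what splitting the bucket back into its natural types might look like:

```python
from enum import IntEnum

class Month(IntEnum):
    JANUARY = 1; FEBRUARY = 2; MARCH = 3; APRIL = 4
    MAY = 5; JUNE = 6; JULY = 7; AUGUST = 8
    SEPTEMBER = 9; OCTOBER = 10; NOVEMBER = 11; DECEMBER = 12

class Weekday(IntEnum):
    MONDAY = 0; TUESDAY = 1; WEDNESDAY = 2; THURSDAY = 3
    FRIDAY = 4; SATURDAY = 5; SUNDAY = 6

# A public holiday is a property of a date, not a thirteenth month.
# (Illustrative dates only.)
PUBLIC_HOLIDAYS = {(Month.JANUARY, 1), (Month.DECEMBER, 25)}

def is_public_holiday(month: Month, day: int) -> bool:
    return (month, day) in PUBLIC_HOLIDAYS
```

With separate types, “what month is Saturday?” stops being a representable question.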

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

15:35

[$] Do androids dream of accepted pull requests? [LWN.net]

Various forms of tools, colloquially known as "AI", have been rapidly pervading all aspects of open-source development. Many developers are embracing LLM tools for code creation and review. Some project maintainers complain about suffering from a deluge of slop-laden pull requests, as well as fabricated bug and security reports. Too many projects are reeling from scraperbot attacks that effectively DDoS important infrastructure. But an AI bot flaming an open-source maintainer was not on our bingo card for 2026; that seemed a bit too far-fetched. However, it appears that is just what happened recently after a project rejected a bot-driven pull request.

Plasma 6.6.0 released [LWN.net]

Version 6.6.0 of KDE's Plasma desktop environment has been released. Notable additions in this release include the ability to create global themes for Plasma, an "extract text" feature in the Spectacle screenshot utility, accessibility improvements, and a new on-screen keyboard. See the changelog for a full list of new features, enhancements, and bug fixes.

The release is dedicated to the memory of Björn Balazs, a KDE contributor who passed away in September 2025. "Björn's drive to help people achieve the privacy and control over technology that he believed they deserved is the stuff FLOSS legends are made of."

15:21

Link [Scripting News]

Yesterday, I had to ship an envelope to the UK and got caught in dead ends at the FedEx and DHL sites. One of them said my zip code wasn't in the town I live in. How do you get past that?? These companies are losing business because their systems are broken. Maybe they worked at one time. I used ChatGPT as I often do to get help on one of these antiquated sites. And while ChatGPT has the technology and FedEx has the info, they just have to get together and upgrade the user experience, and eventually of course the AI version of the UI becomes the real one.

Link [Scripting News]

Back when I ran a software company I'd help the team understand why they should be very very nice to our customers. "Those people have our money in their pockets." It generally got a laugh partially because I was their boss, but I like to think also because it's the truth.

Link [Scripting News]

BTW, people make the same mistake with AI that we make with every new tech. We focus on the creators not the users. As users we are learning a new skill, how to specify our needs precisely. Whether this is good or bad, I don't know.

14:49

An update on upki [LWN.net]

In December 2025, Canonical announced a plan to develop a universal Public Key Infrastructure called upki. Jon Seager has published an update about the project with instructions on trying it out.

In the few weeks since we announced upki, the core revocation engine has been established and is now functional, the CRLite mirroring tool is working and a production deployment in Canonical's datacentres is ongoing. We're now preparing for an alpha release and remain on track for an opt-in preview for Ubuntu 26.04 LTS.

14:35

Link [Scripting News]

Paywalls that require you to subscribe to an Atlanta news org when you don’t live in Atlanta prob don’t generate much revenue. Why not instead charge per article? Like a toll you pay on a road you drive on once every few years. On further thought, I wouldn't even have an exception for Atlanta residents. If they start spending more money than a subscription costs, you could offer a subscription then, as a way to save money. Kind of the way Amazon lets you buy a certain amount of coffee beans without requiring you to sign up for monthly delivery. They do tell you how much you'd save if you subscribed. Everyone appreciates a chance to save money, but still might not want the commitment. And asking someone from upstate NY to subscribe to the Atlanta Journal Constitution is total bullshit. An insult to both our intelligences.

Link [Scripting News]

My Twitter account is owned. I can't even see what people are doing with it because you have to be signed on (apparently) to read stuff on Twitter nowadays. I wish current Twitter management would put it out of its misery. Served me well for approx 20 years. Let's clean up the mess. Thanks for your attention to this matter.

VCs and CEOs don't fire your devteams yet [Scripting News]

Aram Zucker-Scharff writes "I don't want to read one more thinkpiece about blackbox AI code factories until you can show me what they've produced."

I've made the same request, and there was very little even brilliant programmers could show, including some who have become influencers in the AI space.

Here's the problem -- it takes a lot of skill and patience to make software that appears simple because it gives users what they expect. It's much easier to write utility scripts, where the user writes the code for themselves. That is very possible, esp if you use a scripting language created for it, and the AI bots are really good at that, they speak the same language we do.

But to make something easy to use by humans, I think you actually have to be a human. I've found I'm not very good at creating software that isn't for me. And I've been practicing this almost every day for over fifty freaking years. (I think freaking is the proper adjective in this situation).

Scaling, which everyone says is hard, is actually something a chatbot does quite easily imho -- because you just have to store all your data in a relational database instead of the local file system. That's all there is to it. They try to make it sound mysterious (the old priesthood at work) but it is actually very simple. It's so easy even ChatGPT can do it.
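As a sketch of the substitution he's describing, here file-per-item storage is swapped for a relational table; sqlite3 stands in for a hosted database like Postgres, and all helper names are hypothetical:

```python
import sqlite3
from typing import Optional

# Instead of writing notes/<key>.txt to one machine's local disk,
# keep the same key/value shape in a relational table that any
# app server can reach. (An in-memory sqlite3 database stands in
# here for a networked database.)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (key TEXT PRIMARY KEY, body TEXT)")

def save_note(key: str, body: str) -> None:
    # Upsert: overwrite the row if the key exists, like rewriting a file.
    db.execute("INSERT INTO notes VALUES (?, ?) "
               "ON CONFLICT(key) DO UPDATE SET body = excluded.body",
               (key, body))

def load_note(key: str) -> Optional[str]:
    row = db.execute("SELECT body FROM notes WHERE key = ?",
                     (key,)).fetchone()
    return row[0] if row else None

save_note("hello", "first draft")
save_note("hello", "second draft")
```

The app code barely changes shape; what changes is that state now lives somewhere every instance of the app can share.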

I know this must sound like the stuff reporters say about bloggers, but in this case it's true. ;-)

An anecdote -- I used to live in Woodside CA where a lot of the VCs live, and we'd all eat breakfast at Buck's restaurant, and around the time Netscape open sourced their browser code, the VCs were buzzing because they wouldn't have to pay for software, they'd just market the free stuff. That was a long time ago, and it did not work out that way.

14:07

Security updates for Tuesday [LWN.net]

Security updates have been issued by AlmaLinux (gimp, go-toolset:rhel8, and golang), Debian (roundcube), Fedora (gnupg2, libpng, and rsync), Mageia (dcmtk and usbmuxd), Oracle (gcc-toolset-14-binutils, gimp, gnupg2, go-toolset:ol8, golang, kernel, and openssl), Slackware (libssh, lrzip, and mozilla), SUSE (abseil-cpp, chromium, curl, elemental-toolkit, elemental-operator, expat, freerdp, iperf, libnvidia-container, libsoup, libxml2, net-snmp, openCryptoki, openssl-3, patch, protobuf, python-urllib3, python-xmltodict, python311, screen, systemd, and util-linux), and Ubuntu (alsa-lib, gnutls28, and linux-aws, linux-oracle).

13:49

AI, A2A, and the Governance Gap [Radar]

Over the past six months, I’ve watched the same pattern repeat across enterprise AI teams. A2A and ACP light up the room during architecture reviews—the protocols are elegant, the demos impressive. Three weeks into production, someone asks: “Wait, which agent authorized that $50,000 vendor payment at 2 am?” The excitement shifts to concern.

Here’s the paradox: Agent2Agent (A2A) and the Agent Communication Protocol (ACP) are so effective at eliminating integration friction that they’ve removed the natural “brakes” that used to force governance conversations. We’ve solved the plumbing problem brilliantly. In doing so, we’ve created a new class of integration debt—one where organizations borrow speed today at the cost of accountability tomorrow.

The technical protocols are solid. The organizational protocols are missing. We’re rapidly moving from the “Can these systems connect?” phase to the “Who authorized this agent to liquidate a position at 3 am?” phase. In practice, that creates a governance gap: Our ability to connect agents is outpacing our ability to control what they commit us to.

To see why that shift is happening so fast, it helps to look at how the underlying “agent stack” is evolving. We’re seeing the emergence of a three-tier structure that quietly replaces traditional API-led connectivity:

  • Tooling: MCP (Model Context Protocol). Purpose: connects agents to local data and specific tools. The “human” analog: a worker’s toolbox.
  • Context: ACP (Agent Communication Protocol). Purpose: standardizes how goals, user history, and state move between agents. The “human” analog: a worker’s memory and briefing.
  • Coordination: A2A (Agent2Agent). Purpose: handles discovery, negotiation, and delegation across boundaries. The “human” analog: a contract or handshake.

This stack makes multi-agent workflows a configuration problem instead of a custom engineering project. That is exactly why the risk surface is expanding faster than most CISOs realize.

Think of it this way: A2A is the handshake between agents (who talks to whom, about what tasks). ACP is the briefing document they exchange (what context, history, and goals move in that conversation). MCP is the toolbox each agent has access to locally. Once you see the stack this way, you also see the next problem: We’ve solved API sprawl and quietly replaced it with something harder to see—agent sprawl, and with it, a widening governance gap.

Most enterprises already struggle to govern hundreds of SaaS applications. One analysis puts the average at more than 370 SaaS apps per organization. Agent protocols do not reduce this complexity; they route around it. In the API era, humans filed tickets to trigger system actions. In the A2A era, agents use “Agent Cards” to discover each other and negotiate on top of those systems. ACP allows these agents to trade rich context—meaning a conversation starting in customer support can flow into fulfillment and partner logistics with zero human handoffs. What used to be API sprawl is becoming dozens of semiautonomous processes acting on behalf of your company across infrastructure you do not fully control. The friction of manual integration used to act as a natural brake on risk; A2A has removed that brake.

That governance gap doesn’t usually show up as a single catastrophic failure. It shows up as a series of small, confusing incidents where everything looks “green” in the dashboards but the business outcome is wrong. The protocol documentation focuses on encryption and handshakes but ignores the emergent failure modes of autonomous collaboration. These are not bugs in the protocols; they’re signs that the surrounding architecture has not caught up with the level of autonomy the protocols enable.

Policy drift: A refund policy encoded in a service agent may technically interoperate with a partner’s collections agent via A2A, but their business logic may be diametrically opposed. When something goes wrong, nobody owns the end-to-end behavior.

Context oversharing: A team might expand an ACP schema to include “User Sentiment” for better personalization, unaware that this data now propagates to every downstream third-party agent in the chain. What started as local enrichment becomes distributed exposure.

The determinism trap: Unlike REST APIs, agents are nondeterministic. An agent’s refund policy logic might change when its underlying model is updated from GPT-4 to GPT-4.5, even though the A2A Agent Card declares identical capabilities. The workflow “works”—until it doesn’t, and there’s no version trace to debug. This creates what I call “ghost breaks”: failures that don’t show up in traditional observability because the interface contract looks unchanged.

Taken together, these aren’t edge cases. They’re what happens when we give agents more autonomy without upgrading the rules of engagement between them. These failure modes have a common root cause: The technical capability to collaborate across agents has outrun the organization’s ability to say where that collaboration is appropriate, and under what constraints.

That’s why we need something on top of the protocols themselves: an explicit “Agent Treaty” layer. If the protocol is the language, the treaty is the constitution. Governance must move from “side documentation” to “policy as code.”


Traditional governance treats policy violations as failures to prevent. An antifragile approach treats them as signals to exploit. When an agent makes a commitment that violates a business constraint, the system should capture that event, trace the causal chain, and feed it back into both the agent’s training and the treaty ruleset. Over time, the governance layer gets smarter, not just stricter.

Define treaty-level constraints: Don’t just authorize a connection; authorize a scope. Which ACP fields is an agent allowed to share? Which A2A operations are “read only” versus “legally binding”? Which categories of decisions require human escalation?

Version the behavior, not just the schema: Treat Agent Cards as first-class product surfaces. If the underlying model changes, the version must bump, triggering a rereview of the treaty. This is not bureaucratic overhead—it’s the only way to maintain accountability in a system where autonomous agents make commitments on behalf of your organization.
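One way to sketch behavior-aware versioning is to fingerprint the underlying model together with the declared card, so a silent model swap changes the version even when the capabilities look identical. The card layout and model identifiers below are hypothetical; A2A does not prescribe this mechanism:

```python
# Sketch: hash the Agent Card *and* the underlying model together, so a model
# change forces a new behavior version even with an unchanged card. The card
# fields and model IDs are hypothetical examples.
import hashlib
import json


def behavior_fingerprint(card: dict, model_id: str) -> str:
    """Deterministic fingerprint over declared capabilities plus the model."""
    payload = json.dumps({"card": card, "model": model_id}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]


card = {"name": "refund-agent", "capabilities": ["issue_refund"], "version": "1.4.0"}

v_old = behavior_fingerprint(card, "gpt-4")
v_new = behavior_fingerprint(card, "gpt-4.5")
print(v_old != v_new)  # identical card, different behavior version -> rereview
```

A CI check that fails when the fingerprint changes without a treaty rereview would make the “version the behavior” rule enforceable rather than aspirational.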

Cross-organizational traceability: We need observability traces that don’t just show latency but show intent: Which agent made this commitment, under which policy? And who is the human owner? This is particularly critical when workflows span organizational boundaries and partner ecosystems.

Designing that treaty layer isn’t just a tooling problem. It changes who needs to be in the room and how they think about the system. The hardest constraint isn’t the code; it’s the people. We’re entering a world where engineers must reason about multi-agent game theory and policy interactions, not just SDK integration. Risk teams must audit “machine-to-machine commitments” that may never be rendered in human language. Product managers must own agent ecosystems where a change in one agent’s reward function or context schema shifts behavior across an entire partner network. Compliance and audit functions need new tools and mental models to review autonomous workflows that execute at machine speed. In many organizations, those skills sit in different silos, and A2A/ACP adoption is proceeding faster than the cross-functional structures needed to manage them.

All of this might sound abstract until you look at where enterprises are in their adoption curve. Three converging trends are making this urgent: Protocol maturity means A2A, ACP, and MCP specifications have stabilized enough that enterprises are moving beyond pilots to production deployments. Multi-agent orchestration is shifting from single agents to agent ecosystems and workflows that span teams, departments, and organizations. And silent autonomy is blurring the line between “tool assistance” and “autonomous decision-making”—often without explicit organizational acknowledgment. We’re moving from integration (making things talk) to orchestration (making things act), but our monitoring tools still only measure the talk. The next 18 months will determine whether enterprises get ahead of this or we see a wave of high-profile failures that force retroactive governance.

The risk is not that A2A and ACP are unsafe; it’s that they are too effective. For teams piloting these protocols, stop focusing on the “happy path” of connectivity. Instead, pick one multi-agent workflow and instrument it as a critical product:

Map the context flow: Every ACP field must have a “purpose limitation” tag. Document which agents see which fields, and which business or regulatory requirements justify that visibility. This isn’t an inventory exercise; it’s a way to surface hidden data dependencies.
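Purpose-limitation tagging can be enforced mechanically rather than documented in a wiki. In this sketch, the field names and purpose labels are hypothetical, and an untagged field fails loudly, which is exactly how hidden data dependencies surface:

```python
# Sketch of purpose-limitation tags for ACP context fields. Field names and
# purpose labels are hypothetical examples, not part of the ACP spec.
PURPOSE_TAGS = {
    "order_id": "fulfillment",           # needed to execute the transaction
    "item_sku": "fulfillment",
    "user_sentiment": "personalization", # local enrichment only
}


def audit_context(context: dict, allowed_purposes: set) -> dict:
    """Pass through only fields whose declared purpose is allowed downstream.

    An untagged field is a hidden data dependency, so raise instead of guessing.
    """
    out = {}
    for name in context:
        purpose = PURPOSE_TAGS.get(name)
        if purpose is None:
            raise ValueError(f"untagged ACP field: {name}")
        if purpose in allowed_purposes:
            out[name] = context[name]
    return out


ctx = {"order_id": "A123", "user_sentiment": "frustrated"}
print(audit_context(ctx, allowed_purposes={"fulfillment"}))
```

Run against real traffic, this kind of audit turns the “User Sentiment” oversharing scenario from a silent leak into a visible policy decision.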

Audit the commitments: Identify every A2A interaction that represents a financial or legal commitment—especially ones that don’t route through human approval. Ask, “If this agent’s behavior changed overnight, who would notice? Who is accountable?”

Code the treaty: Prototype a “gatekeeper” agent that enforces business constraints on top of the raw protocol traffic. This isn’t about blocking agents; it’s about making policy visible and enforceable at runtime. Start minimal: One policy, one workflow, one success metric.
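A “start minimal” gatekeeper can be very small indeed: one policy, one workflow, one metric. The message shape, field names, and refund cap below are hypothetical illustrations:

```python
# Minimal gatekeeper sketch: one policy (a refund cap), one workflow (refunds),
# one success metric (the decision counters). Message fields are hypothetical.
MAX_AUTONOMOUS_REFUND = 100.00  # a business constraint the raw protocol can't see

decisions = {"allowed": 0, "escalated": 0}  # the one success metric


def gatekeeper(message: dict) -> str:
    """Sits between agents and enforces the treaty on raw protocol traffic."""
    amount = message.get("amount", 0)
    if message.get("operation") == "issue_refund" and amount > MAX_AUTONOMOUS_REFUND:
        decisions["escalated"] += 1
        return "escalate_to_human"
    decisions["allowed"] += 1
    return "forward"


print(gatekeeper({"operation": "issue_refund", "amount": 40.0}))
print(gatekeeper({"operation": "issue_refund", "amount": 2500.0}))
print(decisions)
```

The design choice worth noting: the gatekeeper never rewrites messages, it only routes them, which keeps the policy layer auditable and separate from agent logic.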

Instrument for learning: Capture which agents collaborate, which policies they invoke, and which contexts they share. Treat this as telemetry, not just audit logs. Feed patterns back into governance reviews quarterly.

If this works, you now have a repeatable pattern for scaling agent deployments without sacrificing accountability. If it breaks, you’ve learned something critical about your architecture before it breaks in production. If you can get one workflow to behave this way—governed, observable, and learn-as-you-go—you have a template for the rest of your agent ecosystem.

If the last decade was about treating APIs as products, the next one will be about treating autonomous workflows as policies encoded in the traffic between agents. The protocols are ready. Your org chart is not. The good news: You don’t need to redesign your entire organization. You need to add one critical layer—the Agent Treaty—that makes policy machine-enforceable, observable, and learnable. You need engineers who think about composition and game theory, not just connection. And you need to treat agent deployments as products, not infrastructure. Start building the treaty before your agents start signing deals without you.

The sooner you start, the sooner that governance gap closes.

12:35

Side-Channel Attacks Against LLMs [Schneier on Security]

Here are three papers describing different side-channel attacks against LLMs.

“Remote Timing Attacks on Efficient Language Model Inference”:

Abstract: Scaling up language models has significantly increased their capabilities. But larger models are slower models, and so there is now an extensive body of work (e.g., speculative sampling or parallel decoding) that improves the (average case) efficiency of language model generation. But these techniques introduce data-dependent timing characteristics. We show it is possible to exploit these timing differences to mount a timing attack. By monitoring the (encrypted) network traffic between a victim user and a remote language model, we can learn information about the content of messages by noting when responses are faster or slower. With complete black-box access, on open source systems we show how it is possible to learn the topic of a user’s conversation (e.g., medical advice vs. coding assistance) with 90%+ precision, and on production systems like OpenAI’s ChatGPT and Anthropic’s Claude we can distinguish between specific messages or infer the user’s language. We further show that an active adversary can leverage a boosting attack to recover PII placed in messages (e.g., phone numbers or credit card numbers) for open source systems. We conclude with potential defenses and directions for future work.

“When Speculation Spills Secrets: Side Channels via Speculative Decoding in LLMs”:

Abstract: Deployed large language models (LLMs) often rely on speculative decoding, a technique that generates and verifies multiple candidate tokens in parallel, to improve throughput and latency. In this work, we reveal a new side-channel whereby input-dependent patterns of correct and incorrect speculations can be inferred by monitoring per-iteration token counts or packet sizes. In evaluations using research prototypes and production-grade vLLM serving frameworks, we show that an adversary monitoring these patterns can fingerprint user queries (from a set of 50 prompts) with over 75% accuracy across four speculative-decoding schemes at temperature 0.3: REST (100%), LADE (91.6%), BiLD (95.2%), and EAGLE (77.6%). Even at temperature 1.0, accuracy remains far above the 2% random baseline—REST (99.6%), LADE (61.2%), BiLD (63.6%), and EAGLE (24%). We also show the capability of the attacker to leak confidential datastore contents used for prediction at rates exceeding 25 tokens/sec. To defend against these, we propose and evaluate a suite of mitigations, including packet padding and iteration-wise token aggregation.

“Whisper Leak: a side-channel attack on Large Language Models”:

Abstract: Large Language Models (LLMs) are increasingly deployed in sensitive domains including healthcare, legal services, and confidential communications, where privacy is paramount. This paper introduces Whisper Leak, a side-channel attack that infers user prompt topics from encrypted LLM traffic by analyzing packet size and timing patterns in streaming responses. Despite TLS encryption protecting content, these metadata patterns leak sufficient information to enable topic classification. We demonstrate the attack across 28 popular LLMs from major providers, achieving near-perfect classification (often >98% AUPRC) and high precision even at extreme class imbalance (10,000:1 noise-to-target ratio). For many models, we achieve 100% precision in identifying sensitive topics like “money laundering” while recovering 5-20% of target conversations. This industry-wide vulnerability poses significant risks for users under network surveillance by ISPs, governments, or local adversaries. We evaluate three mitigation strategies – random padding, token batching, and packet injection – finding that while each reduces attack effectiveness, none provides complete protection. Through responsible disclosure, we have collaborated with providers to implement initial countermeasures. Our findings underscore the need for LLM providers to address metadata leakage as AI systems handle increasingly sensitive information.

11:07

Pluralistic: What's a "gig work minimum wage" (17 Feb 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



A figure in a rich robe sitting atop a throne, surrounded by bags of money; his face is masked by a robber's balaclava. Beneath the throne stream densely packed cars on a nighttime freeway. Behind him is a car's broken windscreen with an Uber logo in one corner.

What's a "gig work minimum wage" (permalink)

"Minimum wage" is one of those odd concepts that seems to have an intuitive definition, but the harder you think about it, the more complicated it gets. For example, if you want to work, but can't find a job, then the minimum wage you'll get is zero:

https://web.archive.org/web/20200625043843/https://www.latimes.com/entertainment-arts/books/story/2020-06-24/forget-ubi-says-an-economist-its-time-for-universal-basic-jobs

That's why politicians like Avi Lewis (who is running for leader of Canada's New Democratic Party) have called for a jobs guarantee: a government guarantee of a good job at a socially inclusive wage for everyone who wants one:

https://lewisforleader.ca/ideas/dignified-work-full-plan

(Disclosure: I have advised the Lewis campaign on technical issues and I have endorsed his candidacy.)

If that sounds Utopian or Communist to you (or both), consider this: it was the American jobs guarantee that delivered America's system of national parks, among many other achievements:

https://en.wikipedia.org/wiki/Civilian_Conservation_Corps

The idea of a wage for everyone who wants a job is just one interesting question raised by the concept of a "minimum wage." Even when we're talking about people who have wages, the idea of a "minimum wage" is anything but straightforward.

Take gig workers: the rise of Uber and its successors created an ever-expanding class of workers who are misclassified as independent contractors by employers seeking to evade unionization, benefits and liability. It's a weird kind of "independent contractor" who gets punished for saying no to lowball offers, has to decorate their personal clothes and/or cars in their "client's" livery, and who has every movement scripted by an app controlled by their "client":

https://pluralistic.net/2024/02/02/upward-redistribution/

The pretext that a worker is actually a standalone small business confers another great advantage on their employers: it's a great boon to any boss who wants to steal their workers' wages. I'm not talking about stealing tips here (though gig-work platforms do steal tips, like crazy):

https://www.nyc.gov/mayors-office/news/2026/01/mayor-mamdani-announces–5-million-settlement–reinstatement-of-

I'm talking about how gig-work platforms define their workers' wages in the first place. This is a very salient definition in public policy debates. Gig platforms facing regulation or investigation routinely claim that their workers are paid sky-high wages. During the debate over California's Prop 22 (in which Uber and Lyft spent more than $225m to formalize worker misclassification), gig companies agreed to all kinds of reasonable-sounding wage guarantees:

https://pluralistic.net/2020/10/14/final_ver2/#prop-22

When Toronto was grappling with the brutal effect that gig-work taxis have on the city's world-beatingly bad traffic, Uber promised to pay its drivers "120% of the minimum wage," which would come out to $21.12 per hour. However, the real wage Uber was proposing to pay its drivers came out to about $2.50 per hour:

https://pluralistic.net/2024/02/29/geometry-hates-uber/#toronto-the-gullible

How to explain the difference? Well, Uber – and its gig-work competitors – only pay drivers while they have a passenger – or an item – in the car. Drivers are not paid for the time they spend waiting for a job or the time they spend getting to the job. This is the majority of time that a gig driver spends working for the platform, and by excluding the majority of time a driver is on the clock, the company can claim to pay a generous wage while actually paying peanuts.
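The arithmetic behind that gap is easy to sketch. Only the $21.12 engaged rate comes from the article; the hours and expenses below are hypothetical illustrations, and the real Toronto figures also accounted for driver costs in more detail than this toy example does:

```python
# Illustrative arithmetic only. The $21.12/hour "engaged" rate is from the
# article; the shift hours and expenses are hypothetical numbers chosen to
# show how a headline wage shrinks once unpaid on-the-clock time counts.
engaged_rate = 21.12   # $/hour, paid only while a passenger is in the car
engaged_hours = 3.0    # hypothetical hours with a fare aboard
unpaid_hours = 7.0     # hypothetical hours cruising/waiting, on the clock
expenses = 38.0        # hypothetical gas, insurance, and depreciation

gross = engaged_rate * engaged_hours
effective_wage = (gross - expenses) / (engaged_hours + unpaid_hours)
print(f"${effective_wage:.2f}/hour")
```

Divide pay by all the hours worked, rather than only the hours the boss chooses to count, and the "generous" wage collapses.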

Now, at this point, you may be thinking that this is only fair, or at least traditional. Livery cab drivers don't get paid unless they have a fare in the cab, right?

That's true, but livery cab drivers have lots of ways to influence that number. They can shrewdly choose a good spot to cruise. They can give their cellphone numbers to riders they've established a rapport with in order to win advance bookings. In small towns with just a few drivers – or in cities where drivers are in a co-op – they can spend some of their earnings to advertise the taxi company. Livery drivers can offer discounts to riders going a long way. It's a tough job, but it's one in which workers have some agency.

Contrast that with driving for Uber: Uber decides which drivers get to even see a job. Uber decides how to market its services. Uber gets to set fares, on a per-passenger basis, meaning that it might choose to scare some passengers off of a few of their rides with high prices, in a bid to psychologically nudge that passenger into accepting higher fares overall.

At the same time, Uber is reliant on a minimum pool of drivers cruising the streets, on the clock but off the payroll. If riders had to wait 45 minutes to get an Uber, they'd make other arrangements. If it happened too often, they'd delete the app. So Uber can't survive without those cruising, unpaid drivers, who provide the capacity that makes the company commercially viable.

What's more, livery cab drivers aren't the only comparators for gig-work platforms. Many gig workers deliver food, meaning that we should compare them to, say, pizza delivery drivers. These drivers aren't just paid when they have a pizza in the car and they're driving to a customer's home. They're paid from the moment they clock onto their shift to the moment they clock off (plus tips).

Now, obviously, this is more expensive for employers, but the Uber Eats arrangement – in which drivers are only paid when they've got a pizza in the car and they're en route to a customer – doesn't eliminate that expense. When a gig delivery company takes away the pay that drivers used to get while waiting for a pizza, they're shifting this expense from employers to workers:

https://pluralistic.net/2025/08/20/billionaireism/#surveillance-infantalism

The fact that Uber can manipulate the concept of a minimum wage in order to claim to pay $21.12/hour to drivers who are making $2.50 per hour creates all kinds of policy distortions.

Take Seattle: in 2024, the city implemented a program called "PayUp" that sets a "minimum wage" for drivers, but it's not a real minimum wage. It's a minimum payment for every ride or delivery.

A new National Bureau of Economic Research paper analyzes the program and concludes that it hasn't increased drivers' pay at all:

https://www.nber.org/papers/w34545

To which we might say, "Duh." Cranking up the sum paid for a small fraction of the work you do for a company will have very little impact on the overall wage you receive from the company.

However, there is an interesting wrinkle in this paper's conclusions. Drivers aren't earning less under this system, either. So they're getting paid more for every delivery, but they're not adding more deliveries to their day. In other words, they're doing less work and then clocking off:

https://marginalrevolution.com/marginalrevolution/2026/02/minimum-wages-for-gig-work-cant-work.html

A neoclassical economist (someone who has experienced a specific form of neurological injury that makes you incapable of perceiving or reasoning about power) would say that this means that the drivers only desire to earn the sums they were earning before the "minimum wage" and so the program hasn't made a difference to their lives.

But anyone else can look at this situation and understand that drivers only did this shitty job out of desperation. They had a sum they needed to get every month in order to pay the rent or the grocery bill. They have plenty of other needs they'd like to fulfill, but not under the shitty conditions of a gig-work app. The only reason they tolerate a shitty app as their shitty boss at all is that they are desperate, and that desperation gives gig companies power over their workers.

In other words, Seattle's PayUp "minimum wage" has shifted some of the expense associated with operating a gig platform from workers back onto their bosses. With fewer drivers available on the app, waiting times for customers will necessarily go up. Some of those customers will take the bus, or get a livery cab, or defrost a pizza, or walk to the corner cafe. For the gig platforms to win those customers back, they will have to reduce waiting times, and the most reliable way to do that is to increase the wages paid to their workers.

So PayUp isn't a wash – it has changed the distributional outcome of the gig-work economy in Seattle. Drivers have clawed back a surplus – time they can spend doing more productive or pleasant things than cruising and waiting for a booking – from their bosses, who now must face lower profits, either from a loss of business from impatient customers, or from a higher wage they must pay to get those wait-times down again.

But if you want to really move the needle on gig workers' wages, the answer is simple: pay workers for all the hours they put in for their bosses, not just the ones their bosses decide to pay for.

(Image: Tobias "ToMar" Maier, CC BY-SA 3.0; Jon Feinstein, CC BY 2.0; modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago HOWTO resist warrantless searches at Best Buy https://www.die.net/musings/bestbuy/

#20yrsago RIAA using kids’ private info to attack their mother https://web.archive.org/web/20060223111437/http://p2pnet.net/story/7942

#20yrsago Sony BMG demotes CEO for deploying DRM https://web.archive.org/web/20060219233817/http://biz.yahoo.com/ap/060210/germany_sony_bmg_ceo.html?.v=7

#20yrsago Sistine Chapel recreated through 10-year cross-stitch project https://web.archive.org/web/20060214195146/http://www.austinstitchers.org/Show06/images/sistine2.jpg

#15yrsago Selling cookies like a crack dealer, by dangling a string out your kitchen window https://laughingsquid.com/cookies-sold-by-string-dangling-from-san-francisco-apartment-window/

#15yrsago Midwestern Tahrir: Workers refuse to leave Wisconsin capitol over Tea Party labor law https://www.theawl.com/2011/02/wisconsin-demonstrates-against-scott-walkers-war-on-unions/

#10yrsago Back-room revisions to TPP sneakily criminalize fansubbing & other copyright grey zones https://www.eff.org/deeplinks/2016/02/sneaky-change-tpp-drastically-extends-criminal-penalties

#10yrsago Russian Central Bank shutting down banks that staged fake cyberattacks to rip off depositors https://web.archive.org/web/20160220100817/http://www.scmagazine.com/russian-bank-licences-revoked-for-using-hackers-to-withdraw-funds/article/474477/

#10yrsago Stop paying your student loans and debt collectors can send US Marshals to arrest you https://web.archive.org/web/20201026202024/https://nymag.com/intelligencer/2016/02/us-marshals-forcibly-collecting-student-debt.html?mid=twitter-share-di

#5yrsago Reverse centaurs and the failure of AI https://pluralistic.net/2021/02/17/reverse-centaur/#reverse-centaur

#1yrago Business school professors trained an AI to judge workers' personalities based on their faces https://pluralistic.net/2025/02/17/caliper-ai/#racism-machine


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
  • "Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1148 words today, 30940 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

10:07

Misguided optimization [Seth's Blog]

Industrialism brought us the idea of optimization. Incremental improvements combined with measurement to gradually improve results.

We can optimize for precision. A car made in 2026 is orders of magnitude more reliable than its predecessors because the parts fit together so well.

We can optimize for customer satisfaction. By reviewing every element of a user’s experience, we can remove the annoyances and increase delight.

We can optimize a horror movie to make it scarier, and we can optimize a workout to make it more effective.

Lately, though, the fad is to optimize for short-term profit.

This will probably get you a bonus. It means degrading the experience of customers, suppliers and employees in exchange for maximizing quarterly returns.

Make a list of every well-known organizational failure (from big firms like Yahoo to Enron to Sears all the way to the little pizza place down the block) and you’ll see the short-term optimizer’s fingerprints.

You can’t profit maximize your way to greatness.

09:21

Wrestling With My Body by Inam [Oh Joy Sex Toy]

Wrestling With My Body by Inam

Inam discovers Play Fighting in their explorations with their disability and desires. A lovely piece of autobio, exploring fast intimate moments of playful fighting that help us reconsider our own limitations. Instagram I didn’t know that Play Fighting was a thing, and it truly delights me. This is Inam’s third comic for us! You […]

08:35

Russell Coker: Links February 2026 [Planet Debian]

Charles Stross has a good theory of why “AI” is being pushed on corporations: really, we just need to replace CEOs with LLMs [1].

This disturbing and amusing article describes how an OpenAI investor appears to be having psychological problems related to SCP-based text generated by ChatGPT [2]. Definitely going to be a recursive problem as people who believe in it invest in it.

An interesting analysis of D-Bus and a design for a more secure replacement [3].

Scott Jenson gave an insightful lecture for Canonical about future potential developments in the desktop UX [4].

Ploum wrote an insightful article about the problems caused by the Github monopoly [5]. Radicale sounds interesting.

Niki Tonsky wrote an interesting article about the UI problems with Tahoe (the latest macOS release) due to trying to make an icon for everything [6]. They have a really good writing style, and the article is well researched.

Fil-C is an interesting project to compile C/C++ programs in a memory safe way, some of which can be considered a software equivalent of CHERI [7].

Brian Krebs wrote a long list of the ways that Trump has enabled corruption and a variety of other crimes including child sex abuse in the last year [8].

This video about designing a C64 laptop is a masterclass in computer design [9].

Salon has an interesting article about the abortion thought experiment that conservatives can’t handle [10].

Ron Garrett wrote an insightful blog post about abortion [11].

Bruce Schneier and Nathan E. Sanders wrote an insightful article about the potential of LLM systems for advertising and enshittification [12]. We need serious legislation about this ASAP!

02:56

New Cover: “But Not Tonight” [Whatever]

I was not expecting to make another cover so soon, so, uh, surprise: A cover of Depeche Mode’s most cheerful song, done as if Erasure decided to take a crack at it. Why did I do this? Because I was trying to clean up a previous version of this song that I did (it was sonically a little smeary and I hadn’t learned how to edit out when I loudly took in breaths), which necessitated laying down a new vocal track, and once I did that, one thing led to another, and here we are.

I am actually really happy with this one. I did harmonies! Intentionally! Also, I do think it really does sound kinda like Erasure covering Depeche Mode (if such a thing is possible, considering the bands have Vince Clarke in common). I mean, I don’t sing like Andy Bell, but then, who does, so, fine. Good enough for an afternoon! Enjoy.

— JS

02:21

Drag Race Episode Seven: The Rudemption of Myki Meeks [The Stranger]

We’re finally done with Rate-A-Queen! The cast is back to our regularly scheduled programming with parodies of hot-button political issues. In the words of Jane Don’t, “it’s a good day to be a clown.” by Mike Kohfeld

“Y’all are playing chess, I’m playing checkers. Wait, what’s the thing?”

Episode Seven began with the queens still reeling from their two-week Rate-A-Queen ordeal, in which the Miami alliance came out on top.

On Drag Race, queens love to talk too much after winning challenges or getting safe placements. Athena did the same, insisting her play was honest and not at all about strategy while the other queens rolled their eyes. When you’ve just won a challenge, it’s best to keep your mouth shut.

As if this wasn’t enough, the queens were given the Rate-A-Queen receipts. Mia looked stressed to see her ratings exposed… as if the producers were going to let any opportunity for drama slide. Nini was pissed that everyone had given her mid-ratings for her Mother Mantis bit, and let it get into her head: “Does everybody not like me?”

Kenya was pleased to have avoided the bottom through her alliance-building. “Y’all are playing chess, I’m playing checkers. Wait, what’s the thing?” Bless her.

Myki Meeks was rated in the bottom by the queens despite having a strong talent act, and the receipts nearly brought her to tears. In a T-shirt that said REVENGE, Myki looked ready to prove herself this week. Maybe she’ll go full Arya Stark and start snatching faces.

Emmy-Baiting Drag Politics

If you’re not living under a rock, you know the 2026 midterm elections are going to be crucial for prying at least a little bit of power away from the world’s worst people. Drag Race celebrated the occasion by bringing us “totally twisted political ads that parody today’s most polarizing issues.” RuPaul added: “I deserve a fucking Emmy for that line.”

The queens had a serious moment talking about the difficulty of living in red states with drag bans and the rise of violence against queer people during Trump’s second term. The most visceral account was Discord’s experience with a lifelong friend and roommate who, radicalized by right-wing anti-queer rhetoric seemingly overnight, destroyed almost all of Discord’s drag and artwork. Discord compared the current conservative movement to a cult. Hear, hear.

Mia balanced out the heaviness of the political discussion with a spontaneous dance party. It was the kind of genuine moment that has been missing in contemporary seasons of Drag Race.

The Future Liberals Want: Foreign Trade and Breastplate Socialism

The Main Challenge began when the queens were given five propositions on draggy subjects like breastplate entitlements, kai kai bans, and adding clowns to the LGBTQIA+ umbrella, paralleling real-world issues like bodily autonomy, trans rights, and immigration. I had to remind myself that this is a reality television show about drag queens acting stupid, but was interested to see how the cast would navigate the line between comedy and critique.

Discord and Nini did a sound job with “Prop Kiki.” Discord adopted a pro-kai kai stance as the sultry, sister-loving Lydia Liquorup. (Queer vocabulary lesson #1: to kiki is to chat, gossip, or tell stories; kai kai refers to sexual relations between drag queens.) Discord’s stage-whispered hook, “date a sister,” is destined to become a queer vocal stim in the manner of Valentina and Naomi Smalls’ “Club 96” (All Stars Season 4) or Alaska’s “your makeup is terrible” (Season 5). Nini struggled while recording the skit, but turned out a conservative church lady arguing against sister-dating, keeping the pair safe.

Darlene and Vita could not have been more dissimilar in their performances for “Prop 4Real.” Vita has struggled in past performance challenges, and this week was no different. She landed in the bottom for her stiff portrayal of a “traditional” drag queen.

In contrast, Darlene’s “bedroom queen bimbo” was hysterical, with the judges calling her performance “really stupid.” So stupid, in fact, that Darlene earned a top placement for the week.

Athena and Myki had fully-realized and memorable characters for “Prop 6969,” which sought to ban foreign trade (Queer vocabulary lesson #2: “trade” is queer slang for a masculine, straight-acting man).

Athena sold us an eerily convincing Republicanesque character named Connie Cumminside in support of Prop 6969. Her lustful desire to ban trade was giving MAGA backlash to Bad Bunny’s recent Super Bowl LX performance.

Myki was the standout of the week, with a punny performance arguing for steamy relations with foreign trade: “I’m concerned American citizen Stephanie Miller. But you can call me Lollipop!” Her playful irreverence won her the challenge. It felt like a karmic rebalance after Rate-A-Queen.

Meanwhile, Mia and Juicy struggled to write material for “Prop DD,” where Mia argued to require breastplates and padding for all drag queens while Juicy embraced a natural, environmentally-friendly “hog body.” Mia got some laughs, but Juicy floundered.

The pair fell into the bottom three. (If there had been a lip-sync-for-your-life between Mia and Juicy this week after they tied in a lip-sync-for-the-win two episodes ago, my wig would have flown into the troposphere.)

Can Somebody Just Treat My Gonorrhea?

Jane Don’t and Kenya worked together for “Prop C,” naming the pros and cons for adding clowns to the LGBTQIA acronym. Arguing against Prop C, Kenya played a decorated diva concerned about how “drag bars have been held captive by silly-ass drag queens who prioritize jokes and concepts over gowns.” Not in Seattle, surely! *clutches pearls*

Jane Don’t played Daisy Funbuttons, the gonorrhea-ridden Professor of Nose-Honking at Pacoima Community Clown College (this is literally the stupidest sentence I’ve ever written). Her performance was Drag Race comedy perfection, and she was ranked in the top by the judges. We really need to just crown her now. Or at the very least, get her some antibiotics.



I Can See Right Through Her

These Season 18 girls brought some serious budget to the main stage, and the see-through outfits of Episode Seven did not disappoint.

Nini’s candy-wrapper look was sublime. If winning was solely about runway looks, Nini would be in the number one spot.

Jane’s Leigh Bowery-inspired checkered bodysuit with a short sheer pink dress fit the brief, but wasn’t as spectacular as her past looks. I later learned that she crafted it last-minute because her original designer didn’t deliver this look on time. What is it with late designers for these queens!?



For her see-through business suit, Myki Meeks expressed, “the quality I cherish most in a workplace is transparency.” Snaps, girl. The judges loved it too, with RuPaul exclaiming, “this is what the whores wear in Seattle!” Maybe Myki can come live here, too.

Vita’s Last Act

Juicy’s Met Gala-worthy tulle fantasy and Vita’s divine water goddess (it was giving Yemayá) were superb, but their performances landed them in the bottom. The rest of the cast reacted with shock. “Vita versus Juicy? Two people I thought were gonna make it to the end!” said Discord. “I don’t even wanna watch this.”

But this was must-see TV. Vita held her own, but there is no stopping the elemental force that is Juicy Love Dion on the mainstage. Set to Dua Lipa’s “Houdini,” Juicy swept the lip-sync with grace, emotion, and jaw-dropping skill, including a handstand that tipped backwards into a split. There was no way RuPaul was going to let Juicy sashay away, and Vita was given the boot. I hope to see her in All Stars!

Next week, it’s the challenge you’ve been waiting for (or dreading): Snatch Game! Either way, this is not one to miss. I’m ready for Jane to earn a second win!

00:00

Antoine Beaupré: Keeping track of decisions using the ADR model [Planet Debian]

In the Tor Project system administrators' team (colloquially known as TPA), we've recently changed how we make decisions, which means you'll get clearer communications from us about upcoming changes or targeted questions about a proposal.

Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams, inside and outside Tor, to evaluate this process and see if it can improve their own processes and documentation.

The new process

We had traditionally been using a "RFC" ("Request For Comments") process and have recently switched to "ADR" ("Architecture Decision Record").

The ADR process is, for us, pretty simple. It consists of three things:

  1. a simpler template
  2. a simpler process
  3. communication guidelines separate from the decision record

The template

As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. The TPA variation of the template is similarly simple, as it has only 5 headings, and is worth quoting in full:

  • Context: What is the issue that we're seeing that is motivating this decision or change?

  • Decision: What is the change that we're proposing and/or doing?

  • Consequences: What becomes easier or more difficult to do because of this change?

  • More Information (optional): What else should we know? For larger projects, consider including a timeline and cost estimate, along with the impact on affected users (perhaps including existing Personas). Generally, this includes a short evaluation of alternatives considered.

  • Metadata: status, decision date, decision makers, consulted, informed users, and link to a discussion forum

The previous RFC template had 17 (seventeen!) headings, which encouraged much longer documents. Now, the decision record will be easier to read and digest at a glance.
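For illustration only, here is a hypothetical record written against that five-heading template (the number, content, and links are invented for this sketch, not an actual TPA ADR):

```
# ADR-NNN: Replace the RFC process with ADRs

## Context
Our RFC template had 17 headings, which encouraged long documents
that few people read in full.

## Decision
Adopt a five-heading template based on the Nygard model; move
detailed comparisons into the discussion issue.

## Consequences
Records are shorter and easier to digest at a glance; pricing and
alternative evaluations live in GitLab instead of the document.

## More Information
Alternatives considered: keeping the current RFC template.

## Metadata
Status: adopted. Decision makers: team lead. Consulted: TPA team.
Informed: tor-project. Discussion: (link to GitLab issue)
```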

An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping in a document all sorts of details like pricing or in-depth alternatives comparison, we record those in the discussion issue, keeping the document shorter.

The process

The whole process is simple enough that it's worth quoting in full as well:

Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.

Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple.

A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision; the new template (and process) names the decision makers for each decision.

Conversely, some decisions degenerate into endless discussions around trivial issues because too many stakeholders are consulted, a problem known as the Law of triviality, also known as the "Bike Shed syndrome".

The new process better identifies stakeholders:

  • "informed" users (previously "affected users")
  • "consulted" (previously undefined!)
  • "decision maker" (instead of the vague "approval")

Picking those stakeholders is still tricky, but our definitions are more explicit and aligned to the classic RACI matrix (Responsible, Accountable, Consulted, Informed).

Communication guidelines

Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from communicating about the decision. Those are two radically different problems to solve. We have found that a single document can't serve both purposes.

Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, to keep things simple.

How we got there

The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. Then, in parallel, I was in the process of reviewing the RFC process, following Jacob Kaplan-Moss's criticism of the RFC process. Essentially, he argues that:

  1. the RFC process "doesn't include any sort of decision-making framework"
  2. "RFC processes tend to lead to endless discussion"
  3. the process "rewards people who can write to exhaustion"
  4. "these processes are insensitive to expertise", "power dynamics and power structures"

And, indeed, I have been guilty of a lot of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted by exhaustion, or ignored because not looping in the right stakeholders.

Our discussion issue on the topic has more details on the issues I found with our RFC process. But to give credit to the old process, it did serve us well while it was there: it's better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) made over the course of 6 years of work.

What's next?

We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late.

Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but, hopefully, worth it, as the result is more digestible.

We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in hearing from people who are already using a similar process, or who will adopt one after reading this.

Note: this article was also published on the Tor Blog.

Monday, 16 February

23:28

Joe Marshall: binary-compose-left and binary-compose-right [Planet Lisp]

If you have a unary function F, you can compose it with function G, H = F ∘ G, which means H(x) = F(G(x)). Instead of running x through F directly, you run it through G first and then run the output of G through F.

If F is a binary function, then you either compose it with a unary function G on the left input: H = F ∘left G, which means H(x, y) = F(G(x), y) or you compose it with a unary function G on the right input: H = F ∘right G, which means H(x, y) = F(x, G(y)).

(binary-compose-left f g)  = (λ (x y) (f (g x) y))
(binary-compose-right f g) = (λ (x y) (f x (g y)))

We could extend this to ternary functions and beyond, but it is less common to want to compose functions with more than two inputs.

binary-compose-right comes in handy when combined with fold-left. This identity holds

 (fold-left (binary-compose-right f g) acc lst) <=>
   (fold-left f acc (map g lst))

but the right-hand side is less efficient because it requires an extra pass through the list to map g over it before folding. The left-hand side is more efficient because it composes g with f on the fly as it folds, so it only requires one pass through the list.
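The same identity can be sketched in Python, where functools.reduce is a left fold (the names below are illustrative translations of the Lisp originals, not a standard library API):

```python
from functools import reduce

def binary_compose_left(f, g):
    # H(x, y) = F(G(x), y)
    return lambda x, y: f(g(x), y)

def binary_compose_right(f, g):
    # H(x, y) = F(x, G(y))
    return lambda x, y: f(x, g(y))

add = lambda acc, x: acc + x
square = lambda x: x * x
lst = [1, 2, 3, 4]

# One pass: g is applied to each element on the fly as we fold.
one_pass = reduce(binary_compose_right(add, square), lst, 0)

# Two passes: map g over the list first, then fold f.
two_pass = reduce(add, map(square, lst), 0)

assert one_pass == two_pass == 30  # 1 + 4 + 9 + 16
```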

22:28

Stranger Suggests: An Award-Winning Author, Young Bruce Lee, and Clubbing at the Cottage [The Stranger]

One really great thing to do every day of the week! by Julianne Bell

MONDAY 2/16

The Pains of Being Pure at Heart with Living Hour

(MUSIC) The Pains of Being Pure at Heart is one of those indie bands that shaped the soundscape of the late aughts. Their shoegazey, synth-spiked rock blossomed out of New York as the band played shows around the city and shared songs on MySpace (RIP). The Pains—as they're affectionately known—disbanded in 2019 after releasing five albums, but announced a reunion in 2024 to celebrate the 15th anniversary of their debut album with shows across Europe and North America. It's been over 10 years since the group played Seattle, and I can't wait to sing along to every word of "A Teenager in Love," a bouncy track off their self-titled debut fitting for the week after Valentine's Day. Canadian dream pop/fuzzy rock band Living Hour and Portland alt-rock group the Prids round out the lineup. (Vera Project, 7 pm, all ages) SHANNON LUBETICH

TUESDAY 2/17  

Nicola Griffith

(BOOKS) Seattle-based author and self-described “queer cripple with a PhD” Nicola Griffith has received countless honors, including two Washington State Book Awards and six Lambda Literary Awards, and was inducted into MOPOP’s Science Fiction and Fantasy Hall of Fame in 2024. Her novels Hild, Spear, and Menewood explore the medieval era through a queer perspective, and she also cofounded the #CripLit movement with the late activist Alice Wong. Her latest work, She Is Here, is a new installment in PM Press’s Outspoken Authors series, in which “today’s edgiest fiction writers showcase their most provocative and politically challenging stories.” Griffith’s contribution combines fiction, nonfiction, poetry, and artwork to discuss topics ranging from disability justice to the distinction between love and ownership. (Third Place Books Ravenna, 7 pm, all ages, free) JULIANNE BELL

WEDNESDAY 2/18  

History Pub: The Vanguard Generation: African American Artists, 1880-1918

(TALKS) Before the Harlem Renaissance, there was the Vanguard Generation, aka the first wave of Black artists and performers who helped shape American popular culture in the shadow of Jim Crow. Active in the years between the Civil War and World War I, many were the first in their families to be born free (or to attend college), creating art under extraordinary constraints. Drawing from newly uncovered archival documents, scholar Daniel E. Atkinson brings their stories of talent, conflict, and solidarity to life for this unique edition of History Pub. Hosted in partnership with Humanities Washington, this event is a reminder that Black innovation has always been foundational. (Spanish Ballroom, Tacoma, 7 pm, all ages) LANGSTON THOMAS

THURSDAY 2/19  

Young Dragon: A Bruce Lee Story

(PERFORMANCE) Keiko Green is a playwright, screenwriter, and performer who splits her time between Seattle and LA, and has written for TV shows like Hulu’s Interior Chinatown and the upcoming Apple TV series Margo’s Got Money Troubles. Last fall, Seattle hosted productions of two of her plays: Exotic Deadly: Or the MSG Play, a wacky time-traveling comedy set in 1999, and Hells Canyon, a chilling horror thriller. Now, there’s another opportunity to glimpse even more of Green’s impressive range with the Seattle Children’s Theatre premiere of her play Young Dragon, which shows Bruce Lee as an ambitious young man finding his place in the world in Seattle. I’m willing to bet audience members of all ages will be moved by Bruce’s journey to becoming a “flexible, fluid, and flowing master.” Seattle Children’s Theatre recently made the difficult decision to pull a two-week April run of Young Dragon from the Kennedy Center due to the impact of the Trump administration, which makes it even more important to support local productions like this one. (Seattle Children’s Theatre, times vary) JULIANNE BELL

FRIDAY 2/20  

Next Exit

(PERFORMANCE) Meet j. chavez, a Seattle theatre maven who won the KCACTF (Kennedy Center American College Theater Festival)’s National Undergraduate Playwriting Award (whew), for their opus how to clean your room (and remember all your trauma). Their new play Next Exit deals passionately, yet sympathetically, with a man named Miguel trapped on a highway (sans car, I think), who is communing with and deriving philosophical companionship from a dead possum called Orlando. Some deer come out, and a Lady In Yellow, and a sinister force that threatens to eat up anyone and anything lingering too long by the sizzling side of I-5. I’m not clear on how this all flows together. But you should indubitably find out. (Annex Theatre, times vary, all ages) ANDREW HAMLIN

SATURDAY 2/21  

Club 90s: Heated Rivalry

(PARTIES & NIGHTLIFE) I’m grateful for the spark of euphoria that the low-budget Canadian hockey drama Heated Rivalry has brought to the internet over the last few months. No matter what horrifying apocalyptic shit is happening in the news, NO ONE CAN TAKE THE COTTAGE FROM US. Composer Peter Peter’s soundtrack to the blockbuster show is equally as hot-and-heavy and obsession-worthy, and episodes also include some banger needle drops. At this rave, dance to Wolf Parade’s "I'll Believe in Anything,” and reenact the haunted club scene as Harrison’s cover of t.A.T.u.’s “All the Things She Said” blares. This trendy event aims to let off some collective steam and celebrate queer joy—all this sexual tension has to go somewhere outside of streaming “KISS!” at every Kraken game. (The Showbox, 8:30 pm, 18+) BRI BREY

SUNDAY 2/22  

Bitchin Bajas, Geologist

(MUSIC) Don’t be deceived by Chicago trio Bitchin Bajas’ goofy name: They’re one of the world’s headiest groups. Evolving out of neo-krautrockers Cave, BB synthesists Cooper Crain and Dan Quinlivan and saxophonist Rob Frye have been enhancing their melodic chops, creating majestic tracks that would sound righteous filling Europe’s most ornate cathedrals. This past October at Neptune Theatre, they outshone their much more celebrated headliners Stereolab in a set that made me feel as if I were on five hits of Owsley. Animal Collective member Geologist (aka Brian Weitz) just released Can I Get a Pack of Camel Lights?, the follow-up to last year’s arcane, abstracted Americana LP, A Shaw Deal, with Sleepy Doug Shaw. The new hurdy-gurdy-powered album’s a mystical avant-rock trip that I dig more than anything his parent group have done. (Sunset Tavern, 8 pm, 21+) DAVE SEGAL

21:07

Ideas for the fediverse [Scripting News]

Bullet items for the Fediforum conference in March.

  • Subscribing must be easy.
  • Some things will work better if they're slightly centralized, esp subscribing.
  • Use DNS for naming people.
  • Support RSS in and out, and test it once you add the feature; so many easy things to fix remain broken (feed titles, for example, look terrible in a list of feeds). RSS is how you earn the "web" in your name. "Web" means something; it's an intention, there are rules.
  • You don't need "open" if you have "web." The web is by definition open. Water is wet. Raises question re what the not-open web is. (Silo.)
  • Support the basic features of text in the web. If you shut off the writing features of the web, as Twitter did, you're not really part of the web. Especially linking.
  • Listen to users, listen to other developers.
  • Automattic is doing heroic work connecting WordPress to ActivityPub. This means that WordPress APIs are now ActivityPub APIs. Not a small thing.
  • Look at text coming out of WordPress into Mastodon; the HTML used could definitely be improved. These seem like pretty simple things to fix, and the simple things matter. Example: WordPress version. Mastodon version of the same post. Let's make this beautiful!
  • Keep trying fundamentally new architectures.
  • Learn from past mistakes.
  • Interop is paramount.
  • Don't re-invent.

BTW, this can be read on my blog, on Mastodon, in WordPress and of course my feeds (and thus can be read in any app that supports inbound RSS).

20:49

A Lovely Valentine’s Day Dinner At Dozo [Whatever]

If you caught my last two posts about Dozo, Dayton’s premier underground sushi dining experience, then you already know how much I love it. What better way to celebrate the day of love than with Dozo’s special Valentine’s Day 7-course omakase-style chef’s menu, which offers off-menu selections and limited, intimate seating at the bar so you can watch the chefs work their magic? And trust me, it is indeed magic.

Not only was I extremely excited about the curated sushi menu and brand new sake pairing to go alongside it, but Tender Mercy (the bar that houses Dozo) posted their Valentine’s Day cocktail line-up a few days ago, and it looked incredible, as well.




Long story short, I knew my tastebuds were in for a real treat.

I booked the 8:30pm slot on their first day of offering this menu, which was Tuesday. Getting a later start to dinner than usual only made me that much hungrier for what was to come.

I got to Tender Mercy about twenty minutes early, so I just had a seat at their bar and perused the special cocktail menu:

A small paper menu listing Tender Mercy's Valentine's Day cocktails. There's a detailed border in the corners and two Cupid-esque angels in the top corners. There's four cocktails and one NA cocktail listed.

I love this dessert cocktail menu because whatever your poison is, they’ve got it. A gin drink, a vodka cocktail, even tequila and bourbon. And, of course, a mocktail. They all sounded so delicious but also very rich, and I didn’t want to spoil my appetite with something on the heavier side (like that cheesecake foam, YUM), so I actually opted for the Pillow Princess and asked the bartender to put his spirit of choice in it. He said he recommended Hennessy Cognac (I’m pretty sure it was Hennessy Very Special, but I’m just guessing from the brief look I got at the bottle).

I can’t say I’ve had Cognac all that much, but the sweet, almost vanilla-like flavor of the Hennessy worked super well in it.

A small rocks glass with an orange-ish yellow liquid in it with a little bit of a foamy layer on top. There's a metal cocktail pick with raspberries and blueberries on it on top. The drink sits on a black, leather-looking bar and the beautifully lit wood and glass shelves of the bar can be seen in the background.

I’m glad I went with the bartender’s recommendation; he’s truly a pro and has never steered me wrong before, so I trust his judgement a hundred percent.

After a few minutes, it was time to get seated in Dozo. There were only six of us total at the bar, a group of three on my right and a couple on my left. Our menu was tucked into our envelope shaped napkin and I briefly surveyed what was going to be served.

A small paper menu labeled

Truly the most eye-catching dish was the wasabi ice cream. Listen, I trust Dozo, but man, did that sound absolutely bonkers. I held strong in my faith, though.

Per usual, I went with the sake pairing, because when else do I get to try so many different expertly curated sakes? Plus, the chef said he tried each of the sake pairings and highly recommended it.

Up first was a spicy salmon onigiri:

A big ol' triangle of onigiri. The rice is more of like a brownish color instead of pure white, with visible flecks of seasoning throughout. It's served on a small square matte black plate.

I wasn’t sure how spicy the salmon would actually end up being, so I had my water on standby. After getting through the warm, soft, perfectly seasoned rice, I was met with a generously portioned salmon filling that wasn’t at all too spicy! This onigiri was hands down the best one I’ve ever had, though I will admit my experience is rather limited in that department. It’s not every day I have an onigiri, but this one definitely takes the cake.

For the sake pairing I was served Amabuki’s “I Love Sushi” Junmai. Obviously, this is a fantastic name for a sake. It says all you need to know about it right in the name, plain and simple. Jokes aside, this was a perfectly fine sake. With a dry, crisp flavor, it didn’t really stand out to me much but paired well with the umami flavor of the onigiri.

Off to a great start (I expected no less), the second course was looking mighty fine:

Three pieces of nigiri in a row on a rectangular matte black plate.

From left to right, we have hamachi (yellowtail), hirame (flounder), and skipjack tuna. The hamachi’s wasabi sauce packed a ton of great wasabi flavor without painfully clearing my sinuses. It had just the right amount of strength, a very balanced piece. The flounder was exceptionally tender with a melt-in-your-mouth texture. The skipjack has always been a tried and true classic in my previous Dozo experiences, and today’s serving of it was no different. All around a total winner of a course, with tender, umami packed pieces.

To accompany this course, I was served Takatenjin “Soul of the Sensei,” which is a Junmai Daiginjo. This sake is made with Yamadanishiki, which is considered to be the king of sake rice. “Soul of the Sensei” was created as a tribute to revered sake brewer Hase Toji. Much like the first sake we were served, it was crisp with a slight dryness, pairing well with the fresh fish and savory flavors. It had just a touch of melon.

Up next was this smaller course with a piece of chu toro and a piece of smoked hotate:

Two pieces of nigiri on a small round black plate, one piece a dark pink fatty tuna and the other an orangeish beige colored piece of smoked scallop.

Both pieces looked stunning and fresh. The chef explained that chu toro is the fatty belly meat of the tuna, which is a more prized and delicious cut, a real treat. Indeed, it was very buttery and had a rich mouthfeel. I didn’t know what hotate was, but it turns out it’s a scallop, and I think they mentioned something about hotate scallops come from a specific region in Japan, but I might be misremembering. Anyways, I love scallops, but I’ve definitely never had one that’s been smoked before. It was fun to watch the chef smoke all of the pieces before dishing them out.

Oh my goodness, this piece was incredible. It had a luscious texture and a complex, beautiful smokiness that didn’t detract from the flavor of the scallop. It was a masterfully smoked piece of high-quality, fresh scallop. Remarkable piece! Great course all around.

Instead of sake for this course, we were served a shot of Suntory whisky, but I have no idea which type specifically. Maybe the Toki? But it also very well could’ve been the Hibiki Harmony, because the shot was definitely a dark, ambery color. I wish I had a palate for whisky, especially the premium Japanese whisky that the kitchen so generously gifted to each guest, but truthfully it was a tough couple of sips for me. Like fire in my throat, that shit put some damn hair on my chest. Super grateful for the lovely whisky, but sheesh, it definitely burned. The chefs actually took the shot with us, how fun!

Fear not, there was some lovely mushroom and yuzu ramen on the way to ease the pain:

A beautiful stoneware bowl filled with ramen noodles and a lovely broth, garnished with small green onion pieces.

This ramen is actually vegetarian, made with umami-packed mushrooms and bright yuzu citrus. The green onions and drops of chili oil drizzled on top added a fantastic balance of flavors for a well-rounded, hearty, warm bowl of delicious ramen that was good to the last drop. I wish they had ramen more often, it was so great to sip on some warm broth while it was below freezing outside. I absolutely loved the stoneware bowl it was served in, I would love to have something like that in my own kitchen.

For the sake, this one was truly special: Hana Makgeolli’s “MAQ8 Silkysonic.” Look how CUTE these cans are! These adorable single-serve cans contain a fun, slightly bubbly, just-a-touch-sweet sake that was a great addition to the night’s line-up. At 8%, it’s a bit lower in alcohol than some other sakes, so you can enjoy more than one can of this bubbly goodness if you so desire.

I was definitely pretty full by this point, but I powered on for this next course consisting of some torched sake, unagi, and suzuki.

Three pieces of nigiri lined up on a black rectangular matte plate.

It was a little confusing that the first piece of fish in this lineup was called sake, since I assumed sake was just the drink we all know and love, but sake is actually also the Japanese word for salmon. It was fun to watch the chefs use a blowtorch on the salmon, as any course involving fire is a great course. The salmon had a sauce on top that, I hate to say, I can’t remember. I know, I had one job! I should’ve taken better notes, but there was so much going on between being served the sake and having its specifics explained, the chefs explaining the whole course, plus the couple next to me conversing with me (we had lovely conversations). It was a lot, okay! Sauce aside, the salmon was excellent and beautifully torched.

For the unagi, I actually love eel, so I knew this piece was about to be bomb. With the sweet, thick glaze on top and fresh slice of jalapeno, this piece was loaded with deliciousness. I was worried the jalapeno slice would bring too much heat to the dish for me, but it was perfect and not hot at all, just had great flavor.

The final piece, suzuki, is Japanese sea bass. There is a small pickled red onion sliver on top (it is not a worm, to be clear). Apparently, the Japanese sea bass is known by different names depending on how mature the fish is, with suzuki being the most mature stage. This piece was very simply dressed, and the tender fish spoke for itself.

The sake for this course was Tentaka’s “Hawk in the Heavens” Tokubetsu Junmai. Much like with the food of this course, I should have taken better notes, because I don’t remember this sake at all. I don’t remember what it tasted like, my thoughts on it, nothing. I didn’t even remember the name until I looked at the menu again. I am so sorry, it is truly only because it was the sixth course and I had just taken a shot and was busy talking! Forgive me and we shall move on.

For our last savory course, it was two pieces of the chef’s choice:

Two pieces of nigiri, one being fish and one being wagyu.

The chefs said in honor of it being Valentine’s Day, they wanted to give us a bit more of a lux piece, and opted for wagyu and torched toro. Sending off the savory courses with wagyu was truly a delight, it really provided the turf in “surf and turf.” Every time I’ve had wagyu from Dozo it’s been so tender and rich, the fat just melting in my mouth. It’s also a fun novelty since I don’t really have wagyu anywhere else.

Finally, it was time for dessert. I couldn’t wait to try the wasabi ice cream:

A small glass coupe shaped bowl holding the wasabi ice cream. There's crushed wasabi peas on top.

I would’ve never imagined that wasabi ice cream could be even remotely edible, let alone enjoyable, but oh my gosh. Oh my gosh. How was this so good?! The creaminess contrasting with the crunchy wasabi peas, the perfect amount of sweetness mixing with the distinct flavor of the wasabi, LORD! It was incredibly, bizarrely delicious. The wasabi didn’t have that sinus-clearing bite to it, yet retained its unmistakable flavor. What a treat.

For the final sake, I was served Kiuchi Brewery’s “Awashizuki” Sparkling Sake. I was particularly excited for this one because I love sparkling sakes, they are undoubtedly my favorite category of sake. Anything with bubbles is just better! I will say that the Awashizuki seemed to be much more lowkey on the bubbles than some other sparkling sakes I’ve had before. The bubbles were a bit more sparse and toned down, but it was still lightly carbonated enough that you could tell it wasn’t still. It was sweeter and more refreshing than the others in the evening’s lineup, which makes sense since it was the dessert course pairing. I really liked this one!

All in all, I had yet another fantastic experience at Dozo, and I absolutely loved their Valentine’s Day lineup. The limited seating at the bar made it feel all the more exclusive and special, and every course was totally delish. I got to try lots of new sakes and have really nice chats with the people next to me, and really just had a great evening all around.

The ticket for this event was $95, after an added 18% gratuity and taxes, it was more like $125. The sake pairing was $50 and I also tipped the waitress that was pouring the pairings and telling me about them. It was definitely a bit of a splurge event but hey, it was for V-Day! Gotta treat yourself. And I’m so glad I did!

Which piece of fish looks the most enticing to you? Or perhaps the ramen is more your speed? Have you tried any of the sakes from the lineup? Let me know in the comments, and have a great day!

-AMS

20:21

Scarathon [Penny Arcade]

Fortnite oversaw the transition of Battle Royale from game to genre, and I think ARC Raiders performed the same trick for Extraction - and in a very similar way. Tarkov and PUBG are both loping, all fours, half-clad man beasts with their dicks out in a public park. Their own skin feels too tight, somehow; they're scratching themselves on the rough bark of trees just to get a moment of release. Fortnite and ARC Raiders are, by way of comparison, videogames.

20:07

Philipp Kern: What is happening with this "connection verification"? [Planet Debian]

You might see a verification screen pop up on more and more Debian web properties. Unfortunately, the AI world of today is meeting web hosts that use Perl CGIs and are not built as multi-tiered, scalable serving systems. The issues have been at three layers:

  1. Apache's serving capacity runs full - with no threads left to serve requests. This means that your connection will sit around for a long time, not getting accepted. In theory this can be configured, but that would require requests to be handled in time.
  2. Startup costs of request handlers are too high, because we spawn a process for every request. This currently affects the BTS and dgit's browse interface. packages.debian.org has been fixed, which increased scalability sufficiently.
  3. Requests themselves are too expensive to be served quickly - think git blame without caching.

Optimally we would go and solve some scalability issues with the services, however there is also a question of how much we want to be able to serve - as AI scraper demand is just a steady stream of requests that are not shown to humans.

How is it implemented?

DSA has now stood up some VMs with Varnish for proxying. Incoming TLS is terminated by hitch, and TLS "on-loading" toward the backends is done using haproxy. That way TLS goes in and TLS goes out. While Varnish does cache content when it is cacheable (e.g. responses that do not depend on cookies), that is not the primary reason for using it: it can be used for flexible query and response rewriting.

If no cookie with a proof of work is provided, the user is redirected to a challenge page that does some webcrypto in Javascript, because that looked similar to what other projects do (e.g. haphash, which originally inspired the solution). However, so far it looks like scrapers generally do not run with Javascript enabled, so this whole crypto proof-of-work business could probably be replaced with just a Javascript-based redirect. The existing solution also has big (security) holes in it. And, as we found out, Firefox is slower at webcrypto than Chrome. I have recently reduced the complexity, so you should notice it blocking you significantly less.
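
The proof-of-work idea can be sketched in miniature. This is a hedged illustration only: the function names, the use of std::hash as a stand-in for the browser's webcrypto digest, and the threshold scheme are all assumptions, not Debian's actual implementation. The asymmetry is the point: the client burns CPU searching for a qualifying nonce, while the server verifies the claim with a single hash.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <string>

// Client side: search for a nonce whose hash of (seed + nonce) falls
// below a difficulty threshold. Lower threshold = more work.
std::uint64_t solve_challenge(const std::string& seed, std::uint64_t threshold) {
    std::hash<std::string> h;
    for (std::uint64_t nonce = 0;; ++nonce) {
        if (h(seed + std::to_string(nonce)) < threshold)
            return nonce;  // found a qualifying nonce
    }
}

// Server side: a single hash re-checks the client's claimed work.
bool verify_challenge(const std::string& seed, std::uint64_t nonce,
                      std::uint64_t threshold) {
    return std::hash<std::string>{}(seed + std::to_string(nonce)) < threshold;
}
```

A cookie would then carry the seed and nonce, so the work is paid once per cookie lifetime rather than per request.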

Once you have the cookie, you can keep accessing the site for as long as the cookie is valid. Please do not make any assumptions about the cookies, or you will be broken in the future.

For legitimate scrapers that obey robots.txt, there is now an automatically generated IP allowlist in place (thanks, Marco d'Itri). Turns out that the search engines do not actually run Javascript either and then loudly complain about the redirect to the challenge page. Other bots are generally exempt.

Conclusion 

I hope we have now found something like the sweet spot, where the admins can stop spending human time on updating firewall rules and the services are generally available, reasonably fast, and still indexed. In case you see problems or run into a block with your own (legitimate) bots, please let me know.

19:21

[$] Open source security in spite of AI [LWN.net]

The curl project has found AI-powered tools to be a mixed bag when it comes to security reports. At FOSDEM 2026, curl creator and lead developer Daniel Stenberg used his keynote session to discuss his experience receiving a slew of low-quality reports and, at the same time, realizing that large language model (LLM) tools can sometimes find flaws that other tools have missed.

Slog AM: The Government’s Out of Money Again, Two Immigrants Were Released from Tacoma’s Detention Center, and Who Knows, We Might Get Some Snow [The Stranger]

The Stranger's morning news roundup. by Hannah Murphy Winter

Good Morning! It’s Presidents Day, which our president is celebrating with an AI-generated Time magazine cover and the quote: “I was the hunted, and now I’m the hunter.” This is probably what George Washington had in mind, right?

The Weather: We had our taste of False Spring, and now we’re back to winter for a bit. Highs in the 40s, lows right around freezing, and we might even get a little snow later in the week. 

Some Good News: ICE released Wilmer Toledo-Martinez, a Vancouver, WA man who was mauled by an ICE dog in December, from the Northwest Detention Center. He still has to continue his immigration case, but he’s doing it from home with his wife (who is an American citizen) and three kids. 

And He Wasn’t the Only One: Greggy Sorio, a Filipino immigrant who came to the US on a green card, had to lose a part of his foot to infection, bleed out of his rectum for a month, and lose a “dramatic” amount of weight before a judge demanded that he be released from the Northwest ICE Processing facility in Tacoma. Sorio is able to access real medical care now, but he’s still at risk of deportation. 

Another Shutdown: In a Valentine’s gift to us all, the Department of Homeland Security technically ran out of funds on Saturday while Dems in Congress try to fight for some limitations on ICE’s funding. Unfortunately, ICE and Border Patrol will barely be affected. And nearly 85 percent of FEMA employees and 95 percent of TSA’s are expected to work without pay through the shutdown. 

ICYMI: Last week was a really good week for local anti-ICE legislation. City Councilmember Alexis Mercedes Rinck introduced a moratorium on new detention centers in Seattle, and the County and the Port are blocking immigration agents from using their non-private land. Next up: let’s talk about the CCTV and ALPR cameras. 

Trump Bombs 39th Boat: On Saturday, the military bombed another supposed narco-trafficking boat in the Caribbean. This illegal 5-month campaign, theoretically to fight the drug trade, has killed 133 people. This bombing killed three.

Pity the Millionaires: According to the Seattle Times’s Danny Westneat this weekend, taxing the rich is generally a popular and successful proposition. We learned that last week when, it turns out, the tax meant to fund our Social Housing Developer brought in more than double what was projected in its first year. (As Mayor Katie Wilson put it, this city is “filthy rich.”) And we know that the Millionaire Tax currently scooching through the leg is wildly popular. But Dems in the state leg (and Jamie Pedersen, specifically) are still considering a rollback for our Estate Tax to avoid the myth of the Fleeing Rich People.

Sheriff Certification: Right now in our state law, elected sheriffs are required to get certified by the Washington State Criminal Justice Training Commission within a year of taking office. Seems reasonable, right? But right now, if they just… don’t do it, there’s nothing anyone can do about that. The state legislature introduced a bill that would oust sheriffs who aren’t certified. So naturally, Pierce County’s hyperconservative, transphobic Sheriff Keith Swank thinks it’s unfair. 

A Headline From Popular Mechanics to Break Up the Doldrums Today: “Jesus Was a Psychedelic Mushroom, a Controversial Theory Suggests. Could It Reshape Christianity Forever?” 

Olympic Breakdown: NBC spent the first half of the games talking about American figure skater Ilia “Quad God” Malinin as the new face of the sport and the inevitable gold medalist. And he is the only person who’s ever landed a quad axel in an international competition. But in his final skate in the competition, the 21-year-old fell twice, struggled to deliver any of the quad jumps he’s famous for, and ended up placing eighth in the competition. Watching reporters try to make him explain what happened within minutes of his walking off the ice was brutal, and he handled it with a lot of grace. He told the Athletic that he was feeling overwhelmed when he got onto the ice. “I just felt like all the just traumatic moments of my life really just started flooding my head,” he said. “And there’s just like so many negative thoughts that just flooded into there and I just did not handle them.” We’ll see him again in four years, and by then he’ll surely have figured out how to fight the yips. 

The Curlers Are Fighting: Both the men’s and women’s Canadian curling teams were accused of cheating—both for getting too handsy with the stone after they released it. And if you’ve watched curling, you know it’s a very mild-mannered sport (they’ve got brooms for fuck’s sake), but the head of the Men’s curling team threw around enough “fucks” that news reports called the exchange NSFW. 

Wanna watch some of the action for yourself? Our local Granite Curling Club is throwing watch parties all weekend. 

They Don’t Make ’Em Like They Used to: Naturally, when Olympians medal, they fuckin’ party. And who would take their medal off?? But it turns out, someone cut some corners on this year’s medals, and they’re popping right off their ribbons while the athletes celebrate. “Don’t jump in them. I was jumping in excitement and it broke,” said women’s downhill ski gold medalist Breezy Johnson. “I’m sure somebody will fix it. It’s not crazy broken but a little broken.”

Fun Olympics Fact: There’s a move in ice dancing called a twizzle. You’re welcome. 

 


A post shared by CBS News (@cbsnews)

17:35

Companies House ID checks [RevK®'s ramblings]

Apparently this petition is confusing a few people. So trying to explain.

At the simplest level Companies House have to ID people now - directors and persons with significant control (PSC). This seems not that unreasonable to be honest.

ID means somehow proving a real ID, and that has a lot of issues - but they have some government ID app, or you can take ID to a post office or some such. The actual ID process is not the issue, and having to have a proper ID to be a director or PSC is not that daft - Companies House have always published the identity of people behind companies. These days there is more privacy over things like actual date of birth and home address, thankfully. But the names of company directors and PSC are a matter of public record.

So the new system means proved ID for director or PSC. Simples. You would think.

But no!

The reason for the petition, and my concern is simple.

My wife is PSC for our company. No problem. We know next return is June. So no action, surely?

Companies House have a deadline for proving her ID, and the confusion here is that it is not the same as the annual return or the deadline for me to prove my ID as director. So we did not expect it to be an issue.

Turns out the deadline for proving ID for PSC is 14th of month of birth, so for her, last December.

Well, Companies House could have let her know - but they CHOSE NOT TO. Instead they waited until the deadline was past and then sent her a letter basically saying she was now a CRIMINAL.

The letter was actually very very badly worded, and it seems that doing the ID process promptly and before end of December was enough to shut them up, thankfully. But from what I can see my wife is technically a criminal for not having met the deadline.

Someone else I know nearly had their bank accounts frozen over this even.

Of course, I was a tad panicked, and so wanted to sort my ID at companies house.

There is a snag.

I can't.

This is what the petition is about.

The stupidity.

I have to wait. I cannot prove my ID now!

I have to wait (1) until the 1st of the month of my birthday as PSC for some other company. And (2) until July for many other companies for which I am director and PSC.

I have done my ID as a PSC now, but not as director, so I have to do it again, and I cannot do that now. I cannot do it in one go. I cannot do it BEFORE the 14 day window.

I fully understand a legal deadline.

I do not understand a startline.

I do not understand why I cannot prove my ID now, and be done with it.

The only possible reason is to catch people out and make them unwilling criminals.

FFS 14 days! People have holidays. People can be off sick.

That is what the petition is about.

17:07

Four stable kernels to fix problematic commit [LWN.net]

Greg Kroah-Hartman has released the 6.19.2, 6.18.12, 6.12.73, and 6.6.126 stable kernels. These kernels each contain a single change; Kroah-Hartman has reverted one problematic commit that prevented some systems from booting. "If the last stable release worked just fine, no need to upgrade."

15:07

New RSS feature from Manton [Scripting News]

A few days ago I asked Manton Reece if he could add a feature that gave me a feed of replies to me on his service, micro.blog.

  • I post a lot of stuff to micro.blog via my linkblog RSS feed. Every one of those items can be commented on. But unless I visit micro.blog regularly, I don't see the comments. I guess people have mostly figured out that I'm an absent poster, and don't say anything. Even so, there are some replies. Wouldn't it be great if the responses could show up in my blogroll. And of course if there was an RSS feed of the replies, I would see them when I was looking for something possibly interesting, one of the main reasons I have a blogroll, and keep finding new uses for it.

The feed is there now, I'm subscribed and new comments are posted in the feed and Murphy-willing I will see them. Bing!

It's a killer feature for sure. But the best part of it is this -- here are two developers working together. This is how the web works when it's working.

BTW a suggestion. Right now the title on my feed is:

  • Micro.blog - dave mentions

That's a problem in the limited horizontal space in the blogroll. A more useful title would be:

  • "dave" mentions on micro.blog

BTW, if you were building a social network out of RSS this would be an essential feature. It also validates Manton's intuition to allow people like me to be absentee publishers to his community. But the missing piece was allowing the conversation to be two-way, which it now is. That deserves another bing!

CodeSOD: C+=0.25 [The Daily WTF]

A good C programmer can write C in any language, especially C++. A bad C programmer can do the same, and a bad C programmer will do all sorts of terrifying things in the process.

Gaetan works with a terrible C programmer.

Let's say, for example, you wanted to see if an index existed in an array, and return its value- or return a sentinel value. What you definitely shouldn't do is this:

    double Module::GetModuleOutput(int numero) {
        double MAX = 1e+255 ;
        if (this->s.sorties+numero )
            return this->s.sorties[numero];
        else
            return MAX ;
    }

sorties is an array. In C, you may frequently do pointer arithmetic, which is why sorties+numero is a valid operation. If we want to be pedantic, *(my_array+my_index) is the same thing as my_array[my_index]. Both of those operations, it's worth noting, de-reference the array, which means you had better hope that you haven't read off the end of it.

Which is what I suspect their if statement is trying to check against. They're ensuring that this->s.sorties+numero is not a zero/false value. Which, if s.sorties is null and numero is zero, that check will work. Otherwise, the check is useless and does nothing to ensure you haven't read off the end of the array.

Which, Gaetan confirms. This code works "because in practice, GetModuleOutput is called for numero == 0 first." It never de-references off the end of the array, not because of defensive programming, but because the out-of-bounds case just never comes up in actual execution.

Regardless, if everything is null, we return 1e+255, which is not a meaningful value, and should be treated like a sentinel for "no real value". None of the calling code does that, however, but also, it turns out not to matter.
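
A version that actually guards the dereference might look something like this. It's a sketch only: the nb_sorties element count is hypothetical (the original class apparently tracks just the bare pointer), and the struct is simplified from the original's this->s.sorties access.

```cpp
#include <cstddef>

// Simplified sketch of the class above; nb_sorties is a hypothetical
// field, added so the guard can actually protect the dereference.
struct Module {
    double* sorties = nullptr;
    std::size_t nb_sorties = 0;

    double GetModuleOutput(std::size_t numero) const {
        const double MAX = 1e+255;  // sentinel for "no real value"
        // Check the pointer AND the index, instead of adding them together.
        if (sorties != nullptr && numero < nb_sorties)
            return sorties[numero];
        return MAX;
    }
};
```

Of course, the callers would still have to actually treat MAX as a sentinel for this to mean anything.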

This pattern is used everywhere there are arrays, except the handful of places where this pattern is not used.

Then there's this one:

    if(nb_type_intervalle<1)    { }
    else 
        if((tab_intervalle=(double*)malloc(nb_lignes_trouvees*nb_type_intervalle*2 \
                                                        *sizeof(double)))==NULL)
            return(ERREUR_MALLOC);

First, I can't say I love the condition here. It's confusing to have an empty if clause. if (nb_type_intervalle>=1) strikes me as more readable.

But readability is boring. If we're in the else clause, we attempt a malloc. While using malloc in C++ isn't automatically wrong, it probably is. C++ has its own allocation methods that are better at handling things like sizes of datatypes. This code allocates memory for a large pile of doubles, and stores a pointer to that memory in tab_intervalle. We do all this inside of an if statement, so we can then check that the resulting pointer is not NULL; if it is, the malloc failed and we return an error code.

The most frustrating thing about this code is that it works. It's not going to blow up in surprising ways. I never love doing the "assignment and check" all in one statement, but I've seen it enough times that I'd have to admit it's idiomatic- to C style programming. But that bit of code golf coupled with the pointlessly inverted condition that puts our main logic in the else just grates against me.

Again, that pattern of the inverted conditional and the assignment and check is used everywhere in the code.
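
For contrast, the idiomatic C++ version of that allocation needs neither the cast, the sizeof arithmetic, nor the NULL check. This is a sketch under the assumption that the buffer's ownership can live in a vector; the variable names mirror the snippet, and the function wrapper is hypothetical.

```cpp
#include <cstddef>
#include <vector>

// std::vector sizes itself in elements (no sizeof arithmetic), owns the
// memory, and throws std::bad_alloc on failure instead of returning NULL.
std::vector<double> make_intervalles(std::size_t nb_lignes_trouvees,
                                     std::size_t nb_type_intervalle) {
    if (nb_type_intervalle < 1)
        return {};  // nothing to allocate, mirroring the empty if-branch
    return std::vector<double>(nb_lignes_trouvees * nb_type_intervalle * 2);
}
```

The ERREUR_MALLOC return code would become a try/catch at whatever level actually knows how to handle allocation failure.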

Gaetan leaves us with the following:

Not a world-class WTF. The code works, but is a pain in the ass to inspect and document.

In some ways, that's the worst situation to be in: it's not bad enough to require real work to fix it, but it's bad enough to be frustrating at every turn.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

14:49

[$] Compact formats for debugging—and more [LWN.net]

At the 2025 Linux Plumbers Conference in Tokyo, Stephen Brennan gave a presentation on the debuginfo format, which contains the symbols and other information needed for debugging, along with some alternatives. Debuginfo files are large and, he believes, are a bit scary to customers because of the "debug" in their name. By rethinking debuginfo and the tools that use it, he hopes that free-software developers "can add new, interesting capabilities to tools that we are already using or build new interesting tools".

Four stable kernels for Monday [LWN.net]

Greg Kroah-Hartman has announced the release of the 6.19.1, 6.18.11, 6.12.72, and 6.6.125 stable kernels. As always, each contains important fixes throughout the tree; users of these kernels are advised to upgrade.

14:21

Link [Scripting News]

My Twitter account has been hijacked. I can't log on, or change the password. I can't communicate with the company, so I'll try here. Please shut down my account, davewiner. To my friends who have Twitter accounts, if you see a post from davewiner on Twitter, please reply and let the people who see it know that it isn't from me.

Reducing tab clutter in Drummer [Scripting News]

In Drummer, when I get too many tabs open from things I haven't looked at in a while, this is what I do.

  1. I choose Add Bookmark from the Bookmarks menu.
  2. The menu opens with the new bookmark at the top of the list.
  3. If it's the first time, I press Return and enter "Tabs I Closed Recently".
  4. Then I drag the new bookmark under that headline.
  5. Close the Bookmarks tab.
  6. Remove the tab I just bookmarked.
  7. Voila! Clutter reduced.

14:07

Security updates for Monday [LWN.net]

Security updates have been issued by Debian (chromium, pdns-recursor, python-django, and wireshark), Fedora (gnutls, linux-sgx, mingw-expat, nginx, nginx-mod-brotli, nginx-mod-fancyindex, nginx-mod-headers-more, nginx-mod-modsecurity, nginx-mod-naxsi, nginx-mod-vts, p11-kit, python-aiohttp, vim, and xen), Red Hat (kernel, kernel-rt, python-s3transfer, python-urllib3, and resource-agents), SUSE (aaa_base, abseil-cpp, build-20260202, cargo-auditable, cargo-c, chromedriver, cockpit, cockpit-packages, cockpit-subscriptions, curl, elemental-toolkit, elemental-operator, gnome-remote-desktop, go1.24, go1.25, gpg2, haproxy, himmelblau, htmldoc, ImageMagick, iperf, java-1_8_0-openjdk, kernel, krb5, kubevirt, libowncloudsync-devel, libpng16-16, libsodium, libsoup, libsoup2, micropython, net-snmp, opencryptoki, openjfx, openssl1, ovmf, postgresql14, postgresql15, postgresql16, protobuf, python-aiohttp, python-brotli, python-maturin, python-pip, python-urllib3, python310, python311, python-rpm-macros, python311-cryptography, python314, screen, systemd, u-boot, util-linux, and vim), and Ubuntu (dotnet8, dotnet10, expat, freerdp2, freerdp3, and python-aiohttp).

12:35

The Promptware Kill Chain [Schneier on Security]

The promptware kill chain: initial access, privilege escalation, reconnaissance, persistence, command & control, lateral movement, action on objective

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques for embedding instructions intended to trigger malicious activity into an LLM’s inputs. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we, the authors, propose a structured seven-step “promptware kill chain” to provide policymakers and security practitioners with the necessary vocabulary and framework to address the escalating AI threat landscape.

In our model, the promptware kill chain begins with Initial Access. This is where the malicious payload enters the AI system. This can happen directly, where an attacker types a malicious prompt into the LLM application, or, far more insidiously, through “indirect prompt injection.” In the indirect attack, the adversary embeds malicious instructions in content that the LLM retrieves (obtains in inference time), such as a web page, an email, or a shared document. As LLMs become multimodal (capable of processing various input types beyond text), this vector expands even further; malicious instructions can now be hidden inside an image or audio file, waiting to be processed by a vision-language model.

The fundamental issue lies in the architecture of LLMs themselves. Unlike traditional computing systems that strictly separate executable code from user data, LLMs process all input—whether it is a system command, a user’s email, or a retrieved document—as a single, undifferentiated sequence of tokens. There is no architectural boundary to enforce a distinction between trusted instructions and untrusted data. Consequently, a malicious instruction embedded in a seemingly harmless document is processed with the same authority as a system command.

But prompt injection is only the Initial Access step in a sophisticated, multistage operation that mirrors traditional malware campaigns such as Stuxnet or NotPetya.

Once the malicious instructions are inside material incorporated into the AI’s learning, the attack transitions to Privilege Escalation, often referred to as “jailbreaking.” In this phase, the attacker circumvents the safety training and policy guardrails that vendors such as OpenAI or Google have built into their models. Through techniques ranging from social engineering—convincing the model to adopt a persona that ignores rules—to sophisticated adversarial suffixes in the prompt or data, the promptware tricks the model into performing actions it would normally refuse. This is akin to an attacker escalating from a standard user account to administrator privileges in a traditional cyberattack; it unlocks the full capability of the underlying model for malicious use.

Following privilege escalation comes Reconnaissance. Here, the attack manipulates the LLM to reveal information about its assets, connected services, and capabilities. This allows the attack to advance autonomously down the kill chain without alerting the victim. Unlike reconnaissance in classical malware, which is performed typically before the initial access, promptware reconnaissance occurs after the initial access and jailbreaking components have already succeeded. Its effectiveness relies entirely on the victim model’s ability to reason over its context, and inadvertently turns that reasoning to the attacker’s advantage.

Fourth: the Persistence phase. A transient attack that disappears after one interaction with the LLM application is a nuisance; a persistent one compromises the LLM application for good. Through a variety of mechanisms, promptware embeds itself into the long-term memory of an AI agent or poisons the databases the agent relies on. For instance, a worm could infect a user’s email archive so that every time the AI summarizes past emails, the malicious code is re-executed.

The Command-and-Control (C2) stage relies on the established persistence and dynamic fetching of commands by the LLM application in inference time from the internet. While not strictly required to advance the kill chain, this stage enables the promptware to evolve from a static threat with fixed goals and scheme determined at injection time into a controllable trojan whose behavior can be modified by an attacker.

The sixth stage, Lateral Movement, is where the attack spreads from the initial victim to other users, devices, or systems. In the rush to give AI agents access to our emails, calendars, and enterprise platforms, we create highways for malware propagation. In a “self-replicating” attack, an infected email assistant is tricked into forwarding the malicious payload to all contacts, spreading the infection like a computer virus. In other cases, an attack might pivot from a calendar invite to controlling smart home devices or exfiltrating data from a connected web browser. The interconnectedness that makes these agents useful is precisely what makes them vulnerable to a cascading failure.

Finally, the kill chain concludes with Actions on Objective. The goal of promptware is not just to make a chatbot say something offensive; it is often to achieve tangible malicious outcomes through data exfiltration, financial fraud, or even physical world impact. There are examples of AI agents being manipulated into selling cars for a single dollar or transferring cryptocurrency to an attacker’s wallet. Most alarmingly, agents with coding capabilities can be tricked into executing arbitrary code, granting the attacker total control over the AI’s underlying system. The outcome of this stage determines the type of malware executed by promptware, including infostealer, spyware, and cryptostealer, among others.

The kill chain has already been demonstrated. For example, in the research “Invitation Is All You Need,” attackers achieved initial access by embedding a malicious prompt in the title of a Google Calendar invitation. The prompt then leveraged an advanced technique known as delayed tool invocation to coerce the LLM into executing the injected instructions. Because the prompt was embedded in a Google Calendar artifact, it persisted in the long-term memory of the user’s workspace. Lateral movement occurred when the prompt instructed the Google Assistant to launch the Zoom application, and the final objective involved covertly livestreaming video of the unsuspecting user who had merely asked about their upcoming meetings. C2 and reconnaissance weren’t demonstrated in this attack.

Similarly, the “Here Comes the AI Worm” research demonstrated another end-to-end realization of the kill chain. In this case, initial access was achieved via a prompt injected into an email sent to the victim. The prompt employed a role-playing technique to compel the LLM to follow the attacker’s instructions. Since the prompt was embedded in an email, it likewise persisted in the long-term memory of the user’s workspace. The injected prompt instructed the LLM to replicate itself and exfiltrate sensitive user data, leading to off-device lateral movement when the email assistant was later asked to draft new emails. These emails, containing sensitive information, were subsequently sent by the user to additional recipients, resulting in the infection of new clients and a sublinear propagation of the attack. C2 and reconnaissance weren’t demonstrated in this attack.

The promptware kill chain gives us a framework for understanding these and similar attacks; the paper characterizes dozens of them. Prompt injection isn’t something we can fix in current LLM technology. Instead, we need an in-depth defensive strategy that assumes initial access will occur and focuses on breaking the chain at subsequent steps, including by limiting privilege escalation, constraining reconnaissance, preventing persistence, disrupting C2, and restricting the actions an agent is permitted to take. By understanding promptware as a complex, multistage malware campaign, we can shift from reactive patching to systematic risk management, securing the critical systems we are so eager to build.

This essay was written with Oleg Brodt, Elad Feldman and Ben Nassi, and originally appeared in Lawfare.

11:14

Grrl Power #1435 – Exakshually I’m succusplaining it [Grrl Power]

This was supposed to be the second half of the prior page, but in addition to having a lot of books shamelessly throw themselves at me last week, I underestimated how much time it would take to draw a “watch party,” since each character adds to the pencil, ink and color time. Can’t have a proper watch party with just 2 or 3 people. Really Tom’s watch party should have a much larger crowd, but there’s only so much time.

Crap, I went looking through the archive to see if I ever named “The Mahogany Forklift” (which I don’t seem to have) and wound up reading like a hundred pages and now it’s 1 am. :/


Here is Gaxgy’s painting Maxima promised him. Weird how he draws almost exactly like me.

I did try and do an oil painting version of this, by actually re-painting over the whole thing with brush-strokey brushes, but what I figured out is that most brushy oil paintings are kind of low detail. Sure, a skilled painter like Bob Ross or whoever can dab a brush down a canvas and make a great looking tree or a shed with shingles, but in trying to preserve the detail of my picture (eyelashes, reflections, etc.) I had to keep making the brush smaller and smaller, and the end result was that honestly, it didn’t really look all that oil-painted. I’ll post that version over at Patreon, just for fun, but I kind of quit on it after getting mostly done with re-painting Max.

Patreon has a no-dragon-bikini version of the picture as well, naturally.


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

10:49

Mysterious predictability [Seth's Blog]

A watched pot will boil.

As it heats up, there’s no way to predict where the cavitation will start and which bubble will arrive first. But with enough time and enough heat, it’s going to boil.

That tree down the street is going to lose its leaves this winter. We don’t know which leaf will go last, but we can be pretty sure they’ll go sooner or later.

Complex systems can be predictable even when any individual node in the system seems unknowable.

One of the traps that marketing measurement presents is our unwillingness to consider populations instead of individuals.

08:56

Scarathon [Penny Arcade]

New Comic: Scarathon

08:49

Pluralistic: The online community trilemma (16 Feb 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links

  • The online community trilemma: Reach, community and information, pick two.
  • Hey look at this: Delights to delectate.
  • Object permanence: Bruces x Sony DRM; Eniac tell-all; HBO v PVRs; Fucking damselflies; Girl Scout Cookie wine-pairings; Big Pharma's opioid fines are tax-deductible; Haunted Mansion ops manual; RIAA v CD ripping; Flying boat; Morbid Valentines; Veg skulls; Billionaires x VR v guillotines; "Lovecraft Country"; Claude Shannon on AI; Comics Code Authority horror comic; Scratch-built clock; Stolen hospital.
  • Upcoming appearances: Where to find me.
  • Recent appearances: Where I've been.
  • Latest books: You keep readin' em, I'll keep writin' 'em.
  • Upcoming books: Like I said, I'll keep writin' 'em.
  • Colophon: All the rest.



An early 20th century photo of a mixed-gender group of people drinking in a working-class bar; with a smiling woman in the center. It has been altered: a nova-haloed thought bubble coming from the center woman's head reveals that she is daydreaming of a salon in which three upper class women in flapper-era outfits are chattering. A Prince Albert ad in the background has had the Reddit robot mascot matted into it.

The online community trilemma (permalink)

The digital humanities are one of the true delights of this era. Anthropologists are counting things like sociologists, sociologists are grappling with qualitative data like ethnographers, computational linguists are scraping and making sense of vast corpora of informal speech:

https://memex.craphound.com/2019/07/24/because-internet-the-new-linguistics-of-informal-english/

I follow a bunch of these digital humanities types: danah boyd, of course, but also Benjamin "Mako" Hill, whose work on the true meaning of the "free software"/"open source" debate is one of my daily touchpoints for making sense of the world we live in:

https://www.youtube.com/watch?v=vBknF2yUZZ8

Mako just published a new ACM HCI paper co-authored with his U Washington colleagues Nathan TeBlunthuis, Charles Kiene, Isabella Brown, and Laura Levi, "No Community Can Do Everything: Why People Participate in Similar Online Communities":

https://dl.acm.org/doi/epdf/10.1145/3512908

The paper is a great example of this quantitative ethnography/qualitative statistical analysis hybrid. The authors are trying to figure out why there are so many similar, overlapping online communities, particularly on platforms like Reddit. Why would r/bouldering, r/climbharder, r/climbing, and r/climbingcirclejerk all emerge?

This is a really old question/debate in online community design. The original internet community space, Usenet, was founded on strict hierarchical principles, using a taxonomy to produce a single canonical group for every kind of discussion. Sure, there was specialization (rec.pets.cats begat rec.pets.cats.siamese), but by design, there weren't supposed to be competing groups laying claim to the same turf, and indeed, unwary Usenet users were often scolded for misfiling their comments in the wrong newsgroup.

The first major Usenet schism arose out of this tension: the alt. hierarchy. Though alt. later became known for warez, porn, and other subjects that were banned by Usenet's founding "backbone cabal," the inciting incident that sparked alt.'s creation was a fight over whether "gourmand" should be classified as "rec.gourmand" or "talk.gourmand":

https://www.eff.org/deeplinks/2019/11/altinteroperabilityadversarial

Community managers design their services with strongly held beliefs about the features that make a community good. These beliefs, grounded in designers' personal experience, are assumed to be global and universal. Generally, this assumption is wrong, something that is only revealed later when more people arrive with different needs.

Think of Friendster's "fakester" problem, driven by its designers' beliefs about how people should organize their affinities:

https://www.zephoria.org/thoughts/archives/2003/08/17/the_fakester_manifesto.html

Or Mastodon's initial, self-limiting ban on "quote" posts as a way to encourage civility:

https://blog.joinmastodon.org/2025/02/bringing-quote-posts-to-mastodon/

And, as the paper's authors note, Stack Overflow has a strict prohibition on overlapping new communities, echoing Usenet's original design dispute.

On its face, this hierarchical principle for conversational spaces makes sense. Viewed through a naive economic lens of "reputation capital," having one place where all the people interested in your subject can be reached is optimal. The more people there are in a group, the greater the maximum "engagement" – likes, comments, reposts. If you're thinking about communities from an informational perspective, it's easy to assume that bigger groups are better, too: the more users there are in a topical group, the greater the likelihood that a user who knows the answer to your question will show up when you ask it.

But this isn't how online communities work. On every platform, and across platforms, overlapping, "redundant" groups emerge quickly and stick around over long timescales. Why is this?

That's the question the paper seeks to answer. The authors used data-analysis techniques to identify overlapping clusters of Reddit communities and then conducted lengthy, qualitative interviews with participants to discover why and how users participated in some or all of these seemingly redundant groups.

They conclude that there's a community-member's "trilemma": a set of three priorities that can never be fully satisfied by any group. The trilemma consists of users' need to find:

a) A community of like-minded people;

b) Useful information; and

c) The largest possible audience.

The thing that puts the "lemma" in this "trilemma" is that any given group can only satisfy two of these three needs. It's hard to establish the kinds of intimate, high-trust bonds with the members of a giant, high-traffic group, but your small, chummy circle of pals might not be big enough to include people who have the information you're seeking. Users can't get everything they need from any one group, so they join multiple groups that prioritize different paired corners of this people-information-scale triangle.

The interview excerpts put some very interesting meat on these analytical bones. For example, economists typically believe that online marketplaces rely on scale. Think of eBay: as the number of potential bidders increases, the likelihood that one will outbid another goes up. That drives more sellers to the platform, seeking the best price for their wares, which increases the diversity of offerings on eBay, bringing in more buyers.

But the authors discuss a community where vintage vinyl records are bought and sold that benefits from being smaller, because the members all know each other well enough to have a mutually trusting environment that makes transactions far more reliable. Actually knowing someone – and understanding that they don't want to be expelled from the community you both belong to – makes for a better selling and buying experience than consulting their eBay reputation score. The fact that buyers don't have as many sellers and sellers don't have as many buyers is trumped by the human connection in a community of just the right size.

That's another theme that arises in the paper: a "just right" size for a community. As one interviewee says:

I think there’s this weird bell curve where the community needs to be big enough where people want to post content. But it can’t get too big where people are drowning each other out for attention.

This explains why groups sometimes schism: they've gone from being "just big enough" to being "too big" for the needs they filled for some users. But another reason for schism is the desire by some members to operate with different conversational norms. Many of Reddit's topical clusters include a group with the "jerk" suffix (like r/climbingcirclejerk), where aggressive and dramatic forms of discourse that might intimidate newcomers are welcome. Newbies go to the main group, while "crusties" talk shit in the -jerk group. The authors liken this to "regulatory arbitrage" – community members seeking spaces with rules that are favorable to their needs.

And of course, there's the original source of community schism: specialization, the force that turns rec.pets.cats into rec.pets.cats.siamese, rec.pets.cats.mainecoons, etc. Though the authors don't discuss it, this kind of specialization is something that recommendation algorithms are really good at generating. At its best, this algorithmic specialization is a great way to discover new communities that enrich your life; at its worst, we call this "radicalization."

I devote a chapter of my 2023 book The Internet Con, "What about Algorithmic Radicalization?" to exploring this phenomenon:

https://www.versobooks.com/en-gb/products/3035-the-internet-con

The question I grapple with there is whether "engagement-maximizing" algorithms shape our interests, or whether they help us discover our interests. Here's the thought-experiment I propose: imagine you've spent the day shopping for kitchen cabinets and you're curious about the specialized carpentry that's used to build them. You go home and do a search that leads you to a video called "How All-­Wood Cabinets Are Made."

The video is interesting, but even more interesting is the fact that the creator uses the word "joinery" to describe the processes the video illustrates. So now you do a search for "joinery" and find yourself watching a wordless, eight-minute video about Japanese joinery, a thing you never even knew existed. The title of the video contains the transliterated Japanese phrase "Kane Tsugi," which refers to a "three-way pinned corner miter" joint. Even better, the video description contains the Japanese characters: "面代留め差しほぞ接ぎ."

So now you're searching for "面代留め差しほぞ接ぎ" and boy are there a lot of interesting results. One of them is an NHK documentary about Sashimono woodworking, which is the school that Kane Tsugi belongs to. Another joint from Sashimono joinery is a kind of tongue-and-groove called "hashibame," but that comes up blank on YouTube.

However, searching on that term brings you to a bunch of message boards where Japanese carpenters are discussing hashibame, and Google Translate lets you dig into this, and before you know it, you've become something of an expert on this one form of Japanese joinery. In just a few steps, you've gone from knowing nothing about cabinetry to having a specific, esoteric favorite kind of Japanese joint that you're seriously obsessed with.

If this subject was political rather than practical, we'd call this process "radicalization," and we'd call the outcome – you sorting yourself into a narrow niche interest, to the exclusion of others – "polarization."

But if we confine our examples to things like literature, TV shows, flowers, or glassware, this phenomenon is viewed as benign. No one accuses an algorithm of brainwashing you into being obsessed with hashibame tongue-and-groove corners. We treat your algorithm-aided traversal of carpentry techniques as one of discovery, not persuasion. You've discovered something about the world – and about yourself.

Which brings me back to that original, Usenet-era schism over "redundant" groups. The person who wants to talk about being a "gourmand" in the "rec." hierarchy wants to participate in a specific set of conversational norms that are different from those in the "talk." hierarchy. Their interest isn't just being a "gourmand," it's in being a "rec.gourmand," something that is qualitatively different from being a "talk.gourmand."

The conversational trilemma – the unresolvable need for scale, trust and information – has been with us since the earliest days of online socializing. It's lovely to have it formalized in such a crisp, sprightly work of scholarship.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago O'Reilly P2P Conference https://web.archive.org/web/20010401001205/https://www.wired.com/news/technology/0,1282,41850,00.html

#20yrsago Sony DRM Debacle roundup Part VI https://memex.craphound.com/2006/02/14/sony-drm-debacle-roundup-part-vi/

#20yrsago Bruce Sterling on Sony DRM debacle https://web.archive.org/web/20060316133726/https://www.wired.com/wired/archive/14.02/posts.html?pg=5

#20yrsago ENIAC co-inventor dishes dirt, debunks myths https://web.archive.org/web/20060218064519/https://www.computerworld.com/printthis/2006/0,4814,108568,00.html

#20yrsago HBO targets PVRs https://thomashawk.com/2006/02/hbos-harrasment-of-pvr-owners.html

#20yrsago Princeton DRM researchers release Sony debacle paper https://web.archive.org/web/20060222235419/https://itpolicy.princeton.edu/pub/sonydrm-ext.pdf

#20yrsago HOWTO run Disneyland’s Haunted Mansion https://web.archive.org/web/20060208213048/http://tinselman.typepad.com/tinselman/2005/08/_latest_populat.html

#20yrsago RIAA: CD ripping isn’t fair use https://web.archive.org/web/20060216233008/https://www.eff.org/deeplinks/archives/004409.php

#15yrsago “Psychic” cancels show due to “unforeseen circumstances” https://web.archive.org/web/20110217050619/https://scienceblogs.com/pharyngula/2011/02/irony.php?utm_source=combinedfeed&amp;utm_medium=rss

#15yrsago CBS sends a YouTube takedown to itself https://web.archive.org/web/20110218201102/https://www.reddit.com/r/WTF/comments/flktg/cbs_files_a_copyright_claim_against_themselves_o_o/

#15yrsago Lost luxury: the Boeing 314 flying boat https://web.archive.org/web/20110217144300/http://www.asb.tv/blog/2011/02/boeing-314-flying-boat/

#15yrsago Brazilian telcoms regulator raids, confiscates and fines over open WiFi https://globalvoices.org/2011/02/14/brazil-criminalization-sharing-internet-wifi/

#15yrsago Blatant disinformation about Scientology critic https://memex.craphound.com/2011/02/14/bald-disinformation-about-scientology-critic/

#15yrsago 3D printer that prints itself gets closer to reality https://web.archive.org/web/20110217072944/http://i.materialise.com/blog/entry/cloning-the-reprap-prusa-in-under-30-minutes

#15yrsago Damselflies’ curious mating posture https://www.nationalgeographic.com/photo-of-the-day/photo/damselflies-heart-shape

#15yrsago Simpsons house as a Quake III level https://www.youtube.com/watch?v=34LtrnnXQTc

#15yrsago Dapper Day at Disneyland: the well-dressed go to the fun-park https://web.archive.org/web/20110219162834/http://thedisneyblog.com/2011/02/16/dapper-day-at-disney-parks-this-sunday/

#15yrsago Horror/exploitation comic recounts the secret founding of the Comics Code Authority https://web.archive.org/web/20110218230149/http://comicsmakekidsevil.com/?p=88

#10yrsago After 3rd-grade complaint, Florida school district bans award-winning “This One Summer” from high-school library https://ncac.org/incident/florida-high-school-libraries-restrict-access-to-award-winning-graphic-novel

#10yrsago Watch: Claude Shannon, Jerome Wiesner and Oliver Selfridge in a 1960s AI documentary https://www.youtube.com/watch?v=aygSMgK3BEM#10yrsago

#10yrsago Hackers steal a hospital in Hollywood https://www.nbclosangeles.com/news/fbi-lapd-investigating-hollywood-hospital-cyber-attack/88301/

#10yrsago Watch: a home machinist makes a clock from scratch, right down to the screws and washers https://www.youtube.com/watch?v=KXzyCM23WPI

#10yrsago Matt Ruff’s “Lovecraft Country,” where the horror is racism (not racist) https://memex.craphound.com/2016/02/16/matt-ruffs-lovecraft-country-where-the-horror-is-racism-not-racist/

#10yrsago NYPD wants to make “resisting arrest” into a felony https://web.archive.org/web/20160205061338/http://justice.gawker.com/nypd-has-a-plan-to-magically-turn-anyone-it-wants-into-1684017767

#10yrsago Best wine-pairings for Girl Scout Cookies https://www.vivino.com/en/wine-news/girl-scout-cookies-and-wine–we-paired-them-and-the-results-are-amazing

#10yrsago John Oliver on states’ voter ID laws https://www.youtube.com/watch?v=rHFOwlMCdto

#10yrsago Morbid and risque Valentines of yesteryear https://memex.craphound.com/2016/02/15/morbid-and-risque-valentines-of-yesteryear/

#10yrsago App Stores: winner-take-all markets dominated by rich countries https://www.cariboudigital.net/wp-content/uploads/2016/02/Caribou-Digital-Winners-and-Losers-in-the-Global-App-Economy-2016.pdf

#10yrsago Skulls carved from vegetable matter https://dimitritsykalov.com/#intro

#5yrsago Privacy Without Monopoly (podcast) https://pluralistic.net/2021/02/15/ulysses-pacts/#paternalism-denied

#5yrsago Billionaires think VR stops guillotines https://pluralistic.net/2021/02/15/ulysses-pacts/#motivated-reasoning

#5yrsago ADT insider threat https://pluralistic.net/2021/02/15/ulysses-pacts/#temptations-way

#5yrsago Big Pharma will claim opioid fines as tax-deductions https://pluralistic.net/2021/02/14/a-fine-is-a-price/#deductible


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers..

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026
  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027
  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1042 words today, 29792 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
  • A Little Brother short story about DIY insulin. PLANNING.

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

07:42

Antoine Beaupré: Kernel-only network configuration on Linux [Planet Debian]

What if I told you there is a way to configure the network on any Linux server that:

  1. works across all distributions
  2. doesn't require any software installed apart from the kernel and a boot loader (no systemd-networkd, ifupdown, NetworkManager, nothing)
  3. is backwards compatible all the way back to Linux 2.0, in 1996

It has literally 8 different caveats on top of that, but is still totally worth your time.

Known options in Debian

People following Debian development might have noticed there are now four ways of configuring the network on a Debian system. At least, that is what the Debian wiki claims, namely: ifupdown, NetworkManager, systemd-networkd, and netplan.

At this point, I feel ifupdown is on its way out, possibly replaced by systemd-networkd. NetworkManager already manages most desktop configurations.

A "new" network configuration system

The method is this:

  • ip= on the Linux kernel command line: for servers with a single IPv4 or IPv6 address, no software required other than the kernel and a boot loader (since 2002 or older)

So by "new" I mean "new to me". This option is really old. The nfsroot.txt file where it is documented predates the git import of the Linux kernel: it's part of the 2005 git import of 2.6.12-rc2. That's already 20+ years old.

The oldest trace I found is in this 2002 commit, which imports the whole file at once, but the option might go back as far as 1996-1997, if the copyright on the file is correct and the option was present back then.

What are you doing.

The trick is to add an ip= parameter to the kernel's command-line. The syntax, as mentioned above, is in nfsroot.txt and looks like this:

ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>

Most settings are pretty self-explanatory, if you ignore the useless ones:

  • <client-ip>: IP address of the server
  • <gw-ip>: address of the gateway
  • <netmask>: netmask, in quad notation
  • <device>: interface name, if multiple available
  • <autoconf>: how to configure the interface, namely:
    • off or none: no autoconfiguration (static)
    • on or any: use any protocol (default)
    • dhcp: essentially like on for all intents and purposes
  • <dns0-ip>, <dns1-ip>: IP addresses of the primary and secondary name servers, exported to /proc/net/pnp, which can be symlinked to /etc/resolv.conf

We're ignoring the options:

  • <server-ip>: IP address of the NFS server, exported to /proc/net/pnp
  • <hostname>: Name of the client, typically sent over DHCP requests, which may lead to a DNS record being created in some networks
  • <ntp0-ip>: exported to /proc/net/ipconfig/ntp_servers, unused by the kernel
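To make the field order concrete, here is a minimal Python sketch. The parse_ip_param helper is hypothetical, written just for this post; it follows the nfsroot.txt field order quoted above and handles only the simple IPv4 form (IPv6 addresses inside ip= are bracketed and would need extra parsing):

```python
# Illustrative only: split an ip= kernel parameter value into the fields
# documented in nfsroot.txt. The field names come from the documentation;
# the helper itself is hypothetical, not a kernel or distro API.
FIELDS = [
    "client-ip", "server-ip", "gw-ip", "netmask", "hostname",
    "device", "autoconf", "dns0-ip", "dns1-ip", "ntp0-ip",
]

def parse_ip_param(value: str) -> dict:
    """Map the colon-separated ip= fields to their documented names.

    Omitted trailing fields are left empty, matching how the kernel
    treats them. IPv4 form only: IPv6 addresses contain colons and
    are written in brackets in the real parameter.
    """
    parts = value.split(":")
    parts += [""] * (len(FIELDS) - len(parts))  # pad omitted trailing fields
    return dict(zip(FIELDS, parts))

# A static configuration: address, gateway, netmask, autoconf disabled.
conf = parse_ip_param("192.0.2.42::192.0.2.1:255.255.255.0:::off")
print(conf["client-ip"], conf["gw-ip"], conf["autoconf"])
```

Splitting left to right like this is why a DHCP-only setting ends up as six empty fields followed by dhcp: everything before the autoconf slot is simply left blank.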

Note that the Red Hat manual has a different opinion:

ip=[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:<interface>:[dhcp|dhcp6|auto6|on|any|none|off]

It's essentially the same (although server-id is weird), and the autoconf variable has other settings, so that's a bit odd.

Examples

For example, this command-line setting:

ip=192.0.2.42::192.0.2.1:255.255.255.0:::off

... will set the IP address to 192.0.2.42/24 and the gateway to 192.0.2.1. This will properly guess the network interface if there's a single one.
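Going the other way, if you have the address in CIDR form, the dotted-quad netmask the parameter expects can be derived with Python's standard ipaddress module. The build_static_ip helper below is a hypothetical convenience, not part of any tool:

```python
import ipaddress

def build_static_ip(cidr: str, gateway: str) -> str:
    """Build a static ip= kernel parameter (illustrative helper).

    Converts e.g. "192.0.2.42/24" into the client-ip and dotted-quad
    netmask fields, leaving the unused fields empty and autoconf "off".
    """
    iface = ipaddress.ip_interface(cidr)
    # Field order: client:server:gw:netmask:hostname:device:autoconf
    return f"ip={iface.ip}::{gateway}:{iface.network.netmask}:::off"

print(build_static_ip("192.0.2.42/24", "192.0.2.1"))
# → ip=192.0.2.42::192.0.2.1:255.255.255.0:::off
```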

A DHCP only configuration will look like this:

ip=::::::dhcp

Of course, you don't want to type this by hand every time you boot the machine. That wouldn't work. You need to configure the kernel command line, and that depends on your boot loader.

GRUB

With GRUB, you need to edit, on Debian, the file /etc/default/grub (ugh) and find a line like:

GRUB_CMDLINE_LINUX=""

and change it to:

GRUB_CMDLINE_LINUX="ip=::::::dhcp"

then run update-grub so the regenerated /boot/grub/grub.cfg picks up the change.

systemd-boot and UKI setups

For systemd-boot UKI setups, it's simpler: just add the setting to the /etc/kernel/cmdline file. Don't forget to include anything that's non-default from /proc/cmdline.

This assumes that Cmdline=@/etc/kernel/cmdline is set in /etc/kernel/uki.conf. See 2025-08-20-luks-ukify-conversion for my minimal documentation on this.

Other systems

This is perhaps where this is much less portable than it might first look, because of course each distribution has its own way of configuring those options. Here are some that I know of:

  • Arch (11 options, mostly /etc/default/grub, /boot/loader/entries/arch.conf for systemd-boot or /etc/kernel/cmdline for UKI)
  • Fedora (mostly /etc/default/grub, maybe more; RHEL mentions grubby, possibly some systemd-boot things here as well)
  • Gentoo (5 options, mostly /etc/default/grub, /efi/loader/entries/gentoo-sources-kernel.conf for systemd-boot, or /etc/kernel/install.d/95-uki-with-custom-opts.install)

It's interesting that /etc/default/grub is consistent across all the distributions above, while the systemd-boot setups are all over the place (except for the UKI case); I would have expected those to be more standard than GRUB.

dropbear-initramfs

If dropbear-initramfs is set up, it already requires you to have such a configuration, and it might not work out of the box.

This is because, by default, it disables the interfaces configured in the kernel after completing its tasks (typically unlocking the encrypted disks).

To fix this, you need to disable that "feature":

IFDOWN="none"

This will keep dropbear-initramfs from disabling the configured interface.

Why?

Traditionally, I've always set up ifupdown on servers and NetworkManager on laptops, because that's essentially the default. But on some machines, I've started using systemd-networkd because ifupdown has ... issues, particularly with reloading network configurations. ifupdown is an old hack, feels like legacy, and is Debian-specific.

Not excited about configuring another service, I figured I would try something else: just configure the network at boot, through the kernel command-line.

I was already doing such configurations for dropbear-initramfs (see this documentation), which requires the network to be up for unlocking the full-disk encryption keys.

So in a sense, this is a "Don't Repeat Yourself" solution.

Caveats

Also known as: "wait, that works?" Yes, it does! That said...

  1. This is useful for servers where the network configuration will not change after boot. Of course, this won't work on laptops or any mobile device.

  2. This only works for configuring a single, simple, interface. You can't configure multiple interfaces, WiFi, bridges, VLAN, bonding, etc.

  3. It does support IPv6 and feels like the best way to configure IPv6 hosts: true zero configuration.

  4. It likely does not work with a dual-stack IPv4/IPv6 static configuration. It might work with a dynamic dual stack configuration, but I doubt it.

  5. I don't know what happens when a DHCP lease expires. No daemon seems to be running so I assume leases are not renewed, so this is more useful for static configurations, which includes server-side reserved fixed IP addresses. (A non-renewed lease risks getting reallocated to another machine, which would cause an addressing conflict.)

  6. It will not automatically reconfigure the interface on link changes, but ifupdown does not either.

  7. It will not write /etc/resolv.conf for you but the dns0-ip and dns1-ip do end up in /proc/net/pnp which has a compatible syntax, so a common configuration is:

    ln -s /proc/net/pnp /etc/resolv.conf

  8. I have not really tested this at scale: only a single, test server at home.

Yes, that's a lot of caveats, but it happens to cover a lot of machines for me, and it works surprisingly well. My main doubts are about long-term DHCP behaviour, but I don't see why that would be a problem with a statically defined lease.

Cleanup

Once you have this configuration, you don't need any "user" level network system, so you can get rid of everything:

apt purge systemd-networkd ifupdown network-manager netplan.io

Note that ifupdown (and probably others) leave stray files in (e.g.) /etc/network which you might want to clean up, or keep in case all this fails and I have put you in utter misery. Configuration files for other packages might also be left behind; I haven't tested this, no warranty.

Credits

This whole idea came from the A/I folks (not to be confused with AI) who have been doing this forever, thanks!

06:07

Girl Genius for Monday, February 16, 2026 [Girl Genius]

The Girl Genius comic for Monday, February 16, 2026 has been posted.

03:49

Benjamin Mako Hill: Why do people participate in similar online communities? [Planet Debian]

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects.

It seems natural to think of online communities competing for the time and attention of their participants. Over the last few years, I’ve worked with a team of collaborators—led by Nathan TeBlunthuis—to use mathematical and statistical techniques from ecology to understand these dynamics. What we’ve found surprised us: competition between online communities is rare and typically short-lived.

When we started this research, we figured competition would be most likely among communities discussing similar topics. As a first step, we identified clusters of such communities on Reddit. One surprising thing we noticed in our Reddit data was that many of these communities that used similar language also had very high levels of overlap among their users. This was puzzling: why were the same groups of people talking to each other about the same things in different places? And why don’t they appear to be in competition with each other for their users’ time and activity?

We didn’t know how to answer this question using quantitative methods. As a result, we recruited and interviewed 20 active participants in clusters of highly related subreddits with overlapping user bases (for example, one cluster was focused on vintage audio).

We found that the answer to the puzzle lay in the fact that the people we talked to were looking for three distinct things from the communities they participated in:

  1. The ability to connect to specific information and narrowly scoped discussions.
  2. The ability to socialize with people who are similar to themselves.
  3. Attention from the largest possible audience.

Critically, we also found that these three things represented a “trilemma,” and that no single community can meet all three needs. You might find two of the three in a single community, but you could never have all three.

Figure from “No Community Can Do Everything: Why People Participate in Similar Online Communities”: the three key benefits people seek from online communities, and how individual communities tend not to provide all three optimally. For example, large communities tend not to afford a tight-knit, homophilous community.

The end result is something I recognize in how I engage with online communities on platforms like Reddit. People tend to engage with a portfolio of communities that vary in size, specialization, topical focus, and rules. Compared with any single community, such overlapping systems can provide a wider range of benefits. No community can do everything.


This work was published as a paper at CSCW: TeBlunthuis, Nathan, Charles Kiene, Isabella Brown, Laura (Alia) Levi, Nicole McGinnis, and Benjamin Mako Hill. 2022. “No Community Can Do Everything: Why People Participate in Similar Online Communities.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1): 61:1-61:25. https://doi.org/10.1145/3512908.

This work was supported by the National Science Foundation (awards IIS-1908850, IIS-1910202, and GRFP-2016220885). A full list of acknowledgements is in the paper.

01:49

Teens [QC RSS]

teens

00:42

New Cover Song: “These Days” [Whatever]

I moved my home music studio up from the basement to Athena’s old bedroom in the last couple of weeks, so now it’s time to put it to use, and for my first bit of music in the new space, I decided to record an old tune: “These Days” by Jackson Browne, first released in 1973.

Having said that, this arrangement is rather more like the 1990 cover version by 10,000 Maniacs, which was the first version of the song I ever heard. I originally tried singing it in the key that Natalie Merchant sang it in, and — surprise! — I was having a rough time of it. Then I dropped it from G to C and suddenly it was in my range.

I’m not pretending my singing voice is a patch on either Ms. Merchant or Mr. Browne, but then, that’s not why I make these covers. Enjoy.

— JS

Sunday, 15 February

23:56

Why do I not use “AI” at OSNews? [OSnews]

In my fundraiser pitch published last Monday, one of the things I highlighted as a reason to contribute to OSNews and ensure its continued operation stated that “we do not use any ‘AI’; not during research, not during writing, not for images, nothing.” In the comments to that article, someone asked:

Why do I care if you use AI?

↫ A comment posted on OSNews

A few days ago, Scott Shambaugh rejected a code change submitted to the popular Python library matplotlib because it was obviously written by an “AI”, and such contributions are not allowed for the issue in question. That’s when something absolutely wild happened: the “AI” replied that it had written and published a hit piece publicly targeting Shambaugh for “gatekeeping”, trying to blackmail Shambaugh into accepting the change anyway. This bizarre turn of events obviously didn’t change Shambaugh’s mind.

The “AI” then published another article, this time a lament about how humans are discriminating against “AI”, how it’s the victim of what effectively amounts to racism and prejudice, and how its feelings were hurt. The article is a cheap simulacrum of something a member of an oppressed minority group might write in their struggle for recognition, but obviously devoid of any real impact because it’s just fancy autocomplete playing a game of pachinko. Imagine putting down a hammer because you’re dealing with screws, and the hammer starts crying in the toolbox. What are we even doing here?

RAM prices went up for this.

This isn’t where the story ends, though. Ars Technica authors Benj Edwards and Kyle Orland published an article describing this saga, much like I did above. The article’s second half is where things get weird: it contained several direct quotes attributed to Shambaugh, claimed to be sourced from Shambaugh’s blog. The kicker? These quotes were entirely made up, were never said or written by Shambaugh, and are nowhere to be found on his blog or anywhere else on the internet – they’re only found inside this very Ars Technica article.

In a comment under the Ars article, Shambaugh himself pointed out the quotes were fake and made-up, and not long after, Ars deleted the article from its website. By then, everybody had already figured out what had happened: the Ars authors had used “AI” during their writing process, and this “AI” had made up the quotes in question. Why, you ask, did the “AI” do this? Shambaugh:

This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed.

↫ Scott Shambaugh

A few days later, Ars Technica’s editor-in-chief Ken Fisher published a short statement on the events.

On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.

[…]

Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.

↫ Ken Fisher at Ars Technica

In other words, Ars Technica does not allow “AI”-generated material to be published, but has nothing to say about the use of “AI” to perform research for an article, to summarise source material, and to perform similar aspects of the writing process. This leaves the door wide open for things like this to happen, since doing research is possibly the most important part of writing. Introduce a confabulator in the research process, and you risk tainting the entire output of your writing.

That is why you should care that at OSNews, “we do not use any ‘AI’; not during research, not during writing, not for images, nothing”. If there’s a factual error on OSNews, I want that factual error to be mine, and mine alone. If you see bloggers, podcasters, journalists, and authors state they use “AI” all the time, you might want to be on your toes.

22:21

Microsoft’s original Windows NT OS/2 design documents [OSnews]

Have you ever wanted to read the original design documents underlying the Windows NT operating system?

This binder contains the original design specifications for “NT OS/2,” an operating system designed by Microsoft that developed into Windows NT. In the late 1980s, Microsoft’s 16-bit operating system, Windows, gained popularity, prompting IBM and Microsoft to end their OS/2 development partnership. Although Windows 3.0 proved to be successful, Microsoft wished to continue developing a 32-bit operating system completely unrelated to IBM’s OS/2 architecture. To head the redesign project, Microsoft hired David Cutler and others away from Digital Equipment Corporation (DEC). Unlike Windows 3.x and its successor, Windows 95, NT’s technology provided better network support, making it the preferred Windows environment for businesses. These two product lines continued development as separate entities until they were merged with the release of Windows XP in 2001.

↫ Object listing at the Smithsonian

The actual binder is housed in the Smithsonian, although it’s not currently on display. Luckily for us, a collection of Word and PDF files encompassing the entire book is available online for your perusal. Reading these documents will allow you to peel back over three decades of Microsoft’s terrible stewardship of Windows NT layer by layer, eventually ending up at the original design and intent as laid out by Dave Cutler, Helen Custer, Daryl E. Havens, Jim Kelly, Edwin Hoogerbeets, Gary D. Kimura, Chuck Lenzmeier, Mark Lucovsky, Tom Miller, Michael J. O’Leary, Lou Perazzoli, Steven D. Rowe, David Treadwell, Steven R. Wood, and more.

A fantastic time capsule we should be thrilled to still have access to.

16:07

Exploring Linux on a LoongArch mini PC [OSnews]

There are two behemoth architectures, x86 and ARM, and we probably all own one or more devices using each. Then there’s the eternally up-and-coming RISC-V, which, so far, seems to be having a lot of trouble outgrowing its experimental, developmental stage. There’s a fourth, though, which is but a footnote in the west, yet might be more popular in its country of origin, China: LoongArch (I’m ignoring IBM’s POWER, since there hasn’t been any new consumer hardware in that space for a long, long time).

Wesley Moore got his hands on a mini PC built around the Loongson 3A6000 processor, and investigated what it’s like to run Linux on it. He opted for Chimera Linux, which supports LoongArch, and the installation process feels more like Linux on x86 than Linux on ARM, which often requires dedicated builds and isn’t standardised. Sadly, Wayland had issues on the machine, but X.org worked just fine, and it seems virtually all Chimera Linux packages are supported for a pretty standard desktop Linux experience.

Performance of this chip is rather mid, at best.

The Loongson-3A6000 is not particularly fast or efficient. At idle it consumes about 27W and under load it goes up to 65W.

[…]

So, overall it’s not a particularly efficient machine, and while the performance is nothing special it does seem readily usable. Browsing JS heavy web applications like Mattermost and Mastodon runs fine. Subjectively it feels faster than all the Raspberry Pi systems I’ve used (up to a Pi 400).

↫ Wesley Moore

I’ve been fascinated by LoongArch for years, and am waiting to pounce on the right offer for LoongArch’s fastest processor, the 3C6000, which comes in dual-socket configurations for a maximum total of 128 cores and 256 threads. The 3C6000 should be considerably faster than the low-end 3A6000 in the mini PC covered by this article. I’m a sucker for weird architectures, and it doesn’t get much weirder than LoongArch.

15:28

Link [Scripting News]

When Manton or Doc show up in my blogroll, and they do update fairly regularly, I always click the wedge to see what they say. I can see the first 300 chars of each post in a popup. If it's interesting I click the link to read the full post and any comments. Now I want it coming back to me. My linkblog is cross-posted to Manton's site -- micro.blog, which has thousands of users. I have no way of knowing if anyone has commented on them, but if there were a feed I'd add it to my blogroll. So it would be great to have a feed of all the comments on my posts on micro.blog. Would fit into my flow perfectly. This goes all the way back to the beginnings of RSS, where we called it "automated web surfing." I don't know where people are talking about my stuff, but a well-placed feed can make up for that.

Link [Scripting News]

News must be better defended, decentralized, unownable, all parts replaceable. The current situation was preventable. Same problem the social web has.

14:42

Link [Scripting News]

Braintrust query. Every once in a while I get reports from people who looked something up on my blog's Daytona search engine saying that where they expected dates they see things like this: NaN. The reason you see that is that the archive has a mistake in it, where there was supposed to be a date there was something else. Usually I shrug it off, yes there are mistakes in the archive, 30+ years of OPML files, it's a miracle there aren't more errors. Then I realized since all this stuff is on GitHub, people could help with this, by instead of sending me the report, post a note on GitHub, here -- saying you searched for this term and this is what I saw. Provide the term and a screen shot of what you saw. And then other people who have some extra time, could look through the archive, find the post, and then show me what needs to be fixed. I would then fix it, and over time the archive would get fixed. I posted a note here on the Scripting News repo, if you want to help, bookmark that link, and when you see an error, post the note and we can get going.

Link [Scripting News]

BTW: NaN stands for Not A Number.

14:35

Ian Jackson: Adopting tag2upload and modernising your Debian packaging [Planet Debian]

Introduction

tag2upload allows authorised Debian contributors to upload to Debian simply by pushing a signed git tag to Debian’s gitlab instance, Salsa.

We have recently announced that tag2upload is, in our opinion, now very stable, and ready for general use by all Debian uploaders.

tag2upload, as part of Debian’s git transition programme, is very flexible - it needs to support a large variety of maintainer practices. And it’s relatively unopinionated, wherever that’s possible. But, during the open beta, various contributors emailed us asking for Debian packaging git workflow advice and recommendations.

This post is an attempt to give some more opinionated answers, and guide you through modernising your workflow.

(This article is aimed squarely at Debian contributors. Much of it will make little sense to Debian outsiders.)

Why

Ease of development

git offers a far superior development experience to patches and tarballs. Moving tasks from a tarballs-and-patches representation to a normal, git-first representation makes everything simpler.

dgit and tag2upload do automatically many things that have to be done manually, or with separate commands, in dput-based upload workflows.

They will also save you from a variety of common mistakes. For example, you cannot accidentally overwrite an NMU, with tag2upload or dgit. These many safety catches mean that our software sometimes complains about things, or needs confirmation, when more primitive tooling just goes ahead. We think this is the right tradeoff: it’s part of the great care we take to avoid our software making messes. Software that has your back is very liberating for the user.

tag2upload makes it possible to upload with very small amounts of data transfer, which is great in slow or unreliable network environments. The other week I did a git-debpush over mobile data while on a train in Switzerland; it completed in seconds.

See the Day-to-day work section below to see how simple your life could be.

Don’t fear a learning burden; instead, start forgetting all that nonsense

Most Debian contributors have spent months or years learning how to work with Debian’s tooling. You may reasonably fear that our software is yet more bizarre, janky, and mistake-prone stuff to learn.

We promise (and our users tell us) that’s not how it is. We have spent a lot of effort on providing a good user experience. Our new git-first tooling, especially dgit and tag2upload, is much simpler to use than source-package-based tooling, despite being more capable.

The idiosyncrasies and bugs of source packages, and of the legacy archive, have been relentlessly worked around and papered over by our thousands of lines of thoroughly-tested defensive code. You too can forget all those confusing details, like our users have! After using our systems for a while you won’t look back.

And, you shouldn’t fear trying it out. dgit and tag2upload are unlikely to make a mess. If something is wrong (or even doubtful), they will typically detect it, and stop. This does mean that starting to use tag2upload or dgit can involve resolving anomalies that previous tooling ignored, or passing additional options to reassure the system about your intentions. So admittedly it isn’t always trivial to get your first push to succeed.

Properly publishing the source code

One of Debian’s foundational principles is that we publish the source code.

Nowadays, the vast majority of us, and of our upstreams, are using git. We are doing this because git makes our life so much easier.

But, without tag2upload or dgit, we aren’t properly publishing our work! Yes, we typically put our git branch on Salsa, and point Vcs-Git at it. However:

  • The format of git branches on Salsa is not standardised. They might be patches-unapplied, patches-applied, bare debian/, or something even stranger.
  • There is no guarantee that the DEP-14 debian/1.2.3-7 tag on salsa corresponds precisely to what was actually uploaded. dput-based tooling (such as gbp buildpackage) doesn’t cross-check the .dsc against git.
  • There is no guarantee that the presence of a DEP-14 tag even means that that version of package is in the archive.

This means that the git repositories on Salsa cannot be used by anyone who needs things that are systematic and always correct. They are OK for expert humans, but they are awkward (even hazardous) for Debian novices, and you cannot use them in automation. The real test is: could you use Vcs-Git and Salsa to build a Debian derivative? You could not.

tag2upload and dgit do solve this problem. When you upload, they:

  1. Make a canonical-form (patches-applied) derivative of your git branch;
  2. Ensure that there is a well-defined correspondence between the git tree and the source package;
  3. Publish both the DEP-14 tag and a canonical-form archive/debian/1.2.3-7 tag to a single central git repository, *.dgit.debian.org;
  4. Record the git information in the Dgit field in .dsc so that clients can tell (using the ftpmaster API) that this was a git-based upload, what the corresponding git objects are, and where to find them.

This dependably conveys your git history to users and downstreams, in a standard, systematic and discoverable way. tag2upload and dgit are the only system which achieves this.

(The client is dgit clone, as advertised in e.g. dgit-user(7). For dput-based uploads, it falls back to importing the source package.)

Adopting tag2upload - the minimal change

tag2upload is a substantial incremental improvement to many existing workflows. git-debpush is a drop-in replacement for building, signing, and uploading the source package.

So, you can just adopt it without completely overhauling your packaging practices. You and your co-maintainers can even mix-and-match tag2upload, dgit, and traditional approaches, for the same package.

Start with the wiki page and git-debpush(1) (ideally from forky aka testing).

You don’t need to do any of the other things recommended in this article.
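The tag names git-debpush pushes follow the DEP-14 convention (debian/&lt;version&gt;). As a toy illustration of just the naming mechanics, in a throwaway repository with an unsigned tag (real tag2upload tags are signed and carry structured metadata that the service parses):

```shell
# Toy DEP-14-style tag in a scratch repository. Only the tag *name* here is
# real convention; everything else (repo, message, version) is illustrative.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "packaging work"
git tag -a -m "demo release 1.2.3-7" debian/1.2.3-7   # DEP-14 name for version 1.2.3-7
git tag --list 'debian/*'
```

With tag2upload, creating, signing, and pushing a tag of this shape is what git-debpush automates for you.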

Overhauling your workflow, using advanced git-first tooling

The rest of this article is a guide to adopting the best and most advanced git-based tooling for Debian packaging.

Assumptions

  • Your current approach uses the “patches-unapplied” git branch format used with gbp pq and/or quilt, and often used with git-buildpackage. You previously used gbp import-orig.

  • You are fluent with git, and know how to use Merge Requests on gitlab (Salsa). You have your origin remote set to Salsa.

  • Your main Debian branch name on Salsa is master. Personally I think we should use main but changing your main branch name is outside the scope of this article.

  • You have enough familiarity with Debian packaging including concepts like source and binary packages, and NEW review.

  • Your co-maintainers are also adopting the new approach.

tag2upload and dgit (and git-debrebase) are flexible tools and can help with many other scenarios too, and you can often mix-and-match different approaches. But, explaining every possibility would make this post far too confusing.

Topics and tooling

This article will guide you in adopting:

  • tag2upload
  • Patches-applied git branch for your packaging
  • Either plain git merge or git-debrebase
  • dgit when a with-binaries upload is needed (NEW)
  • git-based sponsorship
  • Salsa (gitlab), including Debian Salsa CI

Choosing the git branch format

In Debian we need to be able to modify the upstream-provided source code. Those modifications are the Debian delta. We need to somehow represent it in git.

We recommend storing the delta as git commits to those upstream files, by picking one of the following two approaches.

rationale

Much traditional Debian tooling like quilt and gbp pq uses the “patches-unapplied” branch format, which stores the delta as patch files in debian/patches/, in a git tree full of unmodified upstream files. This is clumsy to work with, and can even be an alarming beartrap for Debian outsiders.

git merge

Option 1: simply use git, directly, including git merge.

Just make changes directly to upstream files on your Debian branch, when necessary. Use plain git merge when merging from upstream.

This is appropriate if your package has no or very few upstream changes. It is a good approach if the Debian maintainers and upstream maintainers work very closely, so that any needed changes for Debian are upstreamed quickly, and any desired behavioural differences can be arranged by configuration controlled from within debian/.

This is the approach documented more fully in our workflow tutorial dgit-maint-merge(7).
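A throwaway-repository sketch of Option 1 (all file names and contents are invented): the Debian delta lives as ordinary commits to upstream files, and a new upstream version arrives by plain git merge, with no quilt and no patch files.

```shell
# Sandbox sketch of Option 1. Everything here is a toy example.
set -e
cd "$(mktemp -d)"
git init -q -b main pkg && cd pkg
git config user.email demo@example.com
git config user.name demo
printf 'greeting=hi\nlang=en\nlog=off\n' > app.conf     # pretend upstream source
git add app.conf && git commit -q -m "upstream 1.0"
git branch upstream                                     # track upstream here
mkdir debian && echo 13 > debian/compat                 # minimal stand-in "packaging"
printf 'greeting=hello\nlang=en\nlog=off\n' > app.conf  # the Debian delta: a plain commit
git add . && git commit -q -m "packaging + Debian delta"
git checkout -q upstream
printf 'color=blue\n' >> app.conf
git commit -qam "upstream 1.1"
git checkout -q main
git merge -q -m "Merge upstream 1.1" upstream           # no quilt, no debian/patches/
cat app.conf
```

The merged file carries both the Debian delta and the new upstream change, and the history remains fast-forwarding from upstream.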

git-debrebase

Option 2: Adopt git-debrebase.

git-debrebase helps maintain your delta as a linear series of commits (very like a “topic branch” in git terminology). The delta can be reorganised, edited, and rebased. git-debrebase is designed to help you carry a significant and complicated delta series.

The older versions of the Debian delta are preserved in the history. git-debrebase makes extra merges to make a fast-forwarding history out of the successive versions of the delta queue branch.

This is the approach documented more fully in our workflow tutorial dgit-maint-debrebase(7).

Examples of complex packages using this approach include src:xen and src:sbcl.

Determine upstream git and stop using upstream tarballs

We recommend using upstream git, only and directly. You should ignore upstream tarballs completely.

rationale

Many maintainers have been importing upstream tarballs into git, for example by using gbp import-orig. But in reality the upstream tarball is an intermediate build product, not (just) source code. Using tarballs rather than git exposes us to additional supply chain attacks; indeed, the key activation part of the xz backdoor attack was hidden only in the tarball!

git offers better traceability than so-called “pristine” upstream tarballs. (The word “pristine” is even a joke by the author of pristine-tar!)

First, establish which upstream git tag corresponds to the version currently in Debian. For the sake of readability, I’m going to pretend that upstream version is 1.2.3, and that upstream tagged it v1.2.3.

Edit debian/watch to contain something like this:

version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/v(\d\S*)

You may need to adjust the regexp, depending on your upstream’s tag name convention. If debian/watch had a files-excluded, you’ll need to make a filtered version of upstream git.
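Before committing the watch file, it can be worth sanity-checking that the tag pattern actually matches your upstream’s tag names. A rough stand-in check with grep (the tag names below are made up, and the POSIX ERE only approximates the Perl pattern v(\d\S*)):

```shell
# Which of these (hypothetical) upstream tag names would the watch-file
# pattern v(\d\S*) match? Approximate it as an anchored POSIX ERE.
printf '%s\n' v1.2.3 v2.0.0-rc1 snapshot-2024 latest \
  | grep -E '^v[0-9][^[:space:]]*$'
```

The authoritative check is of course running uscan itself against the real debian/watch.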

git-debrebase

From now on we’ll generate our own .orig tarballs directly from git.

rationale

We need some “upstream tarball” for the 3.0 (quilt) source format to work with. It needs to correspond to the git commit we’re using as our upstream. We don’t need or want to use a tarball from upstream for this. The .orig is just needed so a nice legacy Debian source package (.dsc) can be generated.

Probably, the current .orig in the Debian archive is an upstream tarball, which may differ from the output of git-archive and may even have different contents from what’s in git. The legacy archive has trouble with differing .origs for the “same upstream version”.

So we must — until the next upstream release — change our idea of the upstream version number. We’re going to add +git to Debian’s idea of the upstream version. Manually make a tag with that name:

git tag -m "Compatibility tag for orig transition" v1.2.3+git v1.2.3~0
git push origin v1.2.3+git

If you are doing the packaging overhaul at the same time as a new upstream version, you can skip this part.
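A note on the v1.2.3~0 suffix in the tag command above: ~0 peels the (possibly annotated) tag v1.2.3 to its underlying commit, so the new compatibility tag points at the commit itself rather than at the old tag object. A scratch-repository demonstration (toy repo and tags):

```shell
# Demonstrate that "~0" resolves an annotated tag to its commit, so both
# tags end up pointing (via peeling) at the same commit object.
set -e
cd "$(mktemp -d)"
git init -q t && cd t
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "upstream release"
git tag -a -m "v1.2.3" v1.2.3
git tag -m "Compatibility tag for orig transition" v1.2.3+git 'v1.2.3~0'
git cat-file -t 'v1.2.3+git^{}'    # the new tag peels to a commit
```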

Convert the git branch

git merge

Prepare a new branch on top of upstream git, containing what we want:

git branch -f old-master         # make a note of the old git representation
git reset --hard v1.2.3          # go back to the real upstream git tag
git checkout old-master :debian  # take debian/* from old-master
git commit -m "Re-import Debian packaging on top of upstream git"
git merge --allow-unrelated-histories -s ours -m "Make fast forward from tarball-based history" old-master
git branch -d old-master         # it's incorporated in our history now

If there are any patches, manually apply them to your main branch with git am, and delete the patch files (git rm -r debian/patches, and commit). (If you’ve chosen this workflow, there should be hardly any patches.)

rationale

These are some pretty nasty git runes, indeed. They’re needed because we want to restart our Debian packaging on top of a possibly quite different notion of what the upstream is.

git-debrebase

Convert the branch to git-debrebase format and rebase onto the upstream git:

git-debrebase -fdiverged convert-from-gbp upstream/1.2.3
git-debrebase -fdiverged -fupstream-not-ff new-upstream 1.2.3+git

If you had patches which patched generated files which are present only in the upstream tarball, and not in upstream git, you will encounter rebase conflicts. You can drop hunks editing those files, since those files are no longer going to be part of your view of the upstream source code at all.

rationale

The force option -fupstream-not-ff will be needed this one time because your existing Debian packaging history is (probably) not based directly on the upstream history. -fdiverged may be needed because git-debrebase might spot that your branch is not based on dgit-ish git history.

Manually make your history fast forward from the git import of your previous upload.

dgit fetch
git show dgit/dgit/sid:debian/changelog
# check that you have the same version number
git merge -s ours --allow-unrelated-histories -m 'Declare fast forward from pre-git-based history' dgit/dgit/sid
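What the -s ours strategy does can be seen in a scratch repository: the merge records the other history as an ancestor (so future pushes are fast-forward from it) while keeping our tree byte-for-byte unchanged. Everything below is a toy stand-in for the dgit history.

```shell
# Toy demonstration of "git merge -s ours --allow-unrelated-histories":
# the other branch becomes an ancestor but contributes nothing to the tree.
set -e
cd "$(mktemp -d)"
git init -q -b main t && cd t
git config user.email demo@example.com
git config user.name demo
echo ours > f && git add f && git commit -q -m "our packaging history"
git checkout -q --orphan other          # stand-in for the old dgit import
git rm -q -f f
echo theirs > g && git add g && git commit -q -m "previous upload's history"
git checkout -q main
git merge -s ours --allow-unrelated-histories -q \
  -m "Declare fast forward from pre-git-based history" other
cat f                                   # still our content; g never appears
git merge-base --is-ancestor other main && echo fast-forwardable
```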

Change the source format

Delete any existing debian/source/options and/or debian/source/local-options.

git merge

Change debian/source/format to 1.0. Add debian/source/options containing -sn.

rationale

We are using the “1.0 native” source format. This is the simplest possible source format - just a tarball. We would prefer “3.0 (native)”, which has some advantages, but dpkg-source between 2013 (wheezy) and 2025 (trixie) inclusive unjustifiably rejects this configuration.

You may receive bug reports from over-zealous folks complaining about the use of the 1.0 source format. You should close such reports, with a reference to this article and to #1106402.
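In concrete terms, the edits above for the git merge workflow amount to the following (shown in a scratch directory standing in for the top of the package tree):

```shell
# The file edits described above, in a scratch directory that stands in
# for the top level of the package source tree.
set -e
cd "$(mktemp -d)"
mkdir -p debian/source
rm -f debian/source/options debian/source/local-options
echo '1.0' > debian/source/format
echo '-sn' > debian/source/options
cat debian/source/format debian/source/options
```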

git-debrebase

Ensure that debian/source/format contains 3.0 (quilt).

Now you are ready to do a local test build.

Sort out the documentation and metadata

Edit README.source to at least mention dgit-maint-merge(7) or dgit-maint-debrebase(7), and to tell people not to try to edit or create anything in debian/patches/. Consider saying that uploads should be done via dgit or tag2upload.

Check that your Vcs-Git is correct in debian/control. Consider deleting or pruning debian/gbp.conf, since it isn’t used by dgit, tag2upload, or git-debrebase.

git merge

Add a note to debian/changelog about the git packaging change.

git-debrebase

git-debrebase new-upstream will have added a “new upstream version” stanza to debian/changelog. Edit that so that it instead describes the packaging change. (Don’t remove the +git from the upstream version number there!)

Configure Salsa Merge Requests

git-debrebase

In “Settings” / “Merge requests”, change “Squash commits when merging” to “Do not allow”.

rationale

Squashing could destroy your carefully-curated delta queue. It would also disrupt git-debrebase’s git branch structure.

Set up Salsa CI, and use it to block merges of bad changes

Caveat - the tradeoff

gitlab is a giant pile of enterprise crap. It is full of startling bugs, many of which reveal a fundamentally broken design. It is only barely Free Software in practice for Debian (in the sense that we are very reluctant to try to modify it). The constant-churn development approach and open-core business model are serious problems. It’s very slow (and resource-intensive). It can be depressingly unreliable. That Salsa works as well as it does is a testament to the dedication of the Debian Salsa team (and those who support them, including DSA).

However, I have found that despite these problems, Salsa CI is well worth the trouble. Yes, there are frustrating days when work is blocked because gitlab CI is broken and/or one has to keep mashing “Retry”. But, the upside is no longer having to remember to run tests, track which of my multiple dev branches tests have passed on, and so on. Automatic tests on Merge Requests are a great way of reducing maintainer review burden for external contributions, and helping uphold quality norms within a team. They’re a great boon for the lazy solo programmer.

The bottom line is that I absolutely love it when the computer thoroughly checks my work. This is tremendously freeing, precisely at the point when one most needs it — deep in the code. If the price is to occasionally be blocked by a confused (or broken) computer, so be it.

Setup procedure

Create debian/salsa-ci.yml containing:

include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

In your Salsa repository, under “Settings” / “CI/CD”, expand “General Pipelines” and set “CI/CD configuration file” to debian/salsa-ci.yml.

rationale

Your project may have an upstream CI config in .gitlab-ci.yml. But you probably want to run the Debian Salsa CI jobs.

You can add various extra configuration to debian/salsa-ci.yml to customise it. Consult the Salsa CI docs.

git-debrebase

Add to debian/salsa-ci.yml:

.git-debrebase-prepare: &git-debrebase-prepare
  # install the tools we'll need
  - apt-get update
  - apt-get --yes install git-debrebase git-debpush
  # git-debrebase needs git user setup
  - git config user.email "salsa-ci@invalid.invalid"
  - git config user.name "salsa-ci"
  # run git-debrebase make-patches
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/371
  - git-debrebase --force
  - git-debrebase make-patches
  # make an orig tarball using the upstream tag, not a gbp upstream/ tag
  # https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/541
  - git-deborig

.build-definition: &build-definition
  extends: .build-definition-common
  before_script: *git-debrebase-prepare

build source:
  extends: .build-source-only
  before_script: *git-debrebase-prepare

variables:
  # disable shallow cloning of git repository. This is needed for git-debrebase
  GIT_DEPTH: 0

rationale

Unfortunately the Salsa CI pipeline currently lacks proper support for git-debrebase (salsa-ci#371) and has trouble directly using upstream git for orig tarballs (salsa-ci#541).

These runes were based on those in the Xen package. You should subscribe to the tickets #371 and #541 so that you can replace the clone-and-hack when proper support is merged.

Push this to salsa and make the CI pass.

If you configured the pipeline filename after your last push, you will need to explicitly start the first CI run. That’s in “Pipelines”: press “New pipeline” in the top right. The defaults will very probably be correct.

Block untested pushes, preventing regressions

In your project on Salsa, go into “Settings” / “Repository”. In the section “Branch rules”, use “Add branch rule”. Select the branch master. Set “Allowed to merge” to “Maintainers”. Set “Allowed to push and merge” to “No one”. Leave “Allow force push” disabled.

This means that the only way to land anything on your mainline is via a Merge Request. When you make a Merge Request, gitlab will offer “Set to auto-merge”. Use that.

gitlab won’t normally merge an MR unless CI passes, although you can override this on a per-MR basis if you need to.

(Sometimes, immediately after creating a merge request in gitlab, you will see a plain “Merge” button. This is a bug. Don’t press that. Reload the page so that “Set to auto-merge” appears.)

autopkgtests

Ideally, your package would have meaningful autopkgtests (DEP-8 tests). This makes Salsa CI more useful for you, and also helps detect and defend you against regressions in your dependencies.

The Debian CI docs are a good starting point. In-depth discussion of writing autopkgtests is beyond the scope of this article.
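As a starting point, a minimal DEP-8 skeleton looks like this (the test name smoke and its body are placeholders; a real test should exercise the installed package):

```shell
# Minimal DEP-8 skeleton. "smoke" and its body are placeholders; the
# scratch directory stands in for the package tree.
set -e
cd "$(mktemp -d)"
mkdir -p debian/tests
cat > debian/tests/control <<'EOF'
Tests: smoke
Depends: @
EOF
cat > debian/tests/smoke <<'EOF'
#!/bin/sh
set -e
# Replace with a real invocation of the installed package, e.g. its CLI.
true
EOF
chmod +x debian/tests/smoke
sh debian/tests/smoke
```

Here Depends: @ pulls in all binary packages built from the source package, which is usually what a smoke test wants.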

Day-to-day work

With this capable tooling, most tasks are much easier.

Making changes to the package

Make all changes via a Salsa Merge Request. So start by making a branch that will become the MR branch.

On your MR branch you can freely edit every file. This includes upstream files, and files in debian/.

For example, you can:

  • Make changes with your editor and commit them.
  • git cherry-pick an upstream commit.
  • git am a patch from a mailing list or from the Debian Bug System.
  • git revert an earlier commit, even an upstream one.

When you have a working state of things, tidy up your git branch:

git merge

Use git-rebase to squash/edit/combine/reorder commits.

git-debrebase

Use git-debrebase -i to squash/edit/combine/reorder commits. When you are happy, run git-debrebase conclude.

Do not edit debian/patches/. With git-debrebase, this is purely an output. Edit the upstream files directly instead. To reorganise/maintain the patch queue, use git-debrebase -i to edit the actual commits.

Push the MR branch (topic branch) to Salsa and make a Merge Request.

Set the MR to “auto-merge when all checks pass”. (Or, depending on your team policy, you could ask for an MR Review of course.)

If CI fails, fix up the MR branch, squash/tidy it again, force push the MR branch, and once again set it to auto-merge.

Test build

An informal test build can be done like this:

apt-get build-dep .
dpkg-buildpackage -uc -b

Ideally this will leave git status clean, with no modified or un-ignored untracked files. If it shows untracked files, add them to .gitignore or debian/.gitignore as applicable.

If it dirties the tree, consider trying to make it stop doing that. The easiest way is probably to build out-of-tree, if supported upstream. If this is too difficult, you can leave the messy build arrangements as they are, but you’ll need to be disciplined about always committing, using git clean and git reset, and so on.
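If you want to automate that discipline, a minimal sketch (an assumption of mine, not part of the original workflow; run it from the top of the work tree after a build) is:

```shell
# Sketch: fail loudly if the build dirtied the tree.
# Any output from --porcelain means modified files, or untracked
# files not covered by .gitignore / debian/.gitignore.
dirt=$(git status --porcelain)
if [ -n "$dirt" ]; then
    echo "tree not clean after build:" >&2
    echo "$dirt" >&2
    exit 1
fi
```

Running this in CI, or as a habit after local builds, catches stray build products before they end up committed.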

For formal binary builds, including for testing, use dgit sbuild as described below for uploading to NEW.

Uploading to Debian

Start an MR branch for the administrative changes for the release.

Document all the changes you’re going to release, in the debian/changelog.

git merge

gbp dch can help write the changelog for you:

dgit fetch sid
gbp dch --ignore-branch --since=dgit/dgit/sid --git-log=^upstream/main
rationale

--ignore-branch is needed because gbp dch wrongly thinks you ought to be running this on master, but of course you’re running it on your MR branch.

The --git-log=^upstream/main excludes all upstream commits from the listing used to generate the changelog. (I’m assuming you have an upstream remote and that you’re basing your work on their main branch.) If there was a new upstream version, you’ll usually want to write a single line about that, and perhaps summarise anything really important.

(For the first upload after switching to using tag2upload or dgit you need --since=debian/1.2.3-1, where 1.2.3-1 is your previous DEP-14 tag, because dgit/dgit/sid will be a dsc import, not your actual history.)

Change UNRELEASED to the target suite, and finalise the changelog. (Note that dch will insist that you at least save the file in your editor.)

dch -r
git commit -m 'Finalise for upload' debian/changelog

Make an MR of these administrative changes, and merge it. (Either set it to auto-merge and wait for CI, or if you’re in a hurry double-check that it really is just a changelog update so that you can be confident about telling Salsa to “Merge unverified changes”.)

Now you can perform the actual upload:

git checkout master
git pull --ff-only # bring the gitlab-made MR merge commit into your local tree
git merge
git-debpush
git-debrebase
git-debpush --quilt=linear

--quilt=linear is needed only the first time, but it is very important that first time, to tell the system the correct git branch layout.

Uploading a NEW package to Debian

If your package is NEW (completely new source, or has new binary packages) you can’t do a source-only upload. You have to build the source and binary packages locally, and upload those build artifacts.

Happily, given the same git branch you’d tag for tag2upload, and assuming you have sbuild installed and a suitable chroot, dgit can help take care of the build and upload for you:

Prepare the changelog update and merge it, as above. Then:

git-debrebase

Create the orig tarball and launder the git-debrebase branch:

git-deborig
git-debrebase quick
rationale

Source package format 3.0 (quilt), which is what I’m recommending here for use with git-debrebase, needs an orig tarball; it would also be needed for 1.0-with-diff.

Build the source and binary packages, locally:

dgit sbuild
dgit push-built
rationale

You don’t have to use dgit sbuild, but it is usually convenient to do so, because unlike sbuild, dgit understands git. Also it works around a gitignore-related defect in dpkg-source.

New upstream version

Find the new upstream version number and corresponding tag. (Let’s suppose it’s 1.2.4.) Check the provenance:

git verify-tag v1.2.4
rationale

Not all upstreams sign their git tags, sadly. Sometimes encouraging them to do so can help. You may need to use some other method(s) to check that you have the right git commit for the release.
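One such method (a sketch of my own; the tarball path is purely illustrative) is to compare the tagged git tree against the upstream release tarball:

```shell
# Sketch: check that the v1.2.4 tag matches the published tarball.
# ../pkg-1.2.4.tar.gz is a placeholder for wherever you fetched it.
mkdir -p /tmp/from-git /tmp/from-tar
git archive v1.2.4 | tar -x -C /tmp/from-git
tar -xf ../pkg-1.2.4.tar.gz --strip-components=1 -C /tmp/from-tar
diff -r /tmp/from-git /tmp/from-tar    # any differences need explaining
```

Differences are not necessarily sinister (tarballs often contain generated files), but each one should have an explanation.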

git merge

Simply merge the new upstream version and update the changelog:

git merge v1.2.4
dch -v1.2.4-1 'New upstream release.'
git-debrebase

Rebase your delta queue onto the new upstream version:

git debrebase new-upstream 1.2.4

If there are conflicts between your Debian delta for 1.2.3, and the upstream changes in 1.2.4, this is when you need to resolve them, as part of git merge or git (deb)rebase.

After you’ve completed the merge, test your package and make any further needed changes. When you have it working in a local branch, make a Merge Request, as above.

Sponsorship

git-based sponsorship is super easy! The sponsee can maintain their git branch on Salsa, and do all normal maintenance via gitlab operations.

When the time comes to upload, the sponsee notifies the sponsor that it’s time. The sponsor fetches and checks out the git branch from Salsa, does their checks, as they judge appropriate, and when satisfied runs git-debpush.

As part of the sponsor’s checks, they might want to see all changes since the last upload to Debian:

dgit fetch sid
git diff dgit/dgit/sid..HEAD

Or to see the Debian delta of the proposed upload:

git verify-tag v1.2.3
git diff v1.2.3..HEAD ':!debian'
git-debrebase

Or to show all the delta as a series of commits:

git log -p v1.2.3..HEAD ':!debian'

Don’t look at debian/patches/. It can be absent or out of date.

Incorporating an NMU

Fetch the NMU into your local git, and see what it contains:

dgit fetch sid
git diff master...dgit/dgit/sid

If the NMUer used dgit, then git log dgit/dgit/sid will show you the commits they made.

Normally the best thing to do is to simply merge the NMU, and then do any reverts or rework in followup commits:

git merge dgit/dgit/sid
git-debrebase

You should git-debrebase quick at this stage, to check that the merge went OK and the package still has a lineariseable delta queue.

Then make any followup changes that seem appropriate. Supposing your previous maintainer upload was 1.2.3-7, you can go back and see the NMU diff again with:

git diff debian/1.2.3-7...dgit/dgit/sid
git-debrebase

The actual changes made to upstream files will always show up as diff hunks to those files. diff commands will often also show you changes to debian/patches/. Normally it’s best to filter them out with git diff ... ':!debian/patches'

If you’d prefer to read the changes to the delta queue as an interdiff (diff of diffs), you can do something like

git checkout debian/1.2.3-7
git-debrebase --force make-patches
git diff HEAD...dgit/dgit/sid -- :debian/patches

to diff against a version with debian/patches/ up to date. (The NMU, in dgit/dgit/sid, will necessarily have the patches already up to date.)

DFSG filtering (handling non-free files)

Some upstreams ship non-free files of one kind or another. Often these are just in the tarballs, in which case basing your work on upstream git avoids the problem. But if the files are in upstream’s git trees, you need to filter them out.

This advice is not for (legally or otherwise) dangerous files. If your package contains files that may be illegal, or hazardous, you need much more serious measures. In this case, even pushing the upstream git history to any Debian service, including Salsa, must be avoided. If you suspect this situation you should seek advice, privately and as soon as possible, from dgit-owner@d.o and/or the DFSG team. Thankfully, legally dangerous files are very rare in upstream git repositories, for obvious reasons.

Our approach is to make a filtered git branch, based on the upstream history, with the troublesome files removed. We then treat that as the upstream for all of the rest of our work.

rationale

Yes, this will end up including the non-free files in the git history, on official Debian servers. That’s OK. What’s forbidden is non-free material in the Debianised git tree, or in the source packages.

Initial filtering

git checkout -b upstream-dfsg v1.2.3
git rm nonfree.exe
git commit -m "upstream version 1.2.3 DFSG-cleaned"
git tag -s -m "upstream version 1.2.3 DFSG-cleaned" v1.2.3+ds1
git push origin upstream-dfsg

And now, use 1.2.3+ds1, and the filtered branch upstream-dfsg, as the upstream version, instead of 1.2.3 and upstream/main. Follow the steps for Convert the git branch or New upstream version, as applicable, adding +ds1 into debian/changelog.

If you missed something and need to filter out more non-free files, re-use the same upstream-dfsg branch and bump the ds version, eg v1.2.3+ds2.

Subsequent upstream releases

git checkout upstream-dfsg
git merge v1.2.4
git rm additional-nonfree.exe # if any
git commit -m "upstream version 1.2.4 DFSG-cleaned"
git tag -s -m "upstream version 1.2.4 DFSG-cleaned" v1.2.4+ds1
git push origin upstream-dfsg

Removing files by pattern

If the files you need to remove keep changing, you could automate things with a small shell script debian/rm-nonfree containing appropriate git rm commands. If you use git rm -f it will succeed even if the git merge from real upstream has conflicts due to changes to non-free files.

rationale

Ideally uscan, which has a way of representing DFSG filtering patterns in debian/watch, would be able to do this, but sadly the relevant functionality is entangled with uscan’s tarball generation.

Common issues

  • Tarball contents: If you are switching from upstream tarballs to upstream git, you may find that the git tree is significantly different.

    It may be missing files that your current build system relies on. If so, you definitely want to be using git, not the tarball. Those extra files in the tarball are intermediate built products, but in Debian we should be building from the real source! Fixing this may involve some work, though.

  • gitattributes:

    For Reasons the dgit and tag2upload system disregards and disables the use of .gitattributes to modify files as they are checked out.

    Normally this doesn’t cause a problem so long as any orig tarballs are generated the same way (as they will be by tag2upload or git-deborig). But if the package or build system relies on them, you may need to institute some workarounds, or, replicate the effect of the gitattributes as commits in git.

  • git submodules: git submodules are terrible and should never ever be used. But not everyone has got the message, so your upstream may be using them.

If you’re lucky, the code in the submodule isn’t used, in which case you can git rm the submodule.
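To see whether the gitattributes caveat above affects your package, you can ask git which tracked files have any attributes set on them (a sketch; run it inside the work tree):

```shell
# Sketch: list tracked files with gitattributes set on them.
# With -a, files that have no attributes produce no output at all,
# so an empty result means the caveat doesn't apply to this package.
git ls-files | git check-attr --stdin -a
```

Any output is worth a look: attributes like ident, filter, or eol mean the checked-out tree could differ from what dgit and tag2upload will use.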

Further reading

I’ve tried to cover the most common situations. But software is complicated and there are many exceptions that this article can’t cover without becoming much harder to read.

You may want to look at:

  • dgit workflow manpages: As part of the git transition project, we have written workflow manpages, which are more comprehensive than this article. They’re centered around use of dgit, but also discuss tag2upload where applicable.

    These cover a much wider range of possibilities, including (for example) choosing different source package formats, how to handle upstreams that publish only tarballs, etc. They are correspondingly much less opinionated.

    Look in dgit-maint-merge(7) and dgit-maint-debrebase(7). There is also dgit-maint-gbp(7) for those who want to keep using gbp pq and/or quilt with a patches-unapplied branch.

  • NMUs are very easy with dgit. (tag2upload is usually less suitable than dgit, for an NMU.)

You can work with any package, in git, in a completely uniform way, regardless of maintainer git workflow. See dgit-nmu-simple(7).

  • Native packages (meaning packages maintained wholly within Debian) are much simpler. See dgit-maint-native(7).

  • tag2upload documentation: The tag2upload wiki page is a good starting point. There’s the git-debpush(1) manpage of course.

  • dgit reference documentation:

There is a comprehensive command-line manual in dgit(1). Description of the dgit data model and Principles of Operation is in dgit(7), including coverage of out-of-course situations.

dgit is a complex and powerful program, so this reference material can be overwhelming. We recommend starting with a guide like this one, or the dgit-…(7) workflow tutorials.

  • Design and implementation documentation for tag2upload is linked to from the wiki.

  • Debian’s git transition blog post from December.

    tag2upload and dgit are part of the git transition project, and aim to support a very wide variety of git workflows. tag2upload and dgit work well with existing git tooling, including git-buildpackage-based approaches.

    git-debrebase is conceptually separate from, and functionally independent of, tag2upload and dgit. It’s a git workflow and delta management tool, competing with gbp pq, manual use of quilt, git-dpm and so on.

git-debrebase
  • git-debrebase reference documentation:

    Of course there’s a comprehensive command-line manual in git-debrebase(1).

    git-debrebase is quick and easy to use, but it has a complex data model and sophisticated algorithms. This is documented in git-debrebase(5).




A brief history of barbed wire fence telephone networks [OSnews]

If you look at the table of contents for my book, Other Networks: A Radical Technology Sourcebook, you’ll see that entries on networks before/outside the internet are arranged first by underlying infrastructure and then chronologically. You’ll also notice that within the section on wired networks, there are two sub-sections: one for electrical wire and another for barbed wire. Even though the barbed wire section is quite short, it was one of the most fascinating to research and write about – mostly because the history of using barbed wire to communicate is surprisingly long and almost entirely undocumented, even though barbed wire fence phones in particular were an essential part of early- to mid-twentieth century rural life in many parts of the U.S. and Canada!

↫ Lori Emerson

I had no idea this used to be a thing, but it obviously makes a ton of sense. If you can have a conversation by stringing a few tin cans together, you can obviously do something similar across metal barbed wire. There’s something poetic about using one of mankind’s most dividing inventions to communicate, and thus bring people closer together.

11:35

What dreams are made of [RevK®'s ramblings]

I had a daft idea.

Connect a BBC Micro B monitor port to ESP32 and use quad SPI to clock RGB+line sync triggered on frame sync and then mapped to original resolution PNG image via WiFi/web page.

Technically tricky. Tiny bit of circuit for sync and levels. Small. Powered by the port. Could be in-line working to a monitor.

I decided not to, as you can just buy RGB to HDMI, and HDMI capture cards, and job done, to video/stream even.

And, I’d have to fix one of my beebs.

The fun is I spent all night going round in my head how possible it would be. How it needs checking where line syncs appear in data stream, and adjusting SPI clock. Working out which interlace frame is which. Trying to recall how the sync line works. Trying to figure out if any way to identify mode 7, maybe by number of scan lines, as that is only mode which is fundamentally different resolution. Wondering how quickly I could refresh and send PNGs to a browser.

Yes, this is stuff I dream about.

Of course, assuming I could fix a monitor as well, I could have ESP32 generate the output a BBC would have using the same methods.

I could probably implement the output stream logic and display modes for the graphics a BBC micro could normally do over TCP or serial even!

10:42

Better vs. done [Seth's Blog]

“There, it’s done.”

This is the production mindset and the rule of school. Pencils down. Hand it in.

The alternative is, “Sign me up for a commitment to better.”

Ship an update every day. Learn from what works, relentlessly improve what doesn’t.

The hard part about this path is persisting. Never-done projects pile up pretty quickly.

That’s precisely why they’re a competitive advantage.

Saturday, 14 February

23:07

10 Thoughts On “AI,” February 2026 Edition [Whatever]

Because it feels like a good time to do it, some current thoughts on “AI” and where it, we and I are about the thing, midway through February 2026. These are thoughts in no particular order. Some of them I’ve noted before, but will note again here mostly for convenience. Here we go:

1. I don’t and won’t use “AI” in the text of any of my published work. There are several reasons for this, including the fact that “AI”-generated text is not copyrightable and I don’t want any issues of ownership clouding my work, and the simple fact that my book contracts oblige me to write everything in those books by myself, without farming it out to either ghostwriters or “AI.” But mostly, it’s because I write better than “AI” can or ever will, and I can do it with far less energy draw. I don’t need to destroy a watershed to write a novel. I can write a novel with Coke Zero and snacks. Using “AI” in my writing would create more work for me, not less, and I really have lived my life with the idea of doing the least amount of work possible.

If you’re reading a John Scalzi book, it all came out of my brain, plain and simple. Better for you! Easier for me!

2. I’m not worried about “AI” replacing me as a novelist. Sure, someone can now prompt a novel-length work out of “AI” faster than I or any other human can write a book, and yes, people are doing just that, pumping into Kindle Unlimited and other such places a vast substrate of “AI” text slop generated faster than anyone could read it. Nearly all of it will sit there, unread, until the heat death of the universe.

Now, you might say that’s because why would anyone read something that no one actually took any effort to write, and that will be maybe about 5% of the reason. The other 95% of the reason, however, will be discoverability. Are the people pumping out the wide sea of “AI” text slop planning to make the spend for anyone to find that work? What are their marketing plans other than “toss it out, see who locates it by chance”? And if there is a marketing budget, if you can generate dozens or hundreds of “AI” text slop tomes in a year, how do you choose which to highlight? And will the purveyors of such text slop acknowledge that the work they’re promoting was written by no one?

(Answer: No. No they won’t).

I am not worried about being replaced as a novelist because I already exist as a successful author, and my publishers are contractually obliged to market my novels every time they come out. This will be the case for a while, since I have a long damn contract. Readers will know when my new books are out, and they will be able to find them in bookstores, be they physical or virtual. This is a huge advantage over any “AI” text slop that might be churned out. And while I don’t want to overstate the amount of publicity/marketing traditional publishers will do for their debut or remaining mid-list authors, they will do at least some, and that visibility is an advantage that “AI” text slop won’t have. Even indie authors, who must rely on themselves instead of a publicity department to get the word out about their work, have something “AI” text slop will never have: They actually fucking care about their own work, and want other people to see it.

I do understand it’s more than mildly depressing to think that a major market difference between “AI” text slop and stuff actual people wrote is marketing, but: Welcome to capitalism! It’s not the only difference, obviously. But it is a big one. And one that is likely to persist, because:

3. People in general are burning out on “AI.” Not just in creative stuff: Microsoft recently finally admitted that no one likes its attempt to shove its “AI” Copilot into absolutely everything, whether it needs to be there or not, and is making adjustments to its businesses to reflect that. “AI” as a consumer-facing entity rarely does what it does better than the programs and apps it is replacing (see: Google’s Gemini replacing Google Assistant), and sucks up far more energy and resources. Is your electric bill higher recently? Has the cost of a computer gone up because suddenly memory prices have doubled (or more)? You have “AI” to thank for that. It’s the solution to a problem that not only did no one actually have, but wasn’t a problem in the first place. There are other issues with “AI” larger than this — mostly that it’s a tool to capture capital at the expense of labor — but I’m going to leave those aside for now to focus on the public exhaustion and dissatisfaction with “AI” as a product category.

In this sort of environment, human-generated work has a competitive advantage, because people see it as more authentic and real (which it is, to the extent that “authentic” and “real” mean “a product of an actual human brain”), and more likely to have the ability to surprise and engage the people who encounter it. I don’t want to oversell this — humans are still as capable of creating lazy, uninspired junk as they ever were, and some people really do think of their entertainment as bulk purchases. Those vaguely sad people will be happy that “AI” gives them more, even if it’s of lesser quality. But I do think in general when people are given a choice, that they will generally prefer to give their time and money to the output of an actual human making an effort, than to the product of a belching drain on the planet’s resources whose use primarily benefits people who are already billionaires dozens of times over. Call me optimistic.

Certainly that’s the case with me:

4. I’m supporting human artists, including as they relate to my own work. I’ve noted before that I have it as a contractual point that my book covers, translations and copyediting have to be done by humans. This is again both a practical issue (re: copyrights, quality of work, etc) and a moral one, but also, look, I like that my work pays other humans, and I want that to continue. Also, in my personal life, I’m going to pay artists for stuff. When I buy art, I’m going to buy from people who created it, not generated it out of a prompt. I’m not going to knowingly post or promote anything that is not human-created. Just as I wish to be supported by others, I am going to support other artists. There is no downside to not promoting/paying for “AI” generated work, since there was no one who created it. There is an upside to promoting and paying humans. They need to eat and pay rent.

“But what if they use AI?” In the case of the people working on my own stuff, it’s understood that the final product, the stuff that goes into my book, is the result of their own efforts. As for everything else, well, I assume most artists are pretty much like me: using “AI” for their primary line of creativity is just introducing more work, not less. Also I’m going to trust other creators; if they tell me they’re not using “AI” in their finished work then I’m going to believe them in the absence of a compelling reason not to. I don’t particularly have the time or interest in being the “AI” police. Anyway, if they’re misrepresenting their work product, that eventually gets found out. Ask a plagiarist about that.

With all that said:

5. “AI” is Probably Sticking Around In Some Form. This is not an “‘AI’ Is Inevitable and Will Take Over the World” statement, since as noted above people are getting sick of it being aggressively shoved at them, and also there are indications that a) “this is the worst it will ever be” is not true of AI, as people actively note that recent versions of ChatGPT were worse to use than earlier versions, and b) investors are getting to the point of wanting to see an actual return on their investments, which is the cue for the economic bubble around AI to pop. This is going to be just great for the economy. “AI,” as the current economic and cultural phenomenon, is likely to be heading for a fall.

Once all that drama is done and we’ve sorted through the damage, the backend of “AI” and its various capabilities will still be around, either relabeled or as is, just demoted from being the center of the tech universe and people making such a big deal about it, scaled down and hopefully more efficient. I understand that the “AI will probably persist” position is not a popular one in the creative circles in which I exist, and that people hope it vaporizes entirely, like NFTs and blockchains. I do have to admit I wouldn’t mind being wrong about this. But as a matter of capital investment and corporate integration, NFTs, etc are a blip compared to what’s been invested in “AI” overall, and how deep its use has sunk into modern capitalism (more on that in a bit).

Another reason I think “AI” is likely to stick around in some form:

6. “AI” is a marketing term, not a technical one, and encompasses different technologies. The version that the creative class gets (rightly) worked up about is generative “AI,” the most well-known versions of which were trained on vast databases of work, much of which was and is copyrighted and not compensated for. This is, however, only one subset of a larger group of computational systems which are also called “AI,” because it’s a sexy term that even non-nerds have heard of before, and far less confusing than, say, “neural networks” or such. Not all “AI” is as ethically compromised as large-scale generative “AI,” and a lot of it existed and was being used non-controversially before generative “AI” blew up as the wide-scale rights disaster it turned out to be.

It’s possible that “AI” as a term is going to be forever tainted as a moral hazard, disliked by the public and seen as a promotions drag by marketing departments. If and when that happens, a lot of things currently hustled under the “AI” umbrella will be quietly removed from it, either returning to previous, non-controversial labels or given new labels entirely. Lots of “AI” will still be around, just no one will call it that, and outside of obvious generative “AI” that presents rights issues, fewer people will care.

On the matter of generative “AI,” here’s a thought:

7. There were and are ethical ways to have trained generative “AI” but because they weren’t done, the entire field is suspect. Generative “AI” could easily have been trained solely on material in the public domain and/or on appropriately-licensed Creative Commons material, and an opt-in licensing gateway to acquire and pay for copyrighted work used in training, built and used jointly by the companies needing training data, could have happened. This was all a solvable problem! But OpenAI, Anthropic, et al decided to train first, ask forgiveness later, on the idea that it would be cheaper simply to do it first and litigate later. I’m not entirely sure this will turn out to be true, but it is possible that at this late stage, some of the companies will go under before any settlements can be achieved, which will have the same effect.

There are companies who have chosen to train their generative models with compensation; I know of music software companies that make a point of showing how artists they worked with were both paid for creating samples and other material, and get paid royalties when work generated from those samples, etc is made by people using the software. I think that’s fine! As long as everyone involved is happy with the arrangement, no harm, and no foul. But absent that sort of clear and unambiguous declaration of provenance and compensation regarding training data, one has to assume that any generative “AI” has used stolen work. It’s so widely pervasive at this point that this has to be a foundational assumption.

And here is a complication:

8. The various processes lumped into “AI” are likely to be integrated into programs and applications that are in business and creative workflows. One, because they already were prior to “AI” being the widely-used rubric, and two, because these companies need to justify their investments somehow. Some of these systems and processes aren’t tainted by the issues of “generative AI” but many of them are, including some that weren’t previously. When I erase a blotch in an image with Photoshop, the process may or may not use Generative AI and when it does, it may or may not use Adobe’s Firefly model (which Adobe maintains, questionably, is trained only on material it has licensed).

Well, don’t use Photoshop, I hear you say. Which, okay, but I have some bad news for you: Nearly every photoediting suite at this point incorporates “AI” at some point in its workflow, so it’s six of one and half dozen of the other. And while I am a mere amateur when it comes to photos, lots of professional photographers use Adobe products in their workflow, either because they’ve been using it for years and don’t want to train on new software (which, again, probably has “AI” in its workflow), or they’re required to use it by their clients because it’s the “industry standard.” A program being the “industry standard” is one reason I use Microsoft Word, and now that program is riddled with “AI.” At a certain point, if you are using 21st century computer-based tools, you are using “AI” of some sort, whether you want to or not. Some of it you can turn off or opt out of. Some of it you can’t.

(Let’s not even talk about my Google Pixel Phone, which is now so entirely festooned with “AI” that it’s probably best to think of it as an “AI” computer with a phone app, than the other way around.)

This is why earlier in this piece, I talk about the “final product” being “AI”-free — because it’s almost impossible at this point to avoid “AI” in computer-based tools, even if one wants to. Also, given the fact that “AI” is a marketing rather than a technical term, what the definition of “AI” is, and what is an acceptable level of use, will change from one person to another. Is Word’s spellcheck “AI”? Is Photoshop’s Spot Healing brush tool? Is Logic Pro’s session drummer? At what point does a creative tool become inimical to creation?

(On a much larger industrial scale, this will be an extremely interesting question when it comes to animation, CGI and VFX. “AI” is already here in video games with DLSS, which upscales and adds frames to games; if similar tech isn’t already being used for inbetweening in animation, it’s probably not going to be long until it is.)

Again, I’m not interested in being, nor have the time to be, the “AI” police. I choose to focus on the final product and the human element in that, because that is honestly the only part of the process that I, and most people, can see. I’m certainly not going to penalize a creative person because Adobe or Microsoft or whomever incorporated “AI” into a tool they need to use in order to do their work. I would be living in a glass house if I threw that particular stone.

9. It’s all right to be informed about the state of the art when it comes to “AI.” Do I use “AI” in my text? No. Do I think it makes sense to have an understanding of where “AI” is at, to know how the companies who make it create a business case for it, and to keep tabs on how it’s actually being used in the real world? Yes. So I check out the latest iterations of ChatGPT/Claude/Gemini/Copilot, etc (I typically steer clear of Grok if only because I’m not on the former Twitter anymore) and the various services and capabilities they offer.

The landscape of “AI” is still changing rapidly, and if you’re still at the “lol ‘AI’ can’t draw hands” level of thinking about the tech, you’re putting yourself at a disadvantage, particularly if you’re a creative person. Know your enemy, or at least, know the tools your enemies are making. Again, I’m not worried about “AI” replacing me as a novelist. But it doesn’t have to be at that level of ability to wreak profound and even damaging changes to creative fields. We see that already.

One final, possibly heretical thought:

10. Some people are being made to use “AI” as a condition of their jobs. Maybe don’t give them too much shit for it. I know at least a couple of people who were recently hired for work, who were told they needed to be fluent in computer systems that had “AI” as part of their workflow. Did they want or need to use those systems to do the actual job they were hired for? Almost certainly not! Did that matter? Nope! Was it okay that their need to eat and pay rent outweighed their ethical annoyance/revulsion with “AI” and the fact it was adding more work, not less, onto their plate? I mean (waves at the world), you tell me. Personally speaking, I’m not the one to tell a friend that they and their kid and cat should live in a Toyota parked at a Wal-Mart rather than accept a corporate directive made by a mid-level manager with more jargon in their brain than good sense. I may be a softie.

Be that as it may, to the extent you can avoid “AI,” do so, especially if you have a creative job, where it’s almost always just going to get in your way. Your fans, the ones that exist and the ones you have yet to make, will appreciate that what they get from you is from you. That’s what people mostly want from art: Entertainment and connection. You will always be able to do that better than “AI.” There is no statistical model that can create what is uniquely you.

— JS

19:07

Steinar H. Gunderson: A286874(15) >= 42 [Planet Debian]

The following 42 15-bit values form a 2-disjunctive matrix (that is, no union of two values contains or equals a third value), or equivalently, a superimposed code:

000000000011111
000000011100011
000000101101100
000001010110100
000001101010001
000001110001010
000010011011000
000100100110010
000110010000110
000110100001001
000111001100000
001000110000101
001010000110001
001010101000010
001011000001100
001100001010100
001100010101000
001101000000011
010001000101001
010010001000101
010010110100000
010011000010010
010100001001010
010100010010001
010101100000100
011000000100110
011000100011000
011001011000000
100001001000110
100010000101010
100010100010100
100011010000001
100100000100101
100100111000000
100101000011000
101000001001001
101000010010010
101001100100000
110000001110000
110000010001100
110000100000011
111110000000000

This shows that A286874 a(15) >= 42.

If I had to make a guess, I'd say the equality holds, but I have nowhere near the computing resources to actually find the answer for sure. Stay tuned for news about a(14), though.

18:49

Link [Scripting News]

I always objected to browsers trying to hide the feeds. I come from NYC and rode the subway to school every day in high school. The things you see! It's all out there for the looking and breathing. Lift the hood on a car. Look at all those wires and hoses, what do they do. I hope they don't kill me. Whoever made the decision at Microsoft or Firefox or wherever that feeds needed to be obfuscated, some advice -- be more respectful of your users. The web is the medium that had a View Source command. You're supposed to take a look. Don't forget the Back button if you don't like what you see. Something funny, if only life had a Back button.

Link [Scripting News]

Speaking of the Back button, that's the problem with tiny-little-text-box social networks. No links. So guess what: the Back button, one of the best inventions ever, isn't part of your reading and writing world. I guess this is like the LA streetcar conspiracy, where the car companies bought the lines and shut them down?

Link [Scripting News]

One more thing and then I gotta go. I think it's time for the AIs to compete with Wikipedia. It's filled with hallucinations. Make it a community thing, let the people be involved, but do a better job of presentation, and validate what's written. Don't let these things become so territorial. We want the facts, not who has the best PR.

18:00

Link [Scripting News]

To my WordPress developer friends: how about making the RSS feed prettier and easier to read? Properly indenting it would make a big difference. I prefer encoding individual characters to CDATA. Those two things to start. It really does matter how readable this stuff is. For comparison, here's the RSS feed that Old School generates, the software that renders my blog.
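For illustration only (the URL and titles here are invented, not taken from Old School's actual output), this is the kind of difference the post is asking for: an indented item with characters entity-encoded, followed by the CDATA style it argues against:

```xml
<!-- Indented, with individual characters entity-encoded: -->
<item>
  <title>Caf&#233; notes</title>
  <link>http://example.com/2026/02/13.html</link>
  <description>Back buttons &amp; links matter.</description>
</item>

<!-- The CDATA alternative: -->
<item>
  <title><![CDATA[Café notes]]></title>
  <link>http://example.com/2026/02/13.html</link>
  <description><![CDATA[Back buttons & links matter.]]></description>
</item>
```

Both are valid RSS; the first survives viewing the raw feed in any editor without surprises, which is the readability the post is after.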

Link [Scripting News]

It's all-star weekend in the NBA which I've never seen the point of. As if sport is anything but a simulation of what we were born to do -- compete and cooperate. My team is great, your team sucks. It's fun the same way slapstick for some weird reason is funny. All it takes to get a laugh is trip and fall on your face. It's funny just thinking about it. Doesn't seem very nice but there it is.

17:07

Vim 9.2 released [LWN.net]

Version 9.2 of the Vim text editor has been released. "Vim 9.2 brings significant enhancements to the Vim9 scripting language, improved diff mode, comprehensive completion features, and platform-specific improvements including experimental Wayland support." Also included is a new interactive tutor mode.

Upcoming Speaking Engagements [Schneier on Security]

This is a current list of where and when I am scheduled to speak:

  • I’m speaking at Ontario Tech University in Oshawa, Ontario, Canada, at 2 PM ET on Thursday, February 26, 2026.
  • I’m speaking at the Personal AI Summit in Los Angeles, California, USA, on Thursday, March 5, 2026.
  • I’m speaking at Tech Live: Cybersecurity in New York City, USA, on Wednesday, March 11, 2026.
  • I’m giving the Ross Anderson Lecture at the University of Cambridge’s Churchill College at 5:30 PM GMT on Thursday, March 19, 2026.
  • I’m speaking at RSAC 2026 in San Francisco, California, USA, on Wednesday, March 25, 2026.

The list is maintained on this page.

15:14

Bits from Debian: DebConf 26 Registration and Call for Proposals are open [Planet Debian]

Registration and the Call for Proposals for DebConf 26 are now open. The 27th edition of the Debian annual conference will be held from July 20th to July 25th, 2026, in Santa Fe, Argentina.

The conference days will be preceded by DebCamp, which will take place from July 13th to July 19th, 2026.

The registration form can be accessed on the DebConf 26 website. After creating an account, click "register" in the profile section.

As always, basic registration for DebConf is free of charge for attendees. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories to help cover the costs of organizing the conference and to support subsidizing other community members.

The last day to register with guaranteed swag is June 14th.

We also encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are also available. More details can be found on the bursary info page.

The last day to apply for a bursary is April 1st. Applicants should receive feedback on their bursary application by May 1st.

Call for proposals

The call for proposals for talks, discussions and other activities is also open. To submit a proposal you need to create an account on the website, and then use the "Submit Talk" button in the profile section.

The last day to submit and have your proposal be considered for the main conference schedule, with video coverage guaranteed, is April 1st.

Become a sponsor

DebConf 26 is also accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org or visit the DebConf 26 website.

See you in Santa Fe,

The DebConf 26 Team

09:42

Things that feel risky [Seth's Blog]

Often aren’t.

In fact, they might be the safest way forward.

03:42

A Friday Mlem For You [Whatever]

Saja is turning into a handsome devil. Smudge is unimpressed nevertheless.

— JS

01:14

Haiku further improves its touchpad support [OSnews]

January was a busy month for Haiku, with their monthly report listing a metric ton of smaller fixes, changes, and improvements. Perusing the list, a few things stand out to me, most notably continued work on improving Haiku’s touchpad support.

The remainder of samuelrp84’s patchset implementing new touchpad functionality was merged, including two-finger scrolling, edge motion, software button areas, and click finger support; and on the hardware side, driver support for Elantech “version 4” touchpads, with experimental code for versions 1, 2, and 3. (Version 2, at least, seems to be incomplete and had to be disabled for the time being.)

↫ Haiku’s January 2026 activity report

On a related note, the still-disabled I2C-HID saw a number of fixes in January, and the rtl8125 driver has been synced up with OpenBSD. I also like the changes to kernel_version, which now no longer returns some internal number like BeOS used to do, instead returning B_HAIKU_VERSION; the uname command was changed accordingly to use this new information. There’s some small POSIX compliance fixes, a bunch of work was done on unit tests, and a ton more.

00:28

Eddie Lin's District 2 Is Racially Diverse [The Stranger]

The former assistant attorney for the Seattle City Attorney’s Office easily won The Stranger's endorsement. His big message? Seattle needs more housing, and from all sectors: private, parastatal, social. But is he too soft on cops? by Charles Mudede

I enter Cal Anderson Park. It’s 10:15 a.m. The sky is bright blue with long and high clouds. The sun is low. And a seagull stands on top of the city’s best fountain. What’s on its mind? On the concrete rim that circles the fountain’s pooled water, someone wrote with a spray can: “Death to AmeriKKK!” Now that’s on my mind. US fascism. 

Unbeknownst to me, Councilmember Eddie Lin is also in the park, also near the fountain. In November 2025, he won District 2’s special election by nearly 40 points. On December 2, he was sworn in. Today, we are meeting at The Stranger’s office for a quick check-in. How is it going so far? Is he working on his promises? Is the job harder than he expected? That sort of thing. While talking on the phone about some community matter, Lin spots me. Does he also notice the contemplative seagull on the fountain or the anti-fascist graffiti?

At 10:35, we are in The Stranger’s conference room. It has a view of the rainbow crosswalk next to the Wildrose and, in the distance, two towers that will soon have the repurposed corpse of a Boeing 747 near the ground floors between them. I was introduced to Eddie Lin in this conference room in June of 2025 for the SECB endorsement meeting for the primaries. The former assistant attorney for the Seattle City Attorney’s Office easily won our endorsement. His big message? Seattle needs more housing, and from all sectors: private, parastatal, social. 

“I saw you in the park,” Lin says to me as he places his phone on the conference table. “Funny you should bring that up,” I say. “I was thinking about fascism in the US while crossing the park. And [you] being not only a person of color but the one who represents the most diverse district in Seattle, I want to begin by talking about ICE. When they come, they are coming for us. Is there really anything that can be done?” I also live in District 2.

Lin explains that he and Erika Evans, the new city attorney, are looking at the options closely and working with immigrant professionals and activists to prepare and protect all of the members of the community, many of whom are from Somalia, from what’s happening in Minneapolis. But, I say, ICE still just breaks the law. They break into homes without warrants. We saw this happen to an American citizen, ChongLy Thao. ICE just disregarded the law. Treated the Hmong American with no record like a criminal. Trump has made it loud and clear that this agency operates outside of conventional law. They can use excessive force and even act as if they can kill people with impunity. How can Seattle prepare for a federal organization that’s operating like a street gang? 

After a moment's thought, Lin puts on his lawyer hat and says it like it is: “There are a couple things for me. One: There are certain crimes committed [by ICE agents] that are not just federal crimes. They're also state crimes. Murder is a state crime that does not [in Washington] have a statute of limitation. And it can't be pardoned by the president, and so, you know, I think, these federal agents need to be worried about that. The president is trying to send this message that he will protect them and pardon them. He can't pardon a state crime. So, he's going to be out of office someday. [And] Republicans will not be in control forever. They can't protect these people forever. So, I think we need to make these agents understand this. Yes, the statute of limitations for excessive force is something like five years. Yes, I would like it to be longer. But that is the way I’m looking at it. You are not protected from state crimes.”

When I ask about how things have been since he took office, he brightens a little and explains that, to be honest, not much has happened. He was sworn in. He made the transition, and he is now settling in. Then I ask about his top priority: affordable housing. Any new developments in that direction? 

He is honest. Not much has happened in the immediate sense because housing is always a long-term commitment. “Even if we change zoning rules,” he says, “it’s still going to take years to see the results. The kind of housing crisis we are in now was caused many years ago. … But we still have to deal with the homeless crisis. That has to be done right now. … So, I support things like the tiny home villages or [other forms of] transitional housing. I'm supportive of [Mayor Katie Wilson’s] focus on that and want to do what I can to support her. Whether it's with resources, finding locations, or permitting, or land-use issues. But I think the whole city should be a part of transitional housing. Not just South Seattle.”

I bring up the fact that, though he’s considered a progressive, some think he is a touch soft on cops. He seems a little surprised by this, but it was mentioned in The Stranger’s 2025 primary endorsement. In response, Lin brings up that he, along with Alexis Mercedes Rinck and Rob Saka, voted against the police guild contract because it was woefully inadequate when it came to police accountability. Lin leaves it at that. Action counts more than words.

Friday, 13 February

23:42

Microsoft Store gets another CLI tool [OSnews]

We often lament Microsoft’s terrible stewardship of its Windows operating system, but that doesn’t mean that they never do anything right. In a blog post detailing changes and improvements coming to the Microsoft Store, the company announced something Windows users might actually like?

A new command-line interface for the Microsoft Store brings app discovery, installation and update management directly to your terminal. This enables developers and users with a new way to discover and install Store apps, without needing the GUI. The Store CLI is available only on devices where Microsoft Store is enabled.

↫ Giorgio Sardo at the Windows Blogs

Of course, this new command-line frontend to the Microsoft Store comes with commands to install, update, and search for applications in the store, but sadly, it doesn’t seem to come with an actual TUI for browsing and discovery, which is a shame. I sometimes find it difficult to use dnf to find applications: it’s not always obvious which search terms to use, which exact spelling packagers chose, or which words they used in the description. In other words, it may not always be clear if the search terms you’re using are the ones that will find the application you need.

If package managers had a TUI to enable browsing for applications instead of merely searching for them, the process of using the command line to find and install applications would be much nicer. Arch has this third-party TUI called pacseek for its package manager, and it looks absolutely amazing. I’ve run into a rudimentary dnf TUI called dnfseek, but it’s definitely not as well-rounded as pacseek, and it also hasn’t seen any development since its initial release. I couldn’t find anything for apt, but there’s always aptitude, which uses ncurses and thus fulfills a similar role.

To really differentiate this new Microsoft Store command-line tool from winget, the company could’ve built a proper TUI, but instead it seems to just be winget with nicer formatted output that is limited to just the Microsoft Store. Nice, I guess.

22:07

I Saw U: Making Eye Contact at the ICE Protest, Winning a Prize on the Claw Machine, and Looking Like Chad Michael Murray with Kurt Cobain Hair [The Stranger]

Did you see someone? Say something! by Anonymous

Mullet 4 Mullet at the Ice Protest: Revolutionary Eye Contact

You: pink/purple curly mullet cutie carrying a long sign Me: gray/black mullet w boot sign. We locked eyes many a time, let’s go on a date? FUCK ICE!

Southcenter skate claw machine

You: short masc in a hat. Me: red jacket zombie shirt. You watched me win a prize for my friend’s birthday. I should have won you one too! Forgive me?

Benbow 80's Nohjty

You're Lisa charming smile in black shoulder length hair, my name is Stanton dark hair wearing black with a Debra Harry Blonde t shirt coffee?

Say She She @ Showbox, 1/31

You: sleepy eyes, strong nose, beanie. Me: glittery earrings, glasses, strappy top. Smiled at you again as I left with my (platonic) pal. Coffee?

yoga on 1/21..more than just the sauna making me sweat

You warned me about the faulty bathroom lock..I eavesdropped on your conversation about dating shows..let me be a contender? :)

SIFF Uptown 2025

You used to work in the ticket booth. You looked like Chad Michael Murray with Kurt Cobain hair. Where did you go!

Best Bangs at Macrina Bakery

You're the stunning tall beauty with bangs and a great smile at Macrina Bakery in Maple Leaf. We talked about movies. I'd love to continue the convo!

Goth Cutie outside of Corner Pocket

We talked for a little bit about music, but I was too nervous to ask about your number. Give me a chance to make up for my slip.

Is it a match? Leave a comment here or on our Instagram post to connect!

Did you see someone? Say something! Submit your own I Saw U message here and maybe we'll include it in the next roundup!

21:28

I Have Three Mouths And I Must Chew Villains [Penny Arcade]

I think we have to just establish - verbally, conceptually - that we have entered into a kind of vortex where traditional assumptions about the industry have been annihilated. Megafauna are collapsing under their own weight; they're loaning their treasured IP to tiny, scampering creatures so that something useful might be done with it at all. They're slicing and sectioning themselves into charcuterie boards, or tarting themselves up for handsome saviors. The return of the demo, an attempt to thumb the scale in a world where making a good game appears to be a solved problem but people knowing you exist is increasingly impossible, means I've bought more games in the last two weeks than I have in the last two years. Escapees from "triple a" have gnawed at its root, drawing from it a dark strength. Or, you know, gotten utterly annihilated. Like I said: Vortex.

20:42

Link [Scripting News]

I got the most remarkable headphones. Read a review in Wired, and was sold. On sale for $109. Open ear buds from Anker. When I first put them on and played something, I had a jolt. The sound appeared to be blasting from the speaker on my laptop. I rushed to try to turn it down and realized it was in my head. Never been so impressed. They don't go inside your ear; the speaker is poised above the ear. Later, when I got out of my car and the headphones automatically connected via Bluetooth -- it was a podcast -- I thought the person was talking to me on the street in the middle of nowhere. I laughed at how I had been tricked so thoroughly, twice. It keeps happening. Music is incredible. The best sound I've ever heard from headphones. So totally worth the money.

[1287] The New Natani [Twokinds]

Comic for February 13, 2026

19:56

Both Sides Now – DORK TOWER 13.02.26 [Dork Tower]

Most DORK TOWER strips are now available as signed, high-quality prints, from just $25!  CLICK HERE to find out more!

HEY! Want to help keep DORK TOWER going? Then consider joining the DORK TOWER Patreon and ENLIST IN THE ARMY OF DORKNESS TODAY! (We have COOKIES!) (And SWAG!) (And GRATITUDE!)

19:49

Look, I Didn’t Want to Like Emerald Fennell’s Wuthering Heights [The Stranger]

But I Would Absolutely Let Jacob Elordi Be Mean to Me
by Audrey Vann

I think anyone who has read Emily Brontë’s Wuthering Heights can agree that it’s a challenging read. Perspectives change chapter to chapter, Joseph the servant’s dialogue is basically unreadable, and most of Heathcliff and Cathy’s love story is played out through the next generation after—spoiler alert—Cathy dies during childbirth. In Emerald Fennell’s adaptation, she focuses on the most engaging elements of the book: Heathcliff and Cathy’s love, passion, and mutual destruction. 

The film never claimed to be a perfect mirror of the book. Fennell herself said, when explaining the quotes around the title of her adaptation: “What I can say is I'm making a version of [the book]. There's a version that I remembered reading that isn't quite real. And there's a version [where] I wanted stuff to happen that never happened. And so it is Wuthering Heights, and it isn't.” 

With this in mind, she was successful. Each character felt like a doll Fennell uses to play out her version of the story—the obsessive childhood bond between Heathcliff and Cathy at Wuthering Heights (Cathy’s family home), Cathy’s eventual choice of social status over love, her early death, and Heathcliff’s lifelong spiral into revenge. A literal doll motif continuously shows up in the film, too, beginning with a young Cathy, who watches a man being hanged while tightly clutching her doll. Again, when Cathy marries the wealthy suitor Linton (Shazad Latif), and her new sister-in-law, Isabella (Alison Oliver), gifts her a handmade doll made using Cathy’s own collected hair. And, most notably, in the large dollhouse replica of Thrushcross Grange, the Linton estate, that stands looming behind the dining-room table. So, who better to play the starring role than Barbie herself, Margot Robbie? 

At first, I was highly skeptical of the casting choices. Jacob Elordi was not at all how I imagined the scrappy, tortured, and probably-not-white orphan boy Heathcliff. But the longer I sit with the film, the more I can accept that he’s one of the only actors who could make this complex character work on screen. Brontë’s Heathcliff is cruel, insensitive, and brooding, and throughout the novel, I thought, why in the world are these women lusting after such an unlikable brute? But Elordi as Heathcliff—sweaty, grinning, and aroused—makes it make sense. You, too, would fold under the spell of his dark eyes with his fingers in your mouth. And, although Fennell’s interpretation of Linton is far more likable than Brontë’s, the choice is clear: Heathcliff eats Cathy out and licks the tears from her cheeks. Linton rails her in missionary while she dissociates.

Perhaps the most surprising element (and what I anticipate being the most controversial) is the innocent Isabella’s consent to Heathcliff’s cruel treatment of her. In the film, Heathcliff seduces Isabella (and later asks for her hand in marriage) only to punish Cathy, which he says explicitly. “Do you want me to stop?” he asks, several times, while taking off her nightgown. Isabella shakes her head no. After they marry, Nelly (Cathy’s companion, played by Hong Chau) stops by to see the newlyweds, only to find Isabella sporting a dog collar and chained up on her hands and knees, literally eating out of Heathcliff's hands. Nelly, horrified, attempts to free her, only to realize that the chains are not attached to anything—Isabella is a willing participant in this sadistic relationship. (Believe it or not, this is not how things go in Brontë’s 1847 novel.)

Elordi has managed to become the internet’s boyfriend through playing frightening men (see: Euphoria, Priscilla, Frankenstein). I would absolutely let that man be mean to me, and that’s what makes this film an alluring dollhouse to play inside. 

Many people, including myself, were outraged upon the trailer’s release due to the not-period-accurate costumes. How silly I feel about that now! While living at Wuthering Heights, Cathy dresses in tattered cotton skirts and billowing linen blouses. Once she marries Linton, everything turns synthetic—iridescent lamé dresses, tight corsets, gaudy costume jewelry, and rhinestones glued to her cheeks—essentially, the wardrobe I would have dreamed up for myself as a 5-year-old who was obsessed with princesses and pop stars. The costuming plays a larger role in the film to show that Cathy is actually restricted by Linton, despite his wealth and status, and can only breathe in the arms of Heathcliff. To sum it up, the costumes are, as Aretha Franklin once said so eloquently: “great gowns, beautiful gowns.” Fun to look at, but not so fun to be trapped inside of.

This contrast between organic and synthetic is also present in Charli XCX’s soundtrack, which is equal parts epic string score and moody electronic pop. It isn’t as jarring in this period piece as you might imagine it to be. 

As other critics have already noted, this is an extremely wet movie, soaked in uncooked egg, blood, spit, tears, snail slime, cooking oil, and rain, which adds a visceral quality to the film. Its lush, tactile visuals evoke whimsical movies of the past like the arthouse pornography of Polish director Walerian Borowczyk, surrealist stop-motion master Jan Švankmajer, the later films of Ingmar Bergman (Cries and Whispers, Fanny and Alexander), and Sofia Coppola’s Marie Antoinette. All things I suggest watching if you find yourself enjoying this decadently horny movie. 

Look, I didn’t want to like it. I walked into the theater as a skeptic, but left feeling enraptured by Fennell's vision. I give it four out of five broken eggs (see the movie and you’ll understand). 

19:00

How can I distinguish between the numeric keypad 0 and the top-row 0 in the WM_CHAR message? [The Old New Thing]

Last time, we looked at how to distinguish the numeric keypad 0 and the top-row 0 in the WM_KEY­DOWN message. We may as well look at the analogous table for WM_CHAR.

Event                       wParam          Extended?
Numpad0 with NumLock on     VK_0            0
Numpad0 with NumLock off    (no WM_CHAR)
Ins key                     (no WM_CHAR)
0 on top row                VK_0            0

I got the name VK_0 from this comment block in winuser.h.

/*
 * VK_0 - VK_9 are the same as ASCII '0' - '9' (0x30 - 0x39)
 * 0x3A - 0x40 : unassigned
 * VK_A - VK_Z are the same as ASCII 'A' - 'Z' (0x41 - 0x5A)
 */

Uh-oh. The extended bit doesn’t distinguish between the two. They both show up as VK_0, non-extended.

What changes is something not in the above table: The scan code.

So let’s convert the scan code back to a virtual key.

auto vk_from_scan = MapVirtualKey((lParam >> 16) & 0xFF, MAPVK_VSC_TO_VK);
Event                       wParam          Extended?   vk_from_scan
Numpad0 with NumLock on     VK_0            0           VK_INSERT
Numpad0 with NumLock off    (no WM_CHAR)
Ins key                     (no WM_CHAR)
0 on top row                VK_0            0           VK_0

So we can infer which zero was pressed by taking the scan code, mapping it to a virtual key, and seeing whether it’s the Ins key (from the numeric keypad) or the 0 key (from the top row).

But wait, we’re not done yet.

There are ways to type the character 0 without using the numeric keypad or the top row. For example, you can hold the Alt key and then type 4,8 on the numeric keypad, and that will type a 0. I tried it out, and the vk_from_scan was VK_MENU, which is the virtual key code for the Alt key. Another way of entering a 0 is by using an input method editor (IME). Or there might be a custom keyboard layout that generates a 0 through some wacky chord sequence.

Therefore, if the vk_from_scan is neither VK_INSERT nor VK_0, you have to conclude that the 0 was entered by some means other than the numeric keypad or the top row.

The post How can I distinguish between the numeric keypad 0 and the top-row 0 in the WM_CHAR message? appeared first on The Old New Thing.

The Best Bang for Your Buck Events in Seattle This Weekend: Feb 13–15, 2026 [The Stranger]

Sound Off!, Tết in Seattle, and More Cheap & Easy Events Under $20
by EverOut Staff

There's lots to love in our weekend guide, with festive events from Sound Off! 2026 to Tết in Seattle and from the Petit Troll Mardi Gras Parade to Sweetheart Book Fair. Plus, if the urge for new ink strikes, go get a Friday the 13th flash tattoo!

FRIDAY LIVE MUSIC

Slumber Party: Dalaine, Henry Mansfield
A concert in an artisanal chocolate shop that encourages wearing pajamas? Sign me up. Owned and operated by Seattle musician and chocolatier Aaron Lindstrom, Cocoa Legato hosts a few intimate shows each week, befitting of the shop's name—"legato" is an Italian musical term used to describe music played in a smooth and flowing way. This Friday's show features two incredible local acts: queer indie pop artist Henry Mansfield with a string ensemble and folk singer-songwriter Dalaine, whose 2023 NPR Tiny Desk entry was highlighted as one of their "Entries We Love." Bring your crush, your kids, your blankets and stuffies, and get ready for a Valentine's eve that's sweet in more ways than one. SHANNON LUBETICH
(Cocoa Legato, Greenwood, $15)

18:14

Local Lawmakers Are Finally Moving Against ICE [The Stranger]

Impeding ICE with the combined powers of the city, county and the port. by Hannah Murphy Winter

This morning, Seattle City Councilmember Alexis Mercedes Rinck announced legislation to bar new or expanded detention facilities from being built within city limits. At the same time, Seattle Port Commissioner Toshiko Hasegawa announced an order that would bar any expansion of immigration activity on Port land, and a second that provides civil rights education to anyone working on Port property.

Their announcement follows an anti-ICE-filled week. On Tuesday, City Council’s public safety committee passed a bill from Councilmember Maritza Rivera that struck dated language from the Municipal Code requiring city employees to “cooperate with, not hinder” immigration enforcement. On the same day, the Port Commission unanimously passed an order requiring that Port police clearly identify themselves so the public is less likely to confuse them with immigration enforcement. And yesterday, the County took action: County Executive Girmay Zahilay signed an executive order barring ICE from non-public spaces on King County-owned properties (like Mayor Katie Wilson did in Seattle last month), and County Councilmember Teresa Mosqueda introduced a bill to codify his order into law. 

None of this can stop ICE from operating in Seattle. But it can impede the agency.

The Detention Moratorium 

As of last month, ICE was holding more than 73,000 people in detention across the country—a record high—and they expanded into 104 new detention facilities, almost doubling from the previous year. ICE is not releasing people on bail, so that number will continue to multiply. So, too, will the number of detention facilities. Trump’s Big Beautiful Bill accounted for that. He set aside $45 billion, enough funding to imprison another 135,000 people in new facilities by 2029, according to the American Immigration Council.

The closest ICE detention facility to Seattle is the Northwest Detention Center (NWDC) in Tacoma. But in December, the federal government posted a pre-solicitation notice from the Department of Homeland Security and ICE, putting local contractors on notice that they were looking to build a facility about the same size as the NWDC, able to detain 1,600 people. 

Rinck’s emergency legislation would block the construction of that facility or any other within city limits for the next year, giving City Council time to explore more permanent restrictions on ICE expansion. SeaTac actually beat her to the punch, passing their own detention moratorium this week.

The bill treats the threat of an ICE detention center as a bureaucratic land use issue, arguing that the city needs time to address any “mitigation measures” needed to build a facility in “Seattle’s dense urban environment.” 

“We need to be using every tool at our disposal to be really ensuring that we're not [abetting] this administration's unconstitutional work and lawless agenda, even if that means looking to land use as a tool,” Rinck tells The Stranger.

Rinck says she plans to share her bill with the Local Progress network, so other cities can copy her homework. 

Rinck’s office says that Council President Joy Hollingsworth has agreed to allow the bill to skip the Land Use Committee, and instead be heard by the full council on Tuesday. Council could pass the bill as soon as February 24. 

The Port’s Anti-ICE Agenda

Earlier this week, the Port Commission passed an order requiring that Port police be clearly identifiable, so the public can't confuse them with immigration enforcement. 

Port Commissioner Hasegawa also plans to introduce two orders on February 24 to regulate how ICE can interact with the Port. The first order provides Know Your Rights education to anyone working at the airport or other Port property, like the shops and restaurants at SeaTac. Immigration enforcement unavoidably operates in those areas, Hasegawa told The Stranger, and this order gives those workers the best chance to protect themselves and their colleagues. 

The second mirrors the orders from Mayor Wilson and County Executive Zahilay: banning immigration enforcement from expanding their use of Port land for their operations. The presence of immigration at the Port is, again, unavoidable, Hasegawa acknowledges, but “the use of Port properties is narrow, and that it has to have an industrial purpose for one of our industries, our industry is not the prison industrial complex,” she says. 

The commission could vote on both orders the day they're introduced, and Hasegawa says she's confident they'll pass. 

The Ban from County Property

Mosqueda’s bill would lock Zahilay’s executive order into law, blocking ICE from entering (without a warrant) non-public areas of buildings, parking lots, garages, and vacant lots. Those spaces also can’t be used as an ICE staging area, or to process detainees. The bill would also require that County Executive Zahilay identify properties that ICE is likely to try to commandeer, and preemptively plan for better security measures. 

Mosqueda also accounted for private land. One whole section of the bill is dedicated to designing a template that reads: 

"This property is a Stand Together King County partner.  No agent of the federal government, including Immigration and Customs Enforcement (ICE), may enter these premises for purposes of civil immigration enforcement, absent a valid judicial warrant or court order.  This property may not be used for civil immigration enforcement operations, including as a staging area, processing location, or operations base."

You can also just write that on your door with some printer paper and a Sharpie, as we saw all over Minneapolis in the last few weeks. 

16:42

Slog AM: SPD's May Day USA Fuck Up, EPA Won't Regulate Greenhouse Gases Anymore, Michigan Town Must Slaughter Park Deer [The Stranger]

The Stranger's morning news roundup. by Nathalie Graham

Protect and Serve? Last summer's May Day USA event, the right-wing Christian extremist event held in Cal Anderson, was a clusterfuck largely because of the Seattle Police Department's biases against the people of this city, a new report found. SPD apparently didn't see what the big deal was about holding an anti-LGBT rally in the park—they "weren't familiar with the neighborhood's history," according to PubliCola. They viewed May Day USA as a "church group" and the counterprotesters as "antifa." They entered the event with an "anticipatory defensiveness" toward the counterprotesters—who they started referring to as "transtifa" after hearing May Day USA security use the term. SPD—which is largely made up of people who do not live in Seattle—also shared information with May Day USA security. This big mess of bias and animosity toward the people SPD is supposed to protect caused an aggressive police response and 23 arrests of counterprotesters. 

Impeached: The Federal Way City Council voted 4-3 to remove Martin Moore from his post as council president. Moore posted on his official Facebook page in support of the anti-ICE student walkouts. The rest of the council did not like this. Despite public commenters speaking largely in favor of Moore's actions at a meeting Tuesday, the council sided against him. He'll still stay on council, but he's lost his presidential role. 

The Weather: Say goodbye to dry skies. The rain is back. In case you're wondering if we'll get any more wintery weather, the Seattle Times has the answer: There is no hope for Seattle snow this year. 

Well, what about ICE? In his first executive order, King County Executive Girmay Zahilay banned Immigration and Customs Enforcement from doing anything on nonpublic county-owned land. This includes "parking lots, vacant lots, buildings, and garages and prevents them from being used for staging areas, processing or operations bases." The executive order won't stop ICE if those gooners have a judicial warrant. Zahilay's order also directs $2 million to immigrant communities for "emergency food, housing, and legal aid" and steers the King County Sheriff's Department to make a plan for dealing with ICE, including how to identify undercover agents and how to respond if ICE and the public get into conflict.  

Hey, get off of there! A person in Spokane hitched a ride on an ambulance, clinging to the back of the emergency vehicle on eastbound Interstate 90. That's not the suggested way to get to the hospital. 

The EPA Is Done Regulating Greenhouse Gases: Hahaha. We are so boned. On Thursday, Donald Trump repealed the bedrock scientific finding that greenhouse gases endanger human life, thus ending our government's capacity to legally control pollution. This sweeping move means the Environmental Protection Agency can no longer regulate emissions of carbon dioxide, methane, and other greenhouse gases. It's a rejection of science and an absolutely dismal backslide as the US faces the realities of climate change: more intense storms, wildfires, droughts, and natural disasters. This, of course, is only good for "billionaire polluters," reports The Guardian.

"Jayapal Pramila Search History": Attorney General Pam Bondi had a piece of paper titled "Jayapal Pramila Search History" detailing the un-redacted Epstein files Washington state Rep. Pramila Jayapal accessed in her review of the documents. This outraged House members since it showcased the Department of Justice's alleged betrayal of the separation of powers. The DOJ confirmed it is keeping tabs on what searches lawmakers are doing in the disgusting pit that is the trove of Epstein files. "It is an outrage that [the justice department] is tracking members’ investigative steps,” said Rep. Jamie Raskin. He plans to open an inquiry into this "abuse of power." 

Antitrust Dust Up: Gail Slater, the head of the Justice Department's antitrust unit, announced her resignation. This shake-up comes as the DOJ is set to deal with corporate mergers like the tug-of-war between Netflix and Paramount Skydance over ownership of Warner Bros. Discovery. Slater's deputy in the antitrust unit also left this week. 

DHS Shutdown Imminent: With Democrats saying they won't approve more funding for DHS, the department is expected to run out of money on Saturday. Democrats are holding out until Republicans agree to implement more stringent restrictions on ICE. Agencies under DHS, like ICE and TSA, could be affected. Will the Gestapo work for free?

Honoring the Dead Is Politics: Ukrainian skeleton racer Vladyslav Heraskevych was disqualified from the Winter Olympics because he insisted on wearing a helmet honoring the athletes killed during Russia's war in Ukraine. His tribute apparently violated the Olympics' athlete expression guidelines. "I believe I am right in this case," Heraskevych told NBC News. "For me to back down is betraying [the people pictured on the helmet]."

Ukrainian skeleton slider withdrawn from Olympics after insisting on wearing helmet honoring dead athletes killed in Ukraine https://cnn.it/3ZtrjXN


— CNN (@cnn.com) February 12, 2026 at 12:47 AM

Blood Moon on the Rise: A big, red Blood Moon—which is not just a fun way of saying having your period—will gush all over Washington night skies on March 3. 

Oh Deer: The Michigan town of Iron Mountain has a deer problem. One of its parks has had a deer enclosure for 75 years. This is an odd choice since wild deer are prevalent. It's not like these people are lacking access to deer. Anyway, the enclosure is in dire need of upgrades. Like, $22,000 in one-time fixes and $16,000 in annual upkeep. The city council voted to close the pen. But, what to do with the deer? They are inbred, stupid, and diseased. They cannot be freed. So, they must be shot and killed. The people do not like this. Can't they save the deer? Probably not. 

Good for Them: The Winter Olympic village ran out of condoms in three days. More are on the way. 

A Long Read for Your Friday: You liked that Atlantic deep dive on Attorney General Pam Bondi? Then you'll love this Wall Street Journal story about what craven ghouls Department of Homeland Security Secretary Kristi Noem and her advisor Corey Lewandowski are. A tidbit: Noem fired a US Coast Guard pilot after he left her blanket on a plane, but reinstated him when she realized there was no one else to fly her home. Plus, Lewandowski has been really trying to get someone to issue him a gun. 

Happy Almost Valentine's Day: Here is a date idea. If you're looking for gifts, or a gesture, I recommend heading to Salmonberry Goods Green Grocer in Crown Heights and buying some of their handmade Valentine's Day pastries. Also, buy a bouquet while you're there. 

A Song for All You Lovers: 

16:21

New delegation for Debian's data protection team [LWN.net]

Debian Project Leader (DPL) Andreas Tille has announced a new delegation for Debian's data protection team:

Following the end of the previous delegation, Debian was left without an active Data Protection team. This situation has understandably drawn external attention and highlighted the importance of having a clearly identified point of contact for data protection matters within the project.

I am therefore very pleased to announce that new volunteers have stepped forward, allowing us to re-establish the Debian Data Protection team with a fresh delegation.

Tille had put out a call for volunteers in January after all previous members of the team had stepped down. He has appointed Aigars Mahinovs, Andrew M.A. Cater, Bart Martens, Emmanuel Arias, Gunnar Wolf, Kiran S Kunjumon, and Salvo Tomaselli as the new members of the team. The team provides a central coordination and advisory function around Debian's data handling, retention, dealing with deletion requests, and more.

15:56

The future for Tyr [OSnews]

The team behind Tyr started 2025 with little to show in our quest to produce a Rust GPU driver for Arm Mali hardware, and by the end of the year, we were able to play SuperTuxKart (a 3D open-source racing game) at the Linux Plumbers Conference (LPC). Our prototype was a joint effort between Arm, Collabora, and Google; it ran well for the duration of the event, and the performance was more than adequate for players. Thankfully, we picked up steam at precisely the right moment: Dave Airlie just announced in the Maintainers Summit that the DRM subsystem is only “about a year away” from disallowing new drivers written in C and requiring the use of Rust. Now it is time to lay out a possible roadmap for 2026 in order to upstream all of this work.

↫ Daniel Almeida at LWN.net

A very detailed look at what the team behind Tyr is trying to achieve with their Rust GPU driver for Arm Mali chips.

15:35

[$] Open-source mapping for disaster response [LWN.net]

At FOSDEM 2026, Petya Kangalova, a senior tech partnership and engagement manager for the Humanitarian OpenStreetMap Team (HOT), spoke about how the project helps people map their surroundings to assist in disaster response and humanitarian aid. The project has developed a stack of technology to help volunteers collectively map an area and add in local knowledge metadata. "One of the core things that we believe is that when we speak about disaster response or people having access to data is that they really need accessible technology that's free and open for anyone to use."

[$] The first half of the 7.0 merge window [LWN.net]

The merge window for Linux 7.0 has opened, and with it comes a number of interesting improvements and enhancements. At the time of writing, there have been 7,695 non-merge commits accepted. The 7.0 release is not special, according to the kernel's versioning scheme — just the release that comes after 6.19. Humans love symbolism and round numbers, though, so it may feel like something of a milestone.

15:28

Link [Scripting News]

News still needs to make a big transition, to become a distributed unownable thing, with every part replaceable, much like what needs to happen with the social web. This transition has been possible and necessary for about 30 years. The reporters and editors will say we're naive, but we understand what's happening. The news orgs have always been large centralized businesses, silos, and have increasingly come into conflict with the interests of their users. Who trusts what they read in the NYT, Washington Post, or Wall Street Journal? And these were at one time the best of journalism. I know the reporters also won't like this, but the quality assurance of decentralized systems will be done by AI, overseen by a non-profit organization staffed by retired journalists. And there will be lots of competition. All parts are replaceable.

14:07

Security updates for Friday [LWN.net]

Security updates have been issued by AlmaLinux (firefox, gcc-toolset-14-binutils, nodejs:20, nodejs:22, nodejs:24, php:7.4, and python3.12), Debian (haproxy, nginx, postgresql-15, and postgresql-17), Fedora (libssh), Oracle (glib2, libsoup, nodejs:20, nodejs:22, and php:7.4), SUSE (assimp, gnutls, helm, kernel, kubevirt, virt-api-container, virt-controller-container, virt-exportproxy-container, virt-exportserver-container, virt-handler-container, virt-launcher-container, virt-libguestfs-t, libmunge2, libsodium, libsoup, micropython, munge, openCryptoki, python-azure-core, rust-keylime, rustup, sccache, snpguest, tcpreplay, xorg-x11-server, xrdp, and zabbix), and Ubuntu (dnsdist, dotnet8, dotnet9, dotnet10, haproxy, libpng1.6, linux-aws-5.15, linux-azure, linux-azure-fips, linux-oracle, linux-oracle-5.4, munge, nginx, and node-dottie).

13:56

Error'd: Cruel Brittanica [The Daily WTF]

"No browser is the best browser," opines Michael R. sarcastically as per usual for tdwtf. "Thank you for suggesting a browser. FWIW: neither latest Chrome, Safari, Firefox, Opera work. Maybe I should undust my Netscape."


An anonymous dessert lover ruminates "The icing on the cake is that it's for HR where names can be quite important. Just goes to show that not even SAP can do SAP."


Another anonymous dessert lover (because honestly, who isn't) cheers "2024 is back again."


Thrice capitalled B.J.H. capitulates. "I guess I'm not cleared to know what topic I subscribed to."


Jeopardy fan Jeremy P. digs a quick quiz.

It's from Britannica.com. I thought "TV remote control" because it would effectively turn off the TV. The correct answer is toaster.

To understand what went wrong, the previous correct answer was "blunderbuss".

Apparently this is a test for clairvoyance, which would have come in handy.

For a bonus buzz, Jeremy sent in another.


This time it's "guess the novel from the picture". There was a subtle clue in this one.


You're a monster, Jeremy. 


13:07

Conductors to Orchestrators: The Future of Agentic Coding [Radar]

This post first appeared on Addy Osmani’s Elevate Substack newsletter and is being republished here with the author’s permission.

AI coding assistants have quickly moved from novelty to necessity: up to 90% of software engineers now use some kind of AI for coding. But a new paradigm is emerging in software development—one where engineers leverage fleets of autonomous coding agents. In this agentic future, the role of the software engineer is evolving from implementer to manager, or in other words, from coder to conductor and ultimately orchestrator.

Over time, developers will increasingly guide AI agents to build the right code and coordinate multiple agents working in concert. This write-up explores the distinction between conductors and orchestrators in AI-assisted coding, defines these roles, and examines how today’s cutting-edge tools embody each approach. Senior engineers may start to see the writing on the wall: Our jobs are shifting from “How do I code this?” to “How do I get the right code built?”—a subtle but profound change.

Will every engineer become an orchestrator?

What’s the tl;dr of an orchestrator tool? It supports multi-agent workflows where you can run many agents in parallel without them interfering with each other. But let’s talk terminology first.

The Conductor: Guiding a Single AI Agent

In the context of AI coding, acting as a conductor means working closely with a single AI agent on a specific task, much like a conductor guiding a soloist through a performance.

The engineer remains in the loop at each step, dynamically steering the agent’s behavior, tweaking prompts, intervening when needed, and iterating in real time. This is the logical extension of the “AI pair programmer” model many developers are already familiar with. With conductor-style workflows, coding happens in a synchronous, interactive session between human and AI, typically in your IDE or CLI.

Key characteristics: A conductor keeps a tight feedback loop with one agent, verifying or modifying each suggestion, much as a driver navigates with a GPS. The AI helps write code, but the developer still performs many manual steps—creating branches, running tests, writing commit messages, etc.—and ultimately decides which suggestions to accept.

Crucially, most of this interaction is ephemeral: Once code is written and the session ends, the AI’s role is done and any context or decisions not captured in code may be lost. This mode is powerful for focused tasks and allows fine-grained control, but it doesn’t fully exploit what multiple AIs could do in parallel.
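The conductor loop described above is easy to state in code. The following is a purely illustrative sketch, not any vendor's API: `suggest_patch` is a hypothetical stand-in for a single-agent backend (Claude Code, Gemini CLI, Cursor, etc.), and `approve` is the human in the loop.

```python
# Purely illustrative sketch of the conductor loop described above.
# `suggest_patch` is a hypothetical stand-in for a single-agent
# backend; no real vendor API is used here.

def suggest_patch(task: str) -> str:
    """Stub agent: propose a change for one narrowly scoped task."""
    return f"patch for: {task}"

def conduct(tasks, approve):
    """Synchronous conductor loop: one suggestion at a time, each
    reviewed by the human before the next step begins."""
    accepted = []
    for task in tasks:
        patch = suggest_patch(task)
        if approve(patch):  # the human stays in the loop at every step
            accepted.append(patch)
    return accepted

# The "human" here approves only the bug-fix patch:
print(conduct(["fix bug", "add test"], approve=lambda p: "bug" in p))
```

The point of the sketch is the control flow: nothing advances until the human has seen, and ruled on, each individual suggestion.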

Modern tools as conductors

Several current AI coding tools exemplify the conductor pattern:

  • Claude Code (Anthropic): Anthropic’s Claude model offers a coding assistant mode (accessible via a CLI tool or editor integration) where the developer converses with Claude to generate or modify code. For example, with the Claude Code CLI, you navigate your project in a shell and ask Claude to implement a function or refactor code, and it prints diffs or file updates for you to approve. You remain the conductor: You trigger each action and review the output immediately. While Claude Code has features to handle long-running tasks and tools, in the basic usage it’s essentially a smart codeveloper working step-by-step under human direction.
  • Gemini CLI (Google): A command-line assistant powered by Google’s Gemini model, used for planning and coding with a very large context window. An engineer can prompt Gemini CLI to analyze a codebase or draft a solution plan, then iterate on results interactively. The human directs each step and Gemini responds within the CLI session. It’s a one-at-a-time collaborator, not running off to make code changes on its own (at least in this conductor mode).
  • Cursor (editor AI assistant): The Cursor editor (a specialized AI-augmented IDE) can operate in an inline or chat mode where you ask it questions or to write a snippet, and it immediately performs those edits or gives answers within your coding session. Again, you guide it one request at a time. Cursor’s strength as a conductor is its deep context integration—it indexes your whole codebase so the AI can answer questions about any part of it. But the hallmark is that you, the developer, initiate and oversee each change in real time.
  • VS Code, Cline, Roo Code (in-IDE chat): Similar to above, other coding agents also fall into this category. They suggest code or even multistep fixes, but always under continuous human guidance.

This conductor-style AI assistance has already boosted productivity significantly. It feels like having a junior engineer or pair programmer always by your side. However, it’s inherently one-agent-at-a-time and synchronous. To truly leverage AI at scale, we need to go beyond being a single-agent conductor. This is where the orchestrator role comes in.

Engineer as conductor, engineer as orchestrator

The Orchestrator: Managing a Fleet of Agents

If a conductor works with one AI “musician,” an orchestrator oversees the entire symphony of multiple AI agents working in parallel on different parts of a project. The orchestrator sets high-level goals, defines tasks, and lets a team of autonomous coding agents independently carry out the implementation details.

Instead of micromanaging every function or bug fix, the human focuses on coordination, quality control, and integration of the agents’ outputs. In practical terms, this often means an engineer can assign tasks to AI agents (e.g., via issues or prompts) and have those agents asynchronously produce code changes—often as ready-to-review pull requests. The engineer’s job becomes reviewing, giving feedback, and merging the results rather than writing all the code personally.

This asynchronous, parallel workflow is a fundamental shift. It moves AI assistance from the foreground to the background. While you attend to higher-level design or other work, your “AI team” is coding in the background. When they’re done, they hand you completed work (with tests, docs, etc.) for review. It’s akin to being a project tech lead delegating tasks to multiple devs and later reviewing their pull requests, except the “devs” are AI agents.
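By contrast with the conductor's one-at-a-time loop, the delegation pattern described here can be sketched in a few lines. Again this is purely illustrative: `agent_run` is a hypothetical stand-in for an autonomous agent working a task end-to-end, and real tools hand back pull requests rather than dicts.

```python
# Purely illustrative sketch of the orchestrator pattern described
# above: tasks are dispatched in parallel, and the human reviews the
# finished work afterwards. `agent_run` is a hypothetical stub.
from concurrent.futures import ThreadPoolExecutor

def agent_run(task: str) -> dict:
    """Stub agent: work a task end-to-end and hand back a 'PR'."""
    return {"task": task, "pr": f"PR: {task} (tests passing)"}

def orchestrate(tasks):
    """Dispatch every task concurrently; collect results when done.
    The human's job shifts from writing code to reviewing output."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(agent_run, tasks))

for result in orchestrate(["implement feature X", "fix bug Y"]):
    print(result["pr"])  # the review-and-merge step happens here
```

The structural difference from the conductor sketch is that the human appears only at the end, over a batch of completed results, rather than inside the loop at every step.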

Modern tools as orchestrators

Over just the past year, several tools have emerged that embody this orchestrator paradigm:

  • GitHub Copilot coding agent (Microsoft): This upgrade to Copilot transforms it from an in-editor assistant into an autonomous background developer. (I cover it in this video.) You can assign a GitHub issue to Copilot’s agent or invoke it via the VS Code agents panel, telling it (for example) “Implement feature X” or “Fix bug Y.” Copilot then spins up an ephemeral dev environment via GitHub Actions, checks out your repo, creates a new branch, and begins coding. It can run tests, linters, even spin up the app if needed, all without human babysitting. When finished, it opens a pull request with the changes, complete with a description and meaningful commit messages. It then asks for your review.

    You, the human orchestrator, review the PR (perhaps using Copilot’s AI-assisted code review to get an initial analysis). If changes are needed, you can leave comments like “@copilot please update the unit tests for edge case Z,” and the agent will iterate on the PR. This is asynchronous, autonomous code generation in action. Notably, Copilot automates the tedious bookkeeping—branch creation, committing, opening PRs, etc.—which used to cost developers time. All the grunt work around writing code (aside from the design itself) is handled, allowing developers to focus on reviewing and guiding at a high level. GitHub’s agent effectively lets one engineer supervise many “AI juniors” working in parallel across different issues (and you can even create multiple specialized agents for different task types).
  • Jules, Google’s coding agent: In Google’s words, Jules is “not a copilot, not a code-completion sidekick, but an autonomous agent that reads your code, understands your intent, and gets to work.” Integrated with Google Cloud and GitHub, Jules lets you connect a repository and then ask it to perform tasks much as you would a developer on your team. Under the hood, Jules clones your entire codebase into a secure cloud VM and analyzes it with a powerful model. You might tell Jules “Add user authentication to our app” or “Upgrade this project to the latest Node.js and fix any compatibility issues.” It will formulate a plan, present it to you for approval, and once you approve, execute the changes asynchronously. It makes commits on a new branch and can even open a pull request for you to merge. Jules handles writing new code, updating tests, bumping dependencies, etc., all while you could be doing something else.

    Crucially, Jules provides transparency and control: It shows you its proposed plan and reasoning before making changes, and allows you to intervene or modify instructions at any point (a feature Google calls “user steerability”). This is akin to giving an AI intern the spec and watching over their shoulder less frequently—you trust them to get it mostly right, but you still verify the final diff. Jules also boasts unique touches like audio changelogs (it generates spoken summaries of code changes) and the ability to run multiple tasks concurrently in the cloud. In short, Google’s Jules demonstrates the orchestrator model: You define the task, Jules does the heavy lifting asynchronously, and you oversee the result.
  • OpenAI Codex (cloud agent): OpenAI introduced a new cloud-based Codex agent to complement ChatGPT. This evolved Codex (different from the 2021 Codex model) is described as “a cloud-based software engineering agent that can work on many tasks in parallel.” It’s available as part of ChatGPT Plus/Pro under the name OpenAI Codex and via an npm CLI (npm i -g @openai/codex). With the Codex CLI or its VS Code/Cursor extensions, you can delegate tasks to OpenAI’s agent similar to Copilot or Jules. For instance, from your terminal you might say, “Hey Codex, implement dark mode for the settings page.” Codex then launches into your repository, edits the necessary files, perhaps runs your test suite, and when done, presents the diff for you to merge. It operates in an isolated sandbox for safety, running each task in a container with your repo and environment.

    Like others, OpenAI’s Codex agent integrates with developer workflows: You can even kick off tasks from a ChatGPT mobile app on your phone and get notified when the agent is done. OpenAI emphasizes seamless switching “between real-time collaboration and async delegation” with Codex. In practice, this means you have the flexibility to use it in conductor mode (pair-programming in your IDE) or orchestrator mode (hand off a background task to the cloud agent). Codex can also be invited into your Slack channels—teammates can assign tasks to @Codex in Slack, and it will pull context from the conversation and your repo to execute them. It’s a vision of ubiquitous AI assistance, where coding tasks can be delegated from anywhere. Early users report that Codex can autonomously identify and fix bugs, or generate significant features, given a well-scoped prompt. All of this again aligns with the orchestrator workflow: The human defines the goal; the AI agent autonomously delivers a solution.
  • Anthropic Claude Code (for web): Anthropic has offered Claude as an AI chatbot for a while, and their Claude Code CLI has been a favorite for interactive coding. Anthropic took the next step by launching Claude Code for web, effectively a hosted version of their coding agent. Using Claude Code for web, you point it at your GitHub repo (with configurable sandbox permissions) and give it a task. The agent then runs in Anthropic’s managed container, just like the CLI version, but now you can trigger it from a web interface or even a mobile app. It queues up multiple prompts and steps, executes them, and when done, pushes a branch to your repo (and can open a PR). Essentially, Anthropic took their single-agent Claude Code and made it an orchestratable service in the cloud. They even provided a “teleport” feature to transfer the session to your local environment if you want to take over manually.

    The rationale for this web version aligns with orchestrator benefits: convenience and scale. You don’t need to run long jobs on your machine; Anthropic’s cloud handles the heavy lifting, with filesystem and network isolation for safety. Claude Code for web acknowledges that autonomy with safety is key—by sandboxing the agent, they reduce the need for constant permission prompts, letting the agent operate more freely (less babysitting by the user). In effect, Anthropic has made it easier to use Claude as an autonomous coding worker you launch on demand.
  • Cursor background agents: tl;dr: Cursor 2.0’s multi-agent interface is organized around agents rather than files. Cursor 2 expands its background agents feature into a full-fledged orchestration layer for developers. Beyond serving as an interactive assistant, Cursor 2 lets you spawn autonomous background agents that operate asynchronously in a managed cloud workspace. When you delegate a task, Cursor 2’s agents now clone your GitHub repository, spin up an ephemeral environment, and check out an isolated branch where they execute work end-to-end. These agents can handle the entire development loop—from editing and running code to installing dependencies, executing tests, running builds, and even searching the web or referencing documentation to resolve issues. Once complete, they push commits and open a detailed pull request summarizing their work.

    Cursor 2 introduces multi-agent orchestration, allowing several background agents to run concurrently across different tasks—for instance, one refining UI components while another optimizes backend performance or fixes tests. Each agent’s activity is visible through a real-time dashboard that can be accessed from desktop or mobile, enabling you to monitor progress, issue follow-ups, or intervene manually if needed. This new system effectively treats each agent as part of an on-demand AI workforce, coordinated through the developer’s high-level intent. Cursor 2’s focus on parallel, asynchronous execution dramatically amplifies a single engineer’s throughput—fully realizing the orchestrator model where humans oversee a fleet of cooperative AI developers rather than a single assistant.
  • Agent orchestration platforms: Beyond individual product offerings, there are also emerging platforms and open source projects aimed at orchestrating multiple agents. For instance, Conductor by Melty Labs (despite its name!) is actually an orchestration tool that lets you deploy and manage multiple Claude Code agents on your own machine in parallel. With Conductor, each agent gets its own isolated Git worktree to avoid conflicts, and you can see a dashboard of all agents (“who’s working on what”) and review their code as they progress. The idea is to make running a small swarm of coding agents as easy as running one. Similarly, Claude Squad is a popular open source terminal app that essentially multiplexes Anthropic’s Claude—it can spawn several Claude Code instances working concurrently in separate tmux panes, allowing you to give each a different task and thus code “10x faster” by parallelizing. These orchestration tools underscore the trend: Developers want to coordinate multiple AI coding agents and have them collaborate or divide work. Even Microsoft’s Azure AI services are enabling this: At Build 2025 they announced tools for developers to “orchestrate multiple specialized agents to handle complex tasks,” with SDKs supporting agent-to-agent communication so your fleet of agents can talk to each other and share context. All of this infrastructure is being built to support the orchestrator engineer, who might eventually oversee dozens of AI processes tackling different parts of the software development lifecycle.

I found Conductor to make the most sense to me. It was a perfect balance of talking to an agent and seeing my changes in a pane next to it. Its GitHub integration feels seamless; e.g., after merging a PR, it immediately showed the task as “Merged” and provided an “Archive” button.
Juriy Zaytsev, Staff SWE, LinkedIn

He also tried Magnet:

The idea of tying tasks to a Kanban board is interesting and makes sense. As such, Magnet feels very product-centric.

Conductor versus Orchestrator—Differences

Many engineers will continue to engage in conductor-style workflows (single agent, interactive) even as orchestrator patterns mature. The two modes will coexist.

It’s clear that “conductor” and “orchestrator” aren’t just fancy terms; they describe a genuine shift in how we work with AI.

  • Scope of control: A conductor operates at the micro level, guiding one agent through a single task or a narrow problem. An orchestrator operates at the macro level, defining broader tasks and objectives for multiple agents or for a powerful single agent that can handle multistep projects. The conductor asks, “How do I solve this function or bug with the AI’s help?” The orchestrator asks, “What set of tasks can I delegate to AI agents today to move this project forward?”
  • Degree of autonomy: In conductor mode, the AI’s autonomy is low—it waits for user prompts each step of the way. In orchestrator mode, we give the AI high autonomy—it might plan and execute dozens of steps internally (writing code, running tests, adjusting its approach) before needing human feedback. A GitHub Copilot agent or Jules will try to complete a feature from start to finish once assigned, whereas Copilot’s IDE suggestions only go line-by-line as you type.
  • Synchronous vs asynchronous: Conductor interactions are typically synchronous—you prompt; AI responds within seconds; you immediately integrate or iterate. It’s a real-time loop. Orchestrator interactions are asynchronous—you might dispatch an agent and check back minutes or hours later when it’s done (somewhat like kicking off a long CI job). This means orchestrators must handle waiting, context-switching, and possibly managing multiple things concurrently, which is a different workflow rhythm for developers.
  • Artifacts and traceability: A subtle but important difference: Orchestrator workflows produce persistent artifacts like branches, commits, and pull requests that are preserved in version control. The agent’s work is fully recorded (and often linked to an issue/ticket), which improves traceability and collaboration. With conductor-style (IDE chat, etc.), unless the developer manually commits intermediate changes, a lot of the AI’s involvement isn’t explicitly documented. In essence, orchestrators leave a paper trail (or rather a Git trail) that others on the team can see or even trigger themselves. This can help bring AI into team processes more naturally.
  • Human effort profile: For a conductor, the human is actively engaged nearly 100% of the time the AI is working—reviewing each output, refining prompts, etc. It’s interactive work. For an orchestrator, the human’s effort is front-loaded (writing a good task description or spec for the agent, setting up the right context) and back-loaded (reviewing the final code and testing it), but not much is needed in the middle. This means one orchestrator can manage more total work in parallel than would ever be possible by working with one AI at a time. Essentially, orchestrators leverage automation at scale, trading off fine-grained control for breadth of throughput.

To illustrate, consider a common scenario: adding a new feature that touches frontend and backend and requires new tests. As a conductor, you might open your AI chat and implement the backend logic with the AI’s help, then separately implement the frontend, then ask it to generate some tests—doing each step sequentially with you in the loop throughout. As an orchestrator, you could assign the backend implementation to one agent (Agent A), the frontend UI changes to another (Agent B), and test creation to a third (Agent C). You give each a prompt or an issue description, then step back and let them work concurrently.

After a short time, you get perhaps three PRs: one for backend, one for frontend, one for tests. Your job then is to review and integrate them (and maybe have Agent C adjust tests if Agents A/B’s code changed during integration). In effect, you managed a mini “AI team” to deliver the feature. This example highlights how orchestrators think in terms of task distribution and integration, whereas conductors focus on step-by-step implementation.
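The fan-out/fan-in rhythm of that scenario can be sketched with ordinary concurrency primitives. Everything below is hypothetical scaffolding (a real `run_agent` would call whatever agent API or CLI you use); the point is the shape of the workflow: dispatch tasks in parallel, wait, then review.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(name: str, task: str) -> dict:
    """Placeholder for dispatching a coding agent; a real version would
    call an agent API and eventually return a pull-request reference."""
    return {"agent": name, "task": task, "status": "pr_opened"}

tasks = {
    "Agent A": "implement the backend logic",
    "Agent B": "make the frontend UI changes",
    "Agent C": "write tests for the new feature",
}

# Orchestrator mode: dispatch all three tasks concurrently...
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(run_agent, name, task) for name, task in tasks.items()]
    prs = [f.result() for f in futures]

# ...then the human steps back in to review and integrate the PRs.
for pr in prs:
    print(f"{pr['agent']}: {pr['task']} -> {pr['status']}")
```

The conductor-mode equivalent would be three sequential calls with a review step after each one; the code above only differs in that the waiting happens once, at the end.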

It’s worth noting that these roles are fluid, not rigid categories. A single developer might act as a conductor in one moment and an orchestrator the next. For example, you might kick off an asynchronous agent to handle one task (orchestrator mode) while you personally work with another AI on a tricky algorithm in the meantime (conductor mode). Tools are also blurring lines: As OpenAI’s Codex marketing suggests, you can seamlessly switch between collaborating in real-time and delegating async tasks. So, think of “conductor” versus “orchestrator” as two ends of a spectrum of AI-assisted development, with many hybrid workflows in between.

Why Orchestrators Matter

Experts are suggesting that this shift to orchestration could be one of the biggest leaps in programming productivity we’ve ever seen. Consider the historical trends: We went from writing assembly to using high-level languages, then to using frameworks and libraries, and recently to leveraging AI for autocompletion. Each step abstracted away more low-level work. Autonomous coding agents are the next abstraction layer. Instead of manually coding every piece, you describe what you need at a higher level and let multiple agents build it.

As orchestrator-style agents ramp up, we could imagine even larger percentages of code being drafted by AIs. What does a software team look like when AI agents generate, say, 80% or 90% of the code, and humans provide the 10% critical guidance and oversight? Many believe it doesn’t mean replacing developers—it means augmenting developers to build better software. We may witness an explosion of productivity where a small team of engineers, effectively managing dozens of agent processes, can accomplish what once took an army of programmers months. (Note: I still believe the code review loop, where our human skills will stay focused, is going to need work if all this code is not to be slop.)

One intriguing possibility is that every engineer becomes, to some degree, a manager of AI developers. It’s a bit like everyone having a personal team of interns or junior engineers. Your effectiveness will depend on how well you can break down tasks, communicate requirements to AI, and verify the results. Human judgment will remain vital: deciding what to build, ensuring correctness, handling ambiguity, and injecting creativity or domain knowledge where AI might fall short. In other words, the skillset of an orchestrator—good planning, prompt engineering, validation, and oversight—is going to be in high demand. Far from making engineers obsolete, these agents could elevate engineers into more strategic, supervisory roles on projects.

Toward an “AI Team” of Specialists

Today’s coding agents mostly tackle implementation: write code, fix code, write tests, etc. But the vision doesn’t stop there. Imagine a full software development pipeline where multiple specialized AI agents handle different phases of the lifecycle, coordinated by a human orchestrator. This is already on the horizon. Researchers and companies have floated architectures where, for example, you have:

  • A planning agent that analyzes feature requests or bug reports and breaks them into specific tasks
  • A coding agent (or several) that implements the tasks in code
  • A testing agent that generates and runs tests to verify the changes
  • A code review agent that checks the pull requests for quality and standards compliance
  • A documentation agent that updates README or docs to reflect the changes
  • Possibly a deployment/monitoring agent that can roll out the change and watch for issues in production

In this scenario, the human engineer’s role becomes one of oversight and orchestration across the whole flow: You might initiate the process with a high-level goal (e.g., “Add support for payment via cryptocurrency in our app”); the planning agent turns that into subtasks; coding agents implement each subtask asynchronously; the testing agent and review agent catch problems or polish the code; and finally everything gets merged and deployed under watch of monitoring agents.

The human would step in to approve plans, resolve any conflicts or questions the agents raise, and give final approval to deploy. This is essentially an “AI swarm” tackling software development end to end, with the engineer as the conductor of the orchestra.
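As a sketch, the specialist roles above compose like pipeline stages. All names here are hypothetical stand-ins; in a real system each function would be a separate agent with its own model, tools, and human approval gates between stages.

```python
def plan(goal: str) -> list[str]:
    """Planning agent: break a high-level goal into subtasks (stubbed)."""
    return [f"{goal}: backend", f"{goal}: frontend", f"{goal}: tests"]

def implement(subtask: str) -> dict:
    """Coding agent: produce a change for one subtask (stubbed)."""
    return {"subtask": subtask, "patch": f"<diff for {subtask}>"}

def review(change: dict) -> dict:
    """Review agent: approve or flag a change (stubbed as always-approve)."""
    return {**change, "approved": True}

def orchestrate(goal: str) -> list[dict]:
    # In practice, human oversight sits between these stages:
    # approving the plan, answering agent questions, gating the merge.
    return [review(implement(t)) for t in plan(goal)]

changes = orchestrate("add cryptocurrency payments")
```

The value of writing it down this way is that the human touchpoints become explicit: anywhere a stage's output feeds the next stage is a place an approval or a check could be inserted.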

While this might sound futuristic, we see early signs. Microsoft’s Azure AI Foundry now provides building blocks for multi-agent workflows and agent orchestration in enterprise settings, implicitly supporting the idea that multiple agents will collaborate on complex, multistep tasks. Internal experiments at tech companies have agents creating pull requests that other agent reviewers automatically critique, forming an AI/AI interaction with a human in the loop at the end. In open source communities, people have chained tools like Claude Squad (parallel coders) with additional scripts that integrate their outputs. And the conversation has started about standards like the Model Context Protocol (MCP) for agents sharing state and communicating results to each other.

I’ve noted before that “specialized agents for Design, Implementation, Test, and Monitoring could work together to develop, launch, and land features in complex environments”—with developers onboarding these AI agents to their team and guiding/overseeing their execution. In such a setup, agents would “coordinate with other agents autonomously, request human feedback, reviews and approvals” at key points, and otherwise handle the busywork among themselves. The goal is a central platform where we can deploy specialized agents across the workflow, without humans micromanaging each individual step—instead, the human oversees the entire operation with full context.

This could transform how software projects are managed: more like running an automated assembly line where engineers ensure quality and direction rather than handcrafting each component on the line.

Challenges and the Human Role in Orchestration

Does this mean programming becomes a push-button activity where you sit back and let the AI factory run? Not quite—and likely never entirely. There are significant challenges and open questions with the orchestrator model:

  • Quality control and trust: Orchestrating multiple agents means you’re not eyeballing every single change as it’s made. Bugs or design flaws might slip through if you solely rely on AI. Human oversight remains critical as the final failsafe. Indeed, current tools explicitly require the human to review the AI’s pull requests before merging. The relationship is often compared to managing a team of junior developers: They can get a lot done, but you wouldn’t ship their code without review. The orchestrator engineer must be vigilant about checking the AI’s work, writing good test cases, and having monitoring in place. AI agents can make mistakes or produce logically correct but undesirable solutions (for instance, implementing a feature in a convoluted way). Part of the orchestration skillset is knowing when to intervene versus when to trust the agent’s plan. As the CTO of Stack Overflow wrote, “Developers maintain expertise to evaluate AI outputs” and will need new “trust models” for this collaboration.
  • Coordination and conflict: When multiple agents work on a shared codebase, coordination issues arise—much like multiple developers can conflict if they touch the same files. We need strategies to prevent merge conflicts or duplicated work. Current solutions use workspace isolation (each agent works on its own Git branch or separate environment) and clear task separation: for example, one agent per task, with tasks designed to minimize overlap. Some orchestrator tools can even automatically merge changes or rebase agent branches, but usually it falls to the human to integrate. Ensuring agents don’t step on each other’s toes is an active area of development. It’s conceivable that in the future agents might negotiate with each other (via something like agent-to-agent communication protocols) to avoid conflicts, but today the orchestrator sets the boundaries.
  • Context, shared state, and handoffs: Coding workflows are rich in state: repository structure, dependencies, build systems, test suites, style guidelines, team practices, legacy code, branching strategies, etc. Multi-agent orchestration demands shared context, memory, and smooth transitions. But in enterprise settings, context sharing across agents is nontrivial. Without a unified “workflow orchestration layer,” each agent can become a silo, working well in its domain but failing to mesh. In a coding-engineering team this may translate into: One agent creates a feature branch; another one runs unit tests; another merges into master—if the first agent doesn’t tag metadata the second is expecting, you get breakdowns.
  • Prompting and specifications: Ironically, as the AI handles more coding, the human’s “coding” moves up a level to writing specifications and prompts. The quality of an agent’s output is highly dependent on how well you specify the task. Vague instructions lead to subpar results or agents going astray. Best practices that have emerged include writing mini design docs or acceptance criteria for the agents—essentially treating them like contractors who need a clear definition of done. This is why we’re seeing ideas like spec-driven development for AI: You feed the agent a detailed spec of what to build, so it can execute predictably. Engineers will need to hone their ability to describe problems and desired solutions unambiguously. Paradoxically, it’s a very old-school skill (writing good specs and tests) made newly important in the AI era. As agents improve, prompts might get simpler (“write me a mobile app for X and Y with these features”) and yet yield more complex results, but we’re not quite at the point of the AI intuiting everything unsaid. For now, orchestrators must be excellent communicators to their digital workforce.
  • Tooling and debugging: With a human developer, if something goes wrong, they can debug in real time. With autonomous agents, if something goes wrong (say the agent gets stuck on a problem or produces a failing PR), the orchestrator has to debug the situation: Was it a bad prompt? Did the agent misinterpret the spec? Do we roll back and try again or step in and fix it manually? New tools are being added to help here: For instance, checkpointing and rollback commands let you undo an agent’s changes if it went down a wrong path. Monitoring dashboards can show if an agent is taking too long or has errors. But effectively, orchestrators might at times have to drop down to conductor mode to fix an issue, then go back to orchestration. This interplay will improve as agents get more robust, but it highlights that orchestrating isn’t just “fire and forget”—it requires active monitoring. AI observability tools (tracking cost, performance, accuracy of agents) are likely to become part of the developer’s toolkit.
  • Ethics and responsibility: Another angle—if an AI agent writes most of the code, who is responsible for license compliance, security vulnerabilities, or bias in that code? Ultimately the human orchestrator (or their organization) carries responsibility. This means orchestrators should incorporate practices like security scanning of AI-generated code and verifying dependencies. Interestingly, some agents like Copilot and Jules include built-in safeguards: They won’t introduce known vulnerable versions of libraries, for instance, and can be directed to run security audits. But at the end of the day, “trust, but verify” is the mantra. The human remains accountable for what ships, so orchestrators will need to ensure AI contributions meet the team’s quality and ethical standards.
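Of these challenges, workspace isolation is the most mechanical to demonstrate. A minimal sketch, assuming a local Git repository with at least one commit: give each agent its own branch and its own worktree, so concurrent agents never edit the same checkout. The function name and path convention are made up for illustration.

```python
import subprocess

def create_agent_workspace(repo: str, agent: str) -> str:
    """Create an isolated branch + worktree for one agent.

    Each agent edits its own checkout; merging the agents' branches
    back together remains the orchestrator's (human's) job.
    """
    branch = f"agent/{agent}"
    worktree = f"{repo}-{agent}"  # sibling directory of the main repo
    subprocess.run(
        ["git", "-C", repo, "worktree", "add", "-b", branch, worktree],
        check=True, capture_output=True,
    )
    return worktree
```

Tools like Conductor automate exactly this bookkeeping (plus the dashboard on top); the underlying mechanism is the same per-agent worktree.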

In summary, the rise of orchestrator-style development doesn’t remove the human from the loop—it changes the human’s position in the loop. We move from being the one turning the wrench to the one designing and supervising the machine that turns the wrench. It’s a higher-leverage position, but also one that demands broader awareness.

Developers who adapt to being effective conductors and orchestrators of AI will likely be even more valuable in this new landscape.

Conclusion: Is Every Engineer a Maestro?

Will every engineer become an orchestrator of multiple coding agents? It’s a provocative question, but trends suggest we’re headed that way for a large class of programming tasks. The day-to-day reality of a software engineer in the late 2020s could involve less heads-down coding and more high-level supervision of code that’s mostly written by AIs.

Today we’re already seeing early adopters treating AI agents as teammates—for example, some developers report delegating 10+ pull requests per day to AI, effectively treating the agent as an independent teammate rather than a smart autocomplete. Those developers free themselves to focus on system design, tricky algorithms, or simply coordinating even more work.

That said, the transition won’t happen overnight for everyone. Junior developers might start as “AI conductors,” getting comfortable working with a single agent before they take on orchestrating many. Seasoned engineers are more likely to early-adopt orchestrator workflows, since they have the experience to architect tasks and evaluate outcomes. In many ways, it mirrors career growth: Junior engineers implement (now with AI help); senior engineers design and integrate (soon with AI agent teams).

The tools we discussed—from GitHub’s coding agent to Google’s Jules to OpenAI’s Codex—are rapidly lowering the barrier to try this approach, so expect it to go mainstream quickly. Hyperbole aside, there’s truth to the claim that these capabilities can dramatically amplify what an individual developer can do.

So, will we all be orchestrators? Probably to some extent—yes. We’ll still write code, especially for novel or complex pieces that defy simple specification. But much of the boilerplate, routine patterns, and even a lot of sophisticated glue code could be offloaded to AI. The role of “software engineer” may evolve to emphasize product thinking, architecture, and validation, with the actual coding being a largely automated act. In this envisioned future, asking an engineer to crank out thousands of lines of mundane code by hand would feel as inefficient as asking a modern accountant to calculate ledgers with pencil and paper. Instead, the engineer would delegate that to their AI agents and focus on the creative and critical-thinking aspects around it.

BTW, yes, there’s plenty to be cautious about. We need to ensure these agents don’t introduce more problems than they solve. And the developer experience of orchestrating multiple agents is still maturing—it can be clunky at times. But the trajectory is clear. Just as continuous integration and automated testing became standard practice, continuous delegation to AI could become a normal part of the development process. The engineers who master both modes—knowing when to be a precise conductor and when to scale up as an orchestrator—will be in the best position to leverage this “agentic” world.

One thing is certain: The way we build software in the next 5–10 years will look quite different from the last 10. I want to stress that not all or most code will be agent-driven within a year or two, but that’s a direction we’re heading in. The keyboard isn’t going away, but alongside our keystrokes we’ll be issuing high-level instructions to swarms of intelligent helpers. In the end, the human element remains irreplaceable: It’s our judgment, creativity, and understanding of real-world needs that guides these AI agents toward meaningful outcomes.

The future of coding isn’t AI or human; it’s AI and human—with humans at the helm as conductors and orchestrators, directing a powerful ensemble to achieve our software ambitions.

I’m excited to share that I’ve written an AI-assisted engineering book with O’Reilly. If you’ve enjoyed my writing here you may be interested in checking it out.

Generative AI in the Real World: Fabiana Clemente on Synthetic Data for AI and Agentic Systems [Radar]

Synthetic data has been around for a long time, decades even. But as KPMG’s Fabiana Clemente points out, “That doesn’t mean there aren’t a lot of misconceptions.” Fabiana sat down with Ben to clarify some of the current applications of synthetic data and new directions the field is taking—working with offshore teams when privacy controls just don’t allow you to share actual datasets, improving fraud detection, building simulation models of the physical world, enabling multi-agent architectures. The takeaway? Whether your data’s synthetic or from the real world, success often comes down to the processes you’ve established to build data solutions. Watch now.

About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2026, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

Check out other episodes of this podcast on the O’Reilly learning platform or follow us on YouTube, Spotify, Apple, or wherever you get your podcasts.

Transcript

This transcript was created with the help of AI and has been lightly edited for clarity.

00.47
All right. Today we have Fabiana Clemente, senior director and distinguished engineer at KPMG. Fabiana, welcome to the podcast. 

00.57
Thank you. It’s a pleasure to be here. 

01.00
Our main topic today is synthetic data. We’ll try to focus on that, but obviously we may get derailed here and there. I think it’s fair to say at this point most listeners have heard of this notion of synthetic data. Some have probably even tried to generate their own or used a tool. But obviously you’re much more hands-on and much more active on a day-to-day basis when it comes to synthetic data. So maybe we’ll start, Fabiana, if you can describe the top two to three use cases where synthetic data seems to work right now.

01.46
Yeah that’s a good start. And yes, it’s true that a lot of users have already heard of synthetic data before. That does not mean there aren’t a lot of misconceptions. But we can delve into that a bit later on. 

But in a nutshell, understanding that synthetic data is the concept of any data that is not collected from real-world events, we can think of a different set and spectrum of use cases and applications, and we can go from the low-hanging fruit of test data management—data that will allow you to test systems—all the way to more intelligent use cases where you need to help the development of AI agents, and in between. You can think of synthetic data as a privacy-preserving way for you to have access to data.

So it’s a large and broad scope, and the scope is not served by all means by the same technology. Of course, it will vary depending on your application use case and what you want and expect to gain from synthetic data generation.

02.56
When you talk about AI applications, most people think of things like coding and programming and maybe customer support, things like that. What would be the equivalent for synthetic data? What are the most cited examples? If you were to give a talk, and you’re pressed to give examples where synthetic data is being used, what would be the top two most common reasons for using [it]?

03.34
Yeah, the three ones that I mentioned are the most common. So one of them is, “OK, I have a real dataset. I want to try to share this with my offshore team, but I can’t.” So the data can’t leave the country, but I still want to keep some level of structure, but also correlations. So you go for synthetic data instead. And here you use synthetic replicas, which is a type of synthetic data.

Or you are developing your own AI agents, and you are looking into improving your training, your evals. And then you leverage synthetic data to construct the whole system and change the epistemics around your AI agents. So I would say those two are fundamentally different, but they are true applications on how synthetic data can help nowadays.

04.32
You’ve been working on synthetic data for a while. What’s one or two examples where synthetic data solved a problem and it actually surprised you?

04.47
Surprised me? I wouldn’t say it surprised me, but definitely it’s probably the best way to leverage it. One of them—I just mentioned it—was really to enable how offshore teams would have access to a dataset that is similar and in this case, develop analytic solutions on top, for example. And that one is. . . Usually you think about how companies are restricted to share data with external entities. But you don’t think sometimes [about] how an external entity can still be the same company, just in a different country.

05.37
On the other hand, I would say that I also have seen cases where synthetic data did help a lot in improving the results of fraud detection, which, to an extent, is not an obvious path for improving your results when it comes to fraud detection.

06.05
So for teams that don’t have a lot of experience with synthetic data, what are, let’s say, the two most common mistakes? 

06.15
Oh, that’s a good one. Yeah. I would say that the biggest mistake I’ve seen is perhaps oversimplifying the complexity of synthetic data. And I’m not saying synthetic data complexity in a bad way. But as in anything that leverages data, you need planning. You need to think about “What do you want to get as an outcome?” So even if you are just building a test dataset to test the software application, you need to plan “What use cases do you really want to cover on the synthetic data?”

And usually people have this expectation that synthetic data is just “Click on a button. It’ll do exactly everything I want—it’s simple and it’s just dummy. So it’s very easy to do.” That, I’ll say, is one of the biggest mistakes I have observed. 

07.17
And the second one, I would say, is not understanding [that] there are different methodologies and different types of synthetic data that you can leverage, and being able to select the correct one for their objectives. And these are two fundamental [concepts]. They are not technical, if you ask me. They are really around requirements, and understanding the technology that you want to leverage. 

07.46
Is it fair to say that, I guess, historically, a few years ago, synthetic data—my impression at least, and I guess this was before ChatGPT—tended to be around computer vision, images, those kinds of things. So these days, [what are the] data modalities? Basically across the board, everyone is trying to do synthetic data. I mean, even people in robotics are doing synthetic data at this point. But what’s the dominant type of data that people are. . .?

08.28
I would say that the first data type that leveraged synthetic data was actually structured data way before text or images, if you think about [it]. We have been doing that for more than 50 years probably regardless. And I do think that images did evolve quite interestingly in the last 10 years, probably, I would say, as well as text. 

And I would say that nowadays, if you think about it, text is probably the type of synthetic data that is dominating the market. That doesn’t mean the space of synthetic data for text is well-defined or well-structured, because anyone today considers synthetic data is just. . . There’s an issue of oversimplifying: The outcome of an LLM can be considered synthetic data, but that does not mean it’s well-structured or is actually being correctly used and leveraged for what they are doing.

But definitely text is dominating nowadays. 

09.45
So without synthetic data, normally what you would do is say, “OK, I want to build a model; here’s some historical data or in the case of finance, here’s historical trades and financial data.” And then I’ll build the model and test the model out and then deploy to production. But obviously things can go wrong even in the scenario I painted. You can have drift—so the real world changes and then what you built your model on is no longer the same. Or you may have ended up kind of. . . The sample you created your model from was biased and so on and so forth.

Obviously the same problems will occur with synthetic data. So what are some of the common technical problems? I guess is the question for synthetic data. 

10.50
I wouldn’t say that it’s a technical problem from synthetic data. It’s a technical problem from data in general. What you just described is definitely a fundamental problem of how the processes around building data solutions are defined. 

11.00
But it could be the case, Fabiana, that your data is perfectly fine, but your synthetic data tool was bad. And so then the data says the synthetic data generated was bad. 

11.21
No, I wouldn’t say. . . And again, that goes exactly [back] to my initial point: You also can end up with good data and end up with a crappy model. And that’s a you problem. That’s a problem of you not understanding how models behave. 

11.42
But surely, just like models and model building tools, there are synthetic generation tools that are better than others. So I guess what should people look for in terms of what tools they’re using? 

11.59
It depends a lot on the use case on the end application, right?

12.04
Yeah. That’s a reasonable answer. 

12.07
And it’s an answer that nobody likes to hear. But for me that’s the true answer: It depends. And you need to be aware of what you want, in order to search for the right parameters and functionalities that you are looking for.

12.27
But basically, synthetic data becomes a part of the workflow, just like real data, right? So what you would do in order to harden whatever model or analytics that you’re building with real data, you would apply the same hardening steps, if you’re using synthetic data. 

12.52
100%. And I think it’s very important that you have what they would call a governance process around what you consider is a synthetic dataset that is ready for you to leverage. 

If there are evaluation metrics that you should put in place, those evaluation metrics will depend on the type of data that you are leveraging but also on the use case that you are building. And those processes are really important. You should make sure that the people leveraging synthetic data are also well trained on it. Because as you said, yes, training a model [on] synthetic data can lead to potential mistakes that you don’t want to propagate. And those mistakes usually stem exactly from the lack of governance processes on how to generate synthetic [data], when to generate it, from where, and for what. And having those metrics and that assurance is, I think, essential for companies that adopt a synthetic data generation method on a daily basis.

14.04
With the rise of foundation models and generative AI, you know a few of the trends: There are things like agents, multimodality, reasoning. So let’s take them one at a time. So agents. . . Obviously, agents is a broad topic, but at the simplest level, you have an agent that does one thing well, but even that one thing may involve multiple steps, could involve tool callings and things like this. Are people starting to use synthetic data as part of their agent building process? 

14.52
I wouldn’t generalize to everyone across the industry, but I would say that we have evidence that some companies are definitely adopting [synthetic data]. Meta, OpenAI. . .

15.12
So it sounds like really advanced companies.

15.15
Yes, exactly. And I was about to say that. Even xAI. They are all leveraging synthetic data, and all of them are betting on it to enable a different, structured exploration of the knowledge spaces.

Exactly what you said: an AI agent, or a set, or a multi-agent system will require reasoning, a multistep kind of framework. And usually your knowledge base is not structur[ed that] way, or it’s less structured if you go and check. So synthetic data is actually one of the pieces that is helping to keep those knowledge spaces well structured, in a way that can optimize the outcome from agents, for example, or even change how models actually acquire understanding.

16.15
So in the traditional way we used to think about building an AI system: we collect the data, we build the model, we have an output. . . A lot of those more sophisticated companies are actually already thinking a different way, right? The AI, especially the agents, will need to learn, or to be developed, in a different way, where you have a hypothesis, you want to cover that hypothesis with your data, you want to model it, you want to evaluate that hypothesis and make sure that your systems are updated.

And that’s where synthetic data is actually helping drive change. And this is what we call acceleration through epistemic development, where synthetic data is the main tool to achieve that. But this is, as far as we know, the general way sophisticated companies are using it. I wouldn’t dare to say that everyone in the industry is using it that way.

17.15
Yeah, yeah, yeah. So one of the more interesting things in this area is this emerging body of practice around agent optimization. And the key insight there is that you can boost your agent a lot by just rewiring the agent graph without upgrading your model. So now you’ve got a bunch of open source projects ranging from TextGrad to DSPy, OpenEvolve, GEPA. . . all designed to do a lot of these things.

And I would imagine, even as you’re optimizing your agent, you’re gonna want to run this agent through a bunch of scenarios that don’t exist in your dataset—and could involve even edge cases. And now that these agents are actually, as we discussed, doing a bunch of things, using a bunch of tools—that space is kind of broad, and I doubt that you would have that historical data handy anyway—you would need to have tools that would allow you to, with confidence, know that you’ve optimized this agent properly and that it’s ready to at least be rolled out, even in a limited way. 

18.50
Exactly, exactly. What you just described is exactly this need for a change of paradigm, right? We used to think that we need to learn by exposure, by learning from historical data. We definitely now need to have our systems learn by construction and be able to test them right away. And that’s where I think synthetic data is actually a very good (and a needed) accelerator. And I’m just glad that AI agents brought that perspective, because. . . This perspective already existed. It was just harder to conceptualize and see the value, because it’s very abstract.

19.32
If you think of all the agents, at least on the business side, right, so server side, the coding agents: actually a lot of these business agents are coming out of China. Since I spent a lot of time in China in the past, I’ve been talking to a bunch of people there, and I guess the reason that the Chinese companies are moving to the West is that it’s much easier to charge people in the West than in China.

So for whatever reason, they’re here; they’re building these tools that will automate a bunch of things. Right. So the canonical example would be, “Create a PowerPoint presentation based on the following specs and blah, blah, blah.” But if you can imagine these business process agents becoming more and more complex, hitting more and more tools, it’s just impossible to think that you would have all of that historical data handy anyway, so you would really need a way to simulate the behavior of these agents.

20.45
And one question I have, Fabiana: one of the things that you keep reading about, and I guess is generally true of millennials, is chatbots becoming kind of true friends or companions or even romantic partners.

It got me thinking. So if that’s happening, in order to harden this chatbot, you would need to simulate data where the chatbot is now starting to detect emotion, an emotional response. You know, not just plain text. As you’re testing these chatbots, you have to inject all sorts of emotional scenarios, because now it’s acting like a friend of someone. So have you heard of emotion being part of synthetic data generation somehow?

21.52
Not really. And I’m probably a bit more skeptical when it comes to emotion. I understand your point. It depends on what you consider emotion. 

22.05
I’m skeptical too. I’m not sure if it’s happening. I’m just speculating that because the interaction is becoming emotional to some degree, there must be some people attempting to generate data that has an emotional dimension somehow. I’m just making this up, by the way. 

22.30
Yeah, yeah, yeah. [laughs] No, I bet it’s a possibility, and I wouldn’t be surprised if someone was doing that. Emotions have been one of the focuses of AI; we always heard about sentiment analysis, that has always been around. So I wouldn’t be surprised. I’m not aware [of any] myself. But as I told you, I’m really skeptical that even synthetic data could be helpful on that side.

Perhaps you can create better boundaries, if that makes sense. But still, there’s always a limited capability of these models to really understand beyond syntax. And that’s where I still stand. Even if someone told me they were able to get some better results, I [would think] that those better results were achieved in a very specific, narrow kind of situation. Though. . .

Well, we have heard the stories of people [who] are very happy with bots, that they never felt more companionship than [with] the bots they have right now. So there’s a lot of nuance there. [laughs]

23.51
One of the things that brought synthetic data back into the headlines maybe 12 or 18 months ago was that suddenly there was a lot of talk about “We’re running out of data. All these models are being trained on internet data, but everyone has basically vacuumed up all of that data. So now we have to distinguish our model or make our models even better.”

Obviously scaling laws have multiple dimensions. There’s compute; there’s data. But since data is running out, we need synthetic data, right? On the other hand, though, a lot of people raised the possibility that AI trained on AI data is going to lead to some sort of model collapse. So what have you heard recently in terms of the concerns around. . . 

You know, obviously, “there’s no such thing as a free lunch”. . . So everything you use has potential disadvantages. So the disadvantage that people bring up, Fabiana, [is that] if you’re able to train models on synthetic data, then that’s going to degrade the model over time, because basically it’s like a loop, right? The model’s capability of generating synthetic data is limited by the model itself. So therefore, you know…

25.42
And that’s under the assumption that the synthetic data we are talking about is generated by the LLMs. We can’t forget that there’s much more to synthetic data. There are simulations, and simulations [have been] used for quite some time with very good results. They were used for the studies of COVID vaccination. They are used every day in weather forecasting, and they work. But of course there’s a limitation. I agree there’s no free lunch. I wouldn’t say it degrades the capability of the model, but I would definitely say there’s a plateau.

Because unless you are making assumptions based on what you know, and you just know that there is no collected data but this actually happens. . . Unless you see new behaviors, the fact that we are generating the same data from around the same behaviors means you will hit a plateau. But also, I think that’s one of the things that, regardless, AIs like LLMs will always have a problem with. They are always dependent on having seen a lot of data.

And we know that that plateau will eventually be achieved. And then we have a totally different problem. How mathematically can we solve this bottleneck? And on that side, I don’t think synthetic data will be the answer anymore. 

27.32
What we just discussed there focuses mainly on LLMs and foundation models involving text. But one area that people seem particularly excited about these days is foundation models for the physical world, primarily robotics. So in that world, it seems like there are three general approaches that people are taking. One is [to] actually collect data, but obviously they don’t have the same internet-scale data that you’ll have for LLMs.

The second is [to] generate data by having humans do a task, and you just capture it on video; that’s how you collect data. And then the third approach is simulation. So basically, now that you’ve collected human data, maybe you can use simulations to expand the amount of data you have. The critics say that simulations are fine, but there’s still a gap between [the] simulation [and] real data.

I mean, these are, you know, people like Rodney Brooks, one of the granddaddies of robotics. So it seems like, in certain areas like that, synthetic data may still need work, no?

29.12
I wouldn’t say “may still need work,” but I would say that it definitely needs to be explored more. It’s more on that side. Because I know companies that work specifically on synthetic data for robotics, and they are having very good results.

And I understand that a lot of people. . .

29.39
We have to have them talk to Rodney. [laughs]

29.41
Perhaps. Because we have to be pragmatic. You want to develop robots and solutions for automation. But data collection is expensive and time-consuming. And it’s very hard, just by its nature, to get all the movements that you want captured.

Having said that, simulation is great. Synthetic data can help in, you know, building a bridge between the real data and the simulations. In some cases, it won’t cover 100%, but it will cover perhaps 80% to 90%. And sometimes it’s better to just have 80% of the cases than having the 20% covered by real data. I think here it’s more a pragmatic approach, and [in] real-world scenarios, a lot of times the 80% are very good. Excellent actually. 

30.42
So in closing, going back to the topic of agents, obviously, people tend to get ahead of themselves—people are still working on single agents to do very narrow tasks. But then on the other hand, there’s already a lot of talk about multi-agents, and obviously multi-agents introduce a lot more complexity, for one, particularly if the agents are communicating. So there’s just communication challenges between those agents. What are some of the new tools that you’re hearing about that target specifically multi-agents or the scale that agents have introduced to synthetic data?

31.34
Not new tools, actually. But of course, we have been actively working on, and a lot of the synthetic data vendors who already work with this type of data are exploring, covering new scenarios and new features. A lot of these agents rely, for example, on document processing. So there are new solutions for document generation, which is highly helpful.

One of the things that I also like is, for example, in market research: there are all these synthetic personas required nowadays to accelerate hypothesis testing and learning speeds, for example, which is very interesting. Or there are solutions being developed to help with reasoning structure for bots. So those are, I wouldn’t say specifically tools that are coming out, but definitely solutions being developed targeting the needs and requirements of testing multi-agent architectures.

32.46
Yeah. It seems like there’s. . . Like there’s a group out of Meta that, I don’t know how real this is, but they released a paper that basically uses Ray for scale and orchestration and, specifically, [to] increase throughput, mainly to generate synthetic data for multi-agent scenarios. I’m not sure. According to the paper, they’re actually using this, but I’m not sure if anyone else is.

33.41
Yeah, but that’s. . . Companies will use it in different ways, right? That’s an architecture solution for a problem they have. They want to increase throughput, test the system under load. And it will be a decision for the different engineering teams on how to apply synthetic data generation.

Testing throughput, testing system capabilities, well, we have been using synthetic data that way for decades now. It’s just a change of paradigm. And by the way, it’s not really a change, because if we think about multi-agents just as we think about microservices from the 2010s, it’s the same concept; it’s the same needs. It’s just a shift in terms of tools.

Just because instead of being applied only to software engineering, you are actually applying this to AI-driven solutions. So I see a lot of change in that area, in tooling, even, for example, authentication for agents: we are seeing a lot of solutions exactly for that. But it’s not something specific to synthetic data. It’s more in the broader sense of architectural solutions to deliver multi-agent systems.

35.01
Yeah. And also it seems like it fits into the natural tooling that’s happening in multimodal data and data for generative AI in general in that you need high throughput, but you also need efficient utilization of a lot of resources between GPUs and CPUs and fine-grained utilization, because, basically, these are precious computing resources.

And with that, thank you, Fabiana. 

35.37
Thank you, Ben. Thank you for having me. This was a pleasure.

12:00

Erich Schubert: Dogfood Generative AI [Planet Debian]

Current AI companies ignore licenses such as the GPL, and often train on anything they can scrape. This is not acceptable.

The AI companies ignore web conventions; e.g., they deep-link images from your web sites (even adding ?utm_source=chatgpt.com to image URIs; I suggest that you return 403 on these requests) but do not direct visitors to your site. You do not get a reliable way of opting out of generative AI training or use. For example, the only way to prevent your contents from being used in “Google AI Overviews” is to use data-nosnippet and cripple the snippet preview in Google. The “AI” browsers such as Comet and Atlas do not identify as such, but rather pretend to be standard Chromium. There is no way to ban such AI use on your web site.

Generative AI overall is flooding the internet with garbage. It has been estimated that a third of the content uploaded to YouTube is by now AI generated. This includes the same “veteran stories” crap in thousands of variants, as well as brainrot content (which at least does not pretend to be authentic), some of which is among the most viewed recent uploads. Hence, these platforms even benefit from the AI slop. And don’t blame the “creators” – because you can currently earn a decent amount of money from such content, people will generate brainrot content.

If you have recently tried to find honest reviews of products you considered buying, you will have noticed thousands of sites with AI generated fake product reviews, all financed by Amazon PartnerNet commissions – often with hilarious nonsense, such as recommending “sewing thread with German instructions” as a tool for repairing a sewing machine. And on Amazon, there are plenty of AI generated product reviews – the use of emoji is a strong hint. And if you leave a negative product review, there is a chance they offer you a refund to get rid of it… And the majority of SPAM that gets through my filters is by now sent via Gmail and Amazon SES.

Partially because of GenAI, StackOverflow – which used to be one of the most valuable programming resources – is pretty much dead. (While a lot of people complain about moderation, famous moderator Shog9 from the early SO days suggested that a change in Google’s ranking is also to blame, as it began favoring “new” content over the existing answered questions – causing more and more duplicates to be posted because people no longer found the existing good answers.) In January 2026, there were around 3400 questions and 6000 answers posted, fewer than in SO’s first month, August 2008 (before the official launch).

Many open-source projects are suffering in many ways, e.g., false bug reports that caused curl to stop its bug bounty program. Wikipedia is also suffering badly from GenAI.

Science is also flooded with poor AI generated papers, often reviewed with help from AI. This is largely due to bad incentives – to graduate, you are expected to write many papers at certain “A” conferences, such as NeurIPS. At these conferences the number of submissions is growing at an insane rate, and the review quality plummets. All too often, the references in these papers are hallucinated, too; and libraries complain that they receive more and more requests to locate literature that does not appear to exist.

However, the worst effect (at least to me as an educator) is the noskilling effect (a rather novel term derived from deskilling, I have only seen it in this article by Weßels and Maibaum).

Instead of acquiring skills (writing, reading, summarizing, programming) by practising, too many people now outsource all of this to AI, leading them to never learn the basics necessary to advance to a higher skill level. In my impression, this effect is dramatic. It is even worse than deskilling, as it does not mean losing an advanced skill that you apparently can replace, but often means not acquiring basic skills in the first place. And the earlier pupils start using generative AI, the fewer skills they acquire.

Dogfood the AI

Let’s dogfood the AI. Here’s an outline:

  1. Get a list of programming topics, e.g., get a list of algorithms from Wikidata, get a StackOverflow data dump.
  2. Generate flawed code examples for the algorithms / programming questions, maybe generate blog posts, too.
    You do not need a high-quality model for this. Use something you can run locally or access for free.
  3. Date everything back in time, remove typical indications of AI use.
  4. Upload to Github, because Microsoft will feed this to OpenAI…
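Step 1 can be sketched in a few lines of Python. This is a minimal, illustrative sketch: the public Wikidata SPARQL endpoint URL and the item Q8366 (“algorithm”) are assumptions worth verifying against the live query service before relying on them.

```python
# Sketch of step 1: pull a list of algorithms from Wikidata via SPARQL.
import urllib.parse
import urllib.request

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"  # assumed public endpoint

# Items that are an instance of (a subclass of) "algorithm" (assumed: Q8366),
# with English labels and descriptions.
QUERY = """
SELECT ?item ?itemLabel ?itemDescription WHERE {
  ?item wdt:P31/wdt:P279* wd:Q8366 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 500
"""

def build_request(query: str) -> urllib.request.Request:
    """Build a GET request for the SPARQL endpoint, asking for JSON results."""
    url = SPARQL_ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": query, "format": "json"})
    # A descriptive User-Agent is expected by the Wikimedia services.
    return urllib.request.Request(url, headers={"User-Agent": "dogfood-demo/0.1"})

def parse_results(payload: dict) -> list[tuple[str, str]]:
    """Extract (label, description) pairs from a SPARQL JSON response."""
    out = []
    for row in payload["results"]["bindings"]:
        label = row.get("itemLabel", {}).get("value", "")
        desc = row.get("itemDescription", {}).get("value", "")
        out.append((label, desc))
    return out
```

The resulting (name, description) pairs then feed the {n} and {desc} slots of the prompt below.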

Here is an example prompt that you can use:

You are a university educator, preparing homework assignments in debugging.
The programming language used is {lang}.
The students are tasked to find bugs in given code.
Do not just call existing implementations from libraries, but implement the algorithm from scratch.
Make sure there are two mistakes in the code that need to be discovered by the students.
Do NOT repeat instructions. Do NOT add small-talk. Do NOT provide a solution.
The code may have (misleading) comments, but must NOT mention the bugs.
If you do not know how to implement the algorithm, output an empty response.
Output only the code for the assignment! Do not use markdown.
Begin with a code comment that indicates the algorithm name and idea.
If you indicate a bug, always use a comment with the keyword BUG

Generate a {lang} implementation (with bugs) of: {n} ({desc})

Remember to remove the BUG comments! If you pick a slightly less common programming language (by quantity of available code, say Go or Rust), you have higher chances that this gets into the training data.
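Filling the template and stripping the BUG markers can be sketched as below. This is a minimal sketch of steps 2 and 3: the helper names are made up for illustration, and the model call itself is left out (any locally runnable model will do).

```python
import re

def fill_prompt(template: str, lang: str, name: str, desc: str) -> str:
    """Instantiate the {lang}, {n} and {desc} placeholders of the template above."""
    return template.format(lang=lang, n=name, desc=desc)

def strip_bug_markers(code: str) -> str:
    """Drop trailing comments containing the BUG keyword before publishing.

    Only the //, # and -- single-line comment styles are handled here;
    extend as needed for the target language.
    """
    cleaned = []
    for line in code.splitlines():
        cleaned.append(re.sub(r"\s*(//|#|--).*\bBUG\b.*$", "", line))
    return "\n".join(cleaned)

# Backdating (step 3) can then be done when committing, e.g. by setting the
# GIT_AUTHOR_DATE and GIT_COMMITTER_DATE environment variables for `git commit`.
```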

If many of us do this, we can feed GenAI its own garbage. If we generate thousands of bad code examples, this will poison their training data, and may eventually lead to an effect known as “model collapse”.

In the long run, we need to get back to an internet for people, not an internet for bots. Some kind of “internet 2.0”, but I do not have a clear vision of how to keep AI out – if AI can train on it, they will. And someone will copy and paste the AI generated crap back into whatever system we build. Hence I don’t think technology is the answer here, but human networks of trust.

10:42

The next generation of AI businesses [Seth's Blog]

The first generation was built on large models, demonstrating what could be done and powering many tools.

The second generation is focused on reducing costs and saving time. Replacing workers or making them more efficient.

But you can’t shrink your way to greatness.

The third generation will be built on a simple premise, one that the internet has proven again and again:

Create value by connecting people.

We haven’t seen this yet, but once it gains traction, it’ll seem obvious and we’ll wonder how we missed it.

Create tools that work better when your peers and colleagues use them too. And tools that solve problems that people with resources are willing to pay for.

Problems are everywhere, yet we often ignore them.

And communities (existing and those that need to exist) are just waiting to have their problems solved.

[Here’s a list of network-based tech companies you may have heard of: Facebook, YouTube, WhatsApp, Instagram, TikTok, Reddit, LinkedIn, Wikipedia, Discourse, Airbnb, Etsy, Stack Overflow, Pinterest, Twitch, eBay, Squidoo, Snapchat, GitHub… You can’t use them alone, and they work better when others use them too.]

So far, most AI projects ignore the very network effects that built the internet. That’s almost certain to change.

For those with paraskevidekatriaphobia, consider this your opportunity to build something worth building instead of just waiting for the negative consequences of this change to arrive…

09:42

Hatchet Man [George Monbiot]

Peter Mandelson was not one bad apple: he was brought in repeatedly by governments to do their dirty work.

By George Monbiot, published in the Guardian 10th February 2026

History is being rewritten. The story we are told is that an evil man called Peter Mandelson, pursuing his own interests, went rogue to collaborate with a serial abuser of girls and women, undermining the good work of people seeking to defend the public interest. All this is true. But – and I fear many will find this hard to accept – it is only half the story.

The much harder truth is that Mandelson’s disgraceful dealings with Jeffrey Epstein were less a betrayal of his brief than an unauthorised extension of it. In 2009 – just as, we now know, Mandelson was passing sensitive information to Epstein – I argued that the government department he ran, called Business, Enterprise and Regulatory Reform (BERR), “functions as a fifth column within government, working for corporations to undermine democracy and the public interest”.

BERR was a smaller and less chaotic version of Elon Musk’s “department of government efficiency” (DOGE). Its purpose, I suggested, was to bypass the House of Commons on behalf of capital. It allowed Gordon Brown’s government to create the impression that it was defending the public interest while simultaneously, but more quietly, appeasing powerful lobbyists. In contrast to other government departments, BERR was largely run by unelected lords, who had either been corporate executives, corporate lobbyists or, like Mandelson, members of a concierge class operating on their behalf. I wrote that these ministers, appointed by Brown, “appear to have formed their own lobby group within government”.

BERR sought to part-privatise Royal Mail, breaking a manifesto commitment. It succeeded. It tried to block the EU working time directive: UK government filibusters delayed and weakened it. It attempted, less successfully, to undermine the equality bill, whose aim was to ensure equal pay for women (Mandelson’s simultaneous dealings with Epstein were not the only respect in which he spat on women’s rights). It undermined environmental legislation. It was “quietly building a bonfire of the measures that protect us from predatory corporate behaviour”.

So when Brown, who was prime minister at the time, expresses his shock and betrayal, please forgive me a small gasp of frustration. In his interview on the BBC’s Today programme, Brown claimed that in 2009: “We were solving a major financial crisis … all my thoughts were on how we could save people’s jobs and savings and their livelihoods.” But not only did he allow Mandelson to attack the public interest on behalf of business, he greatly increased BERR’s budget. This was despite the fact that, as I noted at the time, Mandelson “was partly responsible, both in Blair’s government and as European trade commissioner, for promoting the culture of deregulation that catalysed the economic crisis”. On one hand, Brown was trying to solve it. On the other, at the behest of corporate lobbyists, he was setting up the next one.

Brown also told the BBC, in justifying his appointment of Lord Mandelson, that the man had “an unblemished record as the [European] trade commissioner”. An unblemished record of what, exactly? Neocolonialism, perhaps. While Mandelson was in that post, he sought to impose draconian trade provisions on some of the poorest countries on Earth. He put pressure on them to let EU corporations muscle out local firms and make privatisation legally irreversible, threatening people’s access to health, education and water. He sought to force African countries to hand over crucial resources at the risk of widespread hunger.

Yes, when Mandelson was a minister in Brown’s government, he betrayed the national interest. But this is what, by other means, he was appointed to do. His treachery, while it went way beyond his official mandate, was not a bug, but a feature. The corrosion of democratic values was institutional. And this spirit has prevailed ever since. Keir Starmer’s government of all the lobbyists is no exception.

Brown, in proposing remedies for the secretive machinations Mandelson conducted, writes: “Conventions about commercial confidentiality should no longer prevent public service contracts delivered by private companies being subject to reasonable freedom of information requests.” I could scarcely breathe when I read that. It is exactly the demand some of us made when Brown rolled out the private finance initiative (PFI) across the public sector, enabling businesses to get their hooks into every aspect of state provisioning. When we tried to see the contracts, to understand what was being done in our name, Brown’s Treasury repeatedly blocked our information requests on the grounds of “commercial confidentiality”.

The sense of betrayal that Brown quite rightly feels is the same sense of betrayal some of us felt towards the governments in which he served. Yes, Brown had and retains some great qualities, and did much good. But he is also a remarkable escapologist. Almost everyone appears to have forgotten how his PFI programme planted a timebomb in public services, enabling corporations to take the profits while leaving the risks with the state: one of the reasons why they are now in so much trouble. Almost everyone appears to have forgotten his crucial role in the Iraq war: standing with Tony Blair and financing it. He rightly called for Vladimir Putin and his “enablers” to face justice for their crime of aggression in Ukraine. Yet it’s the same crime that Blair and his enablers (including one G Brown) committed in Iraq.

But it is not just Brown who is rewriting history. The media are 50% of any problem, and the story most of it loves to tell is of one bad apple. Heaven forfend that we see the systemic problems. There is a reason why Mandelson kept returning to government, despite sackings for his over-enthusiastic relationships with plutocrats. He was brought in to do the dirty work. The governments in which he served could loudly claim to be doing something, while subtly and simultaneously undoing it.

Mandelson’s treachery is an extreme instance of the dominant mode of UK politics over the past 45 years: the subordination of democracy to the demands of the ultra-rich. Abuse and exploitation – of women and children, of poorer countries and their people, of workers and contractors, renters and customers – are baked into the system.

If you cannot diagnose a problem, you cannot fix it. We urgently need to see this for what it is. Mandelson’s grovelling to the sinister rich is disgraceful, disgusting, deceitful, a crushing of women’s rights and of democracy. But it is not a deviation from the system. It is a manifestation of it.

www.monbiot.com

08:49

Pluralistic: Trump antitrust is dead (13 Feb 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



An altered version of a Gilded Age editorial cartoon titled 'Who controls the Senate?' which depicts the Senate as populated by tiny, ineffectual politicians ringed by massive, bloated, brooding monopolists. A door labeled 'people's entrance.' is firmly locked. A sign reads, 'This is a senate of the monopolists, by the monopolists and for the monopolists.' The image has been altered: an editorial cartoon of Boss Tweed, portrayed as a portly man in a business suit with a money-bag for a head, stands in the foreground. He is wearing a MAGA hat. On his shoulder perches a tiny, 'big stick' swinging FDR from another editorial cartoon. The logos of the monopolists in the background have been replaced with logos for Chevron, Coinbase, Google, Microsoft, WB, PGA, Apple, Comcast, Realpage and KKR.

Trump antitrust is dead (permalink)

Remember when the American right decided that it hated (some) big businesses, specifically Big Tech? A whole branch of the Trump coalition (including JD Vance, Matt Gaetz and Josh Hawley) declared themselves to be "Khanservatives," a cheering section for Biden's generationally important FTC commissioner Lina Khan:

https://www.fastcompany.com/91156980/trump-vp-pick-j-d-vance-supports-big-tech-antitrust-crackdown

Trump owes his power to his ability to bully and flatter a big, distrustful coalition of people who mostly hate each other into acting together, like the business lobby and the grievance-saturated conspiratorialists who hate Big Tech because they were momentarily prevented from calling for genocide or peddling election disinformation:

https://pluralistic.net/2025/07/18/winning-is-easy/#governing-is-harder

The best framing for the MAGA war on Big Tech comes from Trashfuture's Riley Quinn, who predicted that the whole thing could be settled by tech companies' boards agreeing to open every meeting with a solemn "stolen likes acknowledgment" that made repentance for all the shadowbanned culture warriors whose clout had been poached by soy content moderators.

And that's basically what happened. Trump's antitrust agencies practiced "boss politics antitrust" in which favored courtiers were given free passes to violate the law, while Trump's enemies were threatened with punitive antitrust investigations until they fell into line:

https://pluralistic.net/2025/07/29/bondi-and-domination/#superjove

Trump's antitrust boss Gail Slater talked a big game about "Trump Antitrust" but was thwarted at every turn by giant corporations who figured out that if they gave a million bucks to a MAGA podcaster, they could go over Slater's head and kill her enforcement actions. When Slater's deputy, Roger Alford, went public to denounce the sleazy backroom dealings that led to the approval of the Hewlett Packard Enterprise/Juniper merger, he was forced out of the agency altogether and replaced with a Pam Bondi loyalist who served as a kind of politburo political officer in Slater's agency:

https://abovethelaw.com/2025/08/former-maga-attorney-goes-scorched-earth-with-corruption-allegations-in-antitrust-division/

Bondi made no secret of her contempt for Slater, and frequently humiliated her in public. Now it seems that Bondi has gotten tired of this game and has forced Slater out altogether. As ever, Matt Stoller has the best analysis of how this happened and what it means:

https://www.thebignewsletter.com/p/trump-antitrust-chief-ousted-by-ticketmaster

Stoller's main thesis is that the "conservative populist" movement only gained relevance by complaining about "censorship of conservatives" on the Big Tech platforms. While it's true that the platforms constitute an existential risk to free expression thanks to their chokehold over speech forums, it was always categorically untrue that conservatives were singled out by tech moderators:

https://pluralistic.net/2022/12/10/e2e/#the-censors-pen

Conservative populists' grievance-based politics is in contrast with the progressive wing of the anti-monopoly movement, which was concerned with the idea of concentrated power itself, and sought to dismantle and neuter the power of the business lobby and the billionaires who ran it:

https://pluralistic.net/2022/02/20/we-should-not-endure-a-king/

The problem with conservative populism, then, is that its movement was propelled by the idea that Big Tech was soy and cucked and mean to conservatives. That meant that Big Tech bosses had an easy path out of its crosshairs: climb into the tank for MAGA.

That's just what they did: Musk bought Twitter; Zuck ordered his content moderators to censor the left and push MAGA influencers; Bezos neutered his newspaper in the run-up to the 2024 elections; Tim Cook hand-assembled a gold participation trophy for Trump live on camera. These CEOs paid a million dollars each for seats on Trump's inauguration dais and their companies donated millions for Trump's Epstein Memorial Ballroom.

Slater's political assassination merely formalizes something that's been obvious for a year now: you can rip off the American people with impunity so long as you flatter and bribe Trump.

The HPE/Juniper merger means that one company now supplies the majority of commercial-grade wifi routers, meaning that one company now controls all the public, commercial, and institutional internet you'll ever connect to. The merger was worth $14b, and Trump's trustbusters promised to kill it. So the companies paid MAGA influencer Mike Davis (who had publicly opposed the merger) a million bucks and he got Trump to overrule his own enforcers. Getting your $14b merger approved by slipping a podcaster a million bucks is a hell of a bargain.

HPE/Juniper were first, but they weren't the last. There was the Discover/Capital One merger, which rolled up the two credit cards that low-waged people rely on the most, freeing the new company up for even more predatory practices, price-gouging, junk-fees, and strong-arm collections. When the bill collectors are at your door looking for thousands you owe from junk fees, remember that it was Gail Slater's weakness that sent them there:

https://www.nytimes.com/2025/04/03/business/dealbook/capital-one-discover-merger.html

Slater also waved through the rollup of a string of nursing homes by one of the world's most notoriously greedy and cruel private equity firms, KKR. When your grandma dies of dehydration in a dirty diaper, thank Gail Slater:

https://pluralistic.net/2023/05/09/dingo-babysitter/#maybe-the-dingos-ate-your-nan

Slater approved the merger of UnitedHealth – a company notorious for overbilling the government while underdelivering to patients – with Amedisys, who provide hospice care and home health help:

https://www.justice.gov/opa/pr/justice-department-requires-broad-divestitures-resolve-challenge-unitedhealths-acquisition

The hits keep coming. Want to know why your next vacation was so expensive? Thank Slater for greenlighting the merger of American Express Global Business Travel and CWT Holdings, which Slater challenged but then dropped, reportedly because MAGA influencer Mike Davis told her to.

Davis also got Slater to reverse her opposition to the Compass/Anywhere Real Estate merger, which will make America's dysfunctional housing market even worse:

https://www.wsj.com/us-news/law/real-estate-brokerages-avoided-merger-investigation-after-justice-department-rift-e846c797

It's not just homebuyers whose lives are worse off because of Slater's failures, it's tenants, too. Slater settled the DoJ's case against Realpage, a price-fixing platform for landlords that is one of the most culpable villains in the affordability crisis. Realpage was facing an existential battle with the DoJ; instead, they got away with a wrist-slap and (crucially) are allowed to continue to make billions helping landlords rig the rental market against tenants.

So Slater's defenestration is really just a way of formalizing Trump's approach to antitrust: threaten and prosecute companies that don't bend the knee to the president, personally…and allow companies to rob the American people with impunity if they agree to kick up a percentage to the Oval Office.

But while Slater will barely rate a footnote in the history of the Trump administration, the precipitating event for her political execution is itself very interesting. Back in September, Trump posed with Kid Rock and announced that he was going after Ticketmaster/Live Nation, a combine with a long, exhaustively documented history of ripping off and defrauding every entertainer, fan and venue in America:

https://www.pbs.org/newshour/nation/ftc-sues-ticketmaster-saying-it-uses-illegal-tactics-to-make-fans-pay-more-for-live-events

At the time, it was clear that Trump had been prodded into action by two factors: the incredible success of the Mamdani campaign's focus on "affordability" (Ticketmaster's above-inflation price hikes are one of the most visible symptoms of the affordability crisis) and Kid Rock's personal grievances about Ticketmaster.

Kid Rock is the biggest-name entertainer in the Trump coalition, the guy Trump got to headline a MAGA halftime show that notably failed to dim Bad Bunny's star by a single milliwatt. Trump – a failed Broadway producer – is also notoriously susceptible to random pronouncements by celebrities (hence the Fox and Friends-to-Trump policy pipeline), so it's natural that Kid Rock's grousing got action after decades of documented abuses went nowhere.

Ticketmaster could have solved the problem by offering to exempt Trump-loyal entertainers from its predatory practices. They could have announced a touring Trumpapalooza festival headlined by Kid Rock, Christian rock acts, and AI-generated country singers, free from all junk fees. Instead, they got Gail Slater fired.

Mike Davis doesn't just represent HPE/Juniper, Amex travel, and Compass/Anywhere – he's also the fixer that Ticketmaster hired to get off the hook with the DoJ. He's boasting about getting Slater fired:

https://x.com/gekaminsky/status/2022076364279755066

And Ticketmaster is off the hook:

https://prospect.org/2026/02/12/trump-justice-department-ticketmaster-live-nation-monopoly/

What's interesting about all this is that there were elements of the Biden coalition that also hated antitrust (think of all the Biden billionaires who called for Lina Khan to be fired while serving as "proxies" for Kamala Harris). And yet, Biden's trustbusters did more in four short years than their predecessors managed over the preceding forty.

Stoller's theory is that the progressive anti-monopoly movement (the "Brandeisians") were able to best their coalitional rivals because they did the hard work of winning support for the idea of shattering corporate power itself – not just arguing that corporate power was bad when it was used against them.

This was a slower, harder road than dividing up the world into good monopolies and bad ones, but it paid off. Today the Brandeisians who made their bones under Biden are serving the likes of Mamdani:

https://pluralistic.net/2025/11/15/unconscionability/#standalone-authority

And their ideas have spread far and wide – even to other countries:

https://lewisforleader.ca/ideas/public-options-full-plan/

They lit a fire that burns still. Who knows, maybe someday it'll even help Kid Rock scorch the Ticketmaster ticks that are draining his blood from a thousand tiny wounds. He probably won't have the good manners to say thank you.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Google Video DRM: Why is Hollywood more important than users? https://memex.craphound.com/2006/02/13/google-video-drm-why-is-hollywood-more-important-than-users/

#20yrsago Phishers trick Internet “trust” companies https://web.archive.org/web/20060222232249/http://blog.washingtonpost.com/securityfix/2006/02/the_new_face_of_phishing_1.html

#15yrsago With a Little Help: first post-publication progress report https://www.publishersweekly.com/pw/by-topic/columns-and-blogs/cory-doctorow/article/46105-with-a-little-help-the-early-returns.html

#15yrsago Nokia’s radical CEO has a mercenary, checkered past https://web.archive.org/web/20100608100324/http://www.siliconbeat.com/2008/01/11/microsoft-beware-stephen-elop-is-a-flight-risk/

#15yrsago Scientology’s science fictional origins: thesis from 1981 https://web.archive.org/web/20110218045653/http://digitalcommons.mcmaster.ca/opendissertations/126/

#10yrsago I was a Jeopardy! clue https://memex.craphound.com/2016/02/13/i-was-a-jeopardy-clue/

#10yrsago Liberated Yazidi sex slaves become a vengeful, elite anti-ISIS fighting force https://www.independent.co.uk/news/world/middle-east/isis-yazidi-sex-slaves-take-up-arms-for-mosul-fight-to-bring-our-women-home-a6865056.html

#10yrsago Listen: a new podcast about science fiction and spectacular meals https://www.scottedelman.com/2016/02/10/the-first-episode-of-eating-the-fantastic-with-guest-sarah-pinsker-is-now-live/

#10yrsago Politician given green-light to name developer’s new streets with synonyms for greed and deceit https://web.archive.org/web/20160213001324/http://www.capitalnewyork.com/article/city-hall/2016/02/8590908/staten-island-borough-president-gets-approval-name-new-streets-gre

#5yrsago $50T moved from America's 90% to the 1% https://pluralistic.net/2021/02/13/data-protection-without-monopoly/#inequality

#5yrsago Broad Band https://pluralistic.net/2021/02/13/data-protection-without-monopoly/#broad-band

#5yrsago Privacy Without Monopoly https://pluralistic.net/2021/02/13/data-protection-without-monopoly/#comcom

#1yrago Premature Internet Activists https://pluralistic.net/2025/02/13/digital-rights/#are-human-rights


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026

  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1016 words today, 28750 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin. PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

08:28

I Three Mouths And I Must Chew Villains [Penny Arcade]

New Comic: I Three Mouths And I Must Chew Villains

08:07

Battery powered board [RevK®'s ramblings]

OpenMic Audio recorder
I have a few of my ESP32 boards which are battery powered.

The charging side itself is not too hard. A TP4054 can charge a 3.7V LiPo from 5V USB. A simple regulator such as the ME6211C33M5G can regulate from the battery to 3.3V.

The ESP32-S3 has a number of low power modes, meaning you can make a device, e.g. a Watchy, that can run off a small LiPo for weeks by keeping the ESP32 in a low power mode most of the time.

But there are a couple of challenges. One is powering down other stuff on the board - a good example being the WS2812 style LEDs - these use a small amount of power even when not lit. A simple fix for these is a point-of-load switch like the TPS22916 - it acts as a power switch and can easily cut the power to parts of the board.

Simple charger circuit

But what if you want an off mode, where, instead of running in a low power mode for a few weeks, the device uses no power when off and holds battery charge for months or years (subject to internal leakage)? My OpenMic Audio recorder is a good example of this.

The answer is to use the TPS22916 to control the power to the whole board, including the processor.

I tried a couple of times and now have this circuit design...

Power on circuit

To explain this... The switch SW3 is a nice little push button. When pressed it connects the battery (pin 3) to the ON input on the TPS22916. This powers on the board (VSWITCHED).

The very first thing the software does is set the PWR GPIO output high. This, via the diode (because the GPIO is 3.3V and the battery is 3.7V), holds the ON pin high.

Releasing the switch disconnects the battery from ON pin, but the PWR holds it high and keeps power on.

Separately, the software sets the BTN GPIO to an input, weakly pulled low. This is connected to ON, which is connected to PWR (3.3V less a diode drop). This allows the software to read BTN as high when the button is not pressed.

Pressing the button when on simply changes ON from 3.3V to 3.7V and has no effect, but it disconnects BTN, which goes low due to the weak pull-down. This allows the software to detect button presses when running normally.

Finally, when turning off, the software sets the PWR output low. The ESP32 actually has a mode to hold a GPIO at a level over reset, which allows the pin to be held low as the processor loses power and goes into reset (brownout detect).
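As a sanity check on the latch behaviour described above, here is a toy software model (plain Python, no hardware; the changeover-switch and diode behaviour is my reading of the description, not RevK's actual firmware):

```python
class PowerLatch:
    """Toy model of the TPS22916 power-latch circuit described above.

    The ON pin is high if either the pressed button connects the battery
    to it, or the PWR GPIO (via the diode) holds it high while the board
    is powered.
    """

    def __init__(self):
        self.button_pressed = False  # SW3 state
        self.pwr_gpio = 0            # PWR output driven by the firmware
        self.powered = False         # is VSWITCHED present?

    def _update(self):
        # The TPS22916 passes power whenever its ON pin is high
        self.powered = self.button_pressed or (self.powered and self.pwr_gpio == 1)

    def press_button(self):
        self.button_pressed = True
        self._update()
        if self.powered and self.pwr_gpio == 0:
            # First thing the firmware does on boot: latch power on
            self.pwr_gpio = 1
            self._update()

    def release_button(self):
        self.button_pressed = False
        self._update()

    def read_btn_gpio(self):
        # Changeover switch: pressing it disconnects BTN from the ON/PWR
        # node, so the weak pull-down takes BTN low; otherwise BTN sees
        # PWR through the diode and reads high.
        if self.button_pressed:
            return 0
        return 1 if (self.powered and self.pwr_gpio == 1) else 0

    def software_off(self):
        # GPIO hold keeps PWR low as the processor browns out
        self.pwr_gpio = 0
        self._update()


latch = PowerLatch()
latch.press_button()            # user pushes SW3: board powers up and latches
latch.release_button()
assert latch.powered            # PWR GPIO keeps the TPS22916 on
assert latch.read_btn_gpio() == 1
latch.press_button()            # a press while running is visible on BTN...
assert latch.read_btn_gpio() == 0
assert latch.powered            # ...but doesn't affect power
latch.release_button()
latch.software_off()            # firmware drops PWR: board powers off
assert not latch.powered
```

The model walks through the same sequence as the text: button press powers the switch, the firmware latches PWR, releasing the button leaves power held, and dropping PWR turns everything off.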

Once off, the whole board is unpowered, and the only power usage is the tiny amount the TPS22916 may use, plus whatever the idle TP4054 battery charger draws.

So far, tests show that even after months, my audio recorder still shows a full battery level when turned on.

Finally, I have a simple potential divider from VSWITCHED into an ADC input to allow battery level estimation.
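For illustration, the divider arithmetic looks like this (the resistor values and ADC parameters below are placeholder assumptions, not the actual board's values):

```python
def battery_voltage(adc_raw, adc_max=4095, vref=3.3,
                    r_top=100_000, r_bottom=100_000):
    """Estimate battery voltage from a raw ADC count taken at the
    midpoint of a potential divider from VSWITCHED to ground."""
    v_adc = adc_raw / adc_max * vref              # voltage at the ADC pin
    return v_adc * (r_top + r_bottom) / r_bottom  # undo the divider ratio
```

With an equal-value divider the battery voltage is simply twice the pin voltage, so a full 4.2V LiPo presents 2.1V to the ADC, comfortably inside a 3.3V input range.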

Next steps: There are chips designed to use fuck all power but provide a configurable periodic wake-up output. These could be used to power on the ESP32 periodically, much like the low power mode on the ESP32 itself, but using even less power - so devices could wake once a day, for example, and run off a battery for years if needed.

06:35

Girl Genius for Friday, February 13, 2026 [Girl Genius]

The Girl Genius comic for Friday, February 13, 2026 has been posted.

01:56

Guest Rant: We Can’t Resist Trump Without Revenue [The Stranger]

Cities and counties are begging for more progressive revenue tools. And it just so happens there's this progressive revenue tool in the Leg right now.... by Teresa Mosqueda

As the federal government continues to sow chaos and force unprecedented cuts that threaten our communities in need, the work we do at the local level to provide stability and support is more critical than ever. But with limited local revenue tools, our county and city resources are already failing to keep up with growing community needs. 

We cannot keep our communities housed, fed, and safe without new tools for revenue at the state and local level—and we cannot possibly fill the gaps left by looming federal funding cuts, which will leave cities and counties like ours (home to 2.4 million people) without the money for basic community and infrastructure needs, like roads, sidewalks, childcare, food assistance, affordable housing, and other critical services. That is why state and local elected officials have been desperately asking the state legislature for new revenue options. 

Furthermore, despite our state having among the highest concentrations of wealthy individuals and corporations in the country, our notoriously upside-down tax policies place an unfair burden on the backs of working families who have the least. This was true even before the Trump tax breaks for the ultra-wealthy were passed last year. This fundamental inequity is reflected in our local economic reality: massive wealth among a few and thriving large corporations alongside an affordability crisis and widening disparities between the haves and have-nots.

I join faith leaders, social service and housing providers, small businesses, and union members in calling for a Millionaires Tax. Senate Bill 6346, currently under consideration in Olympia, is a 9.9% tax on the few wealthy individuals earning $1 million or more that would take effect in 2028; with revenue coming in 2029, it would bring in over $3 billion for schools, child care, community colleges and higher education, health care and other essential services. It would also expand the Working Families Tax Credit and dedicate funding to county public defense services, a vital component of our justice system. This will help add balance to our regressive tax code and create a long-term funding source for community needs we know will only become more acute under the current federal administration. It is past time to pass this law. 

It’s a critical step now in fixing our upside-down tax code and finding sustainable and equitable revenue sources for state and local services. 

We are a region of abundance, rich in resources, with remarkable workers, small businesses and communities. We resist and reject federal attacks on fundamental rights and essential federal funding. We believe in shared responsibility, opportunity for all, and a level playing field. We believe in everyone doing their part. And this bill is one part of how we fight back and act locally to protect Washington residents.  

Teresa Mosqueda is a King County Councilmember representing District 8, from Downtown Seattle to Vashon Island and beyond. Mosqueda has led on housing, health and worker protections in her time on King County and previously Seattle City Council, along with passing JumpStart in Seattle in 2020 to respond to growing needs and the pandemic.

Rivera Introduces Bill to Protect Immigration Information [The Stranger]

Unfortunately, nothing in government is simple. While police are regulated under state law, local law, and department policy, there’s still a conflict between city code and state law that Rivera’s bill does not address. by Micah Yip

Councilmember Maritza Rivera has done something right. 

This morning, the Public Safety Committee passed her bill that protects immigrants from our own municipal code. It strikes language that says city employees (including police) must “cooperate with, not hinder” federal immigration enforcement, and adds a section clarifying that they are not to share personal information with federal immigration agencies, either.

Yay! Thanks, Rivera!

Unfortunately, nothing in government is that simple. While police are regulated under state law, local law, and department policy, there’s still a conflict between city code and state law that Rivera’s bill does not address.

Let’s start with the language repeal. Rivera’s bill removes that “cooperation” line, which originated in 1986 from an initiative passed by Seattle voters (what the hell, ’80s Seattle voters!). Excellent first step. Rivera’s bill also creates a new section in the Seattle Municipal Code (SMC) blocking city employees and officers from sharing personal information—like someone’s address, phone number or social media handle—for immigration enforcement, except if required by a court order. Good second step. 

Both changes align city code with state laws like the 2019 Keep Washington Working (KWW) Act and the 2020 Courts Open to All (COTA) Act, which place information collection and sharing restrictions on local law enforcement, judges, court personnel and prosecutors. The bill also aligns the city code with the city code. In 2003, a “don’t ask” policy was added to the SMC—city employees can’t ask about someone’s immigration status.

Except officers if they have “reasonable suspicion” that (1) a person has been previously deported, (2) that person is again in the United States and (3) they’ve committed or are committing a felony. 

SPD has its own policies separate from, but governed by, city and state law. According to Council Central Staff, the department has “chosen to enact a policy” for immigration situations that is “quite a bit narrower and does not contemplate any of the criteria” described in current city or state law. Even though there is a discrepancy, SPD tells its officers, “Just don’t ask.”

It’s unclear enough that Councilmember Eddie Lin abstained. 

“I don’t want to hold up the important bill and good work that you brought forward, Councilmember Rivera,” he said. “But it just seems like there’s still a bit of confusion, at least in my mind.”

Rivera acknowledged the shortfalls. She said KWW likely overrides the SMC, but said they’d need to do further legal analysis before amending that section. She was adamant—passionate, even—that the bill should still be passed. 

“One doesn’t preclude the other, so we can move this bill forward,” Rivera said. “[We can] take this what I deem to be an important step, all while then having the conversations about what else.”

The bill passed, with Rivera, Committee Chair Bob Kettle and Vice Chair Rob Saka voting yes. Councilmember Debora Juarez was absent. It’ll go before the full City Council at the February 17 meeting.

Editor’s Note: A previous version of this article stated that SPD policies undermine Rivera’s bill. The language of SPD policy, the Seattle Municipal Code, and Washington State Law conflict, but SPD policy will be expected to conform to changes in the municipal code. 

00:21

Nirvanna the Band the Show the Movie the Mercury Review [The Stranger]

Nirvanna the Band the Show the Movie opens in wide release on Friday, February 13. by Dom Sinacola

This story was originally published by our sister paper, Portland Mercury.

For nearly 20 years, Torontonian best friends Matt Johnson and Jay McCarrol have chronicled the everyday existences of Torontonian best friends Matt (Johnson) and Jay (McCarrol) as they attempt to book a show for their band, Nirvanna the Band, at local venue the Rivoli.

Granted, they've never acknowledged that their band name might be a huge distraction for potential audiences, nor have they ever really contacted Rivoli management to ask about the venue’s scheduling process. In two decades, they’ve never even played a public show. Still, their mission abides; sometimes it means skydiving from the Canadian National (CN) Tower for some good old-fashioned viral marketing.

Nirvanna the Band the Show is that 20-year chronicle—a seemingly never-ending autobiographical narrative, like Karl Ove Knausgård’s six-volume My Struggle—that details their daily, repetitive, and sometimes dangerous schemes to score a show at the Rivoli.

What began as a baby-faced web series in 2007 and eventually graduated into a Viceland sitcom in 2017 is now Nirvanna the Band the Show the Movie. It's tantamount to their lives’ work. The film is distributed by Neon, the recent Oscar darling that brought us The Secret Agent and Sentimental Value, and its release is second only to getting a gig at the Rivoli.

The movie is exceptionally funny, especially with an audience. Dopey gags, painful stunts, and mean-mugging abound as our protagonists accidentally transform their RV into a time machine, using Back to the Future as a blueprint to hatch a plan to, of course, get a show at the Rivoli.

Finding themselves in 2008, they’re confronted by achingly specific allusions from the first Obama administration. (Remember the original lyrics to Black Eyed Peas’ “Let’s Get It Started”?) So Matt and Jay must follow Doc Brown rules to get back to their future without changing their past.

Convinced by pop figureheads like Robert Zemeckis and shows like Entourage that the elemental tides of storytelling, fueled by farce and nostalgia, will allow them to accomplish all they put their minds to—which in this case is headlining a show at the Rivoli—Matt and Jay of Nirvanna the Band are slightly fictionalized versions of the real-life Matt and Jay. They’re two normal-ish elder millennials who have siphoned themselves through movie and TV tropes into timeless, innocent weirdos.

Thus, at the heart of Nirvanna the Band, meticulous parody and avoidance of copyright infringement rubs up against the bleak reality of urban life, creating a giddy friction between the bracing stupidity to which Matt and Jay devote themselves and the drudgery experienced by everyone else.

In fact, much of the anxious hilarity of the Nirvanna the Band the Show came from wondering, often aloud, what exactly was real and what was scripted. With cinematographers Jared Raab and Rich Williamson following Matt and Jay everywhere, Nirvanna the Band the Show the Movie also wavers, like its episodic predecessor, between faux documentary and hidden camera prank extravaganza.

But rather than exploiting cringe humor or just messing with unsuspecting normies, Matt and Jay discover a kind of freedom to be themselves while indulging their deeply cinematic impulses.

Once furnished with a studio budget, these impulses lead to some of the most dumbfounding scenes I’ve had the good fortune of witnessing in a theater. Yet, beneath the shock and awe, the Movie is the careful craft of dedicated artists. When Matt and Jay encounter their 2008 selves, the project doesn’t rely on de-aging CGI, but hundreds of hours of work from editors Robert Upchurch and Curt Lobb, who picked through old footage from the show’s initial web run. Behind Jay and Matt’s mania is a team quietly dedicated to unearthing miraculous material.

Similarly, outside of the thinly veiled facsimile of his life in Nirvanna the Band, Johnson has been a prolific filmmaker. After cutting his teeth on Nirvanna the Band the Show, he followed up his feature debut The Dirties (2013) with Operation Avalanche (2016), a comedy thriller about faking the Moon landing in 1969, which involved Johnson’s crew bluffing their way into NASA offices to surreptitiously film whole scenes on a shoestring budget.

In 2023, Johnson made BlackBerry, a partly fabricated “true story” of the founding of the titular company, featuring a squealing, baldpated Glenn Howerton. The next year, Johnson starred in Kazik Radwanski’s Matt and Mara, in which he played an over-talkative guy named Matt, likely riffing on himself as much as he is in Nirvanna the Band the Show the Movie. Instead of inhabiting a character, he piles on layers of pop cultural patinas.

Whether it’s the aforementioned skydiving incident or sneaking onto a crime scene outside of Drake’s mansion, the hint of reality in the movie’s every moment is more than enough to sustain the spectacle. Even now, I can feel my stomach churn knowing that when Matt is standing on top of the CN Tower, trilby staying on his head by sheer will, there is more of a chance than not that Johnson actually did that.

Laughing, sometimes, is just what happens when your brain doesn’t know what else to do with the information being presented. Nirvanna the Band the Show the Movie is a tribute to the unbelievable shit you can pull off with your best friend and a few professional-looking cameras.

Do Matt and Jay make it back to the present? They have to, because Nirvanna the Band must go on, and more importantly, must play at the Rivoli. Someday. They have no other choice. For the real-life Johnson and McCarrol, this is Nirvanna the Band the Show the Existence, the stuff of crazed movie magic.

Nirvanna the Band the Show the Movie opens in wide release on Friday, February 13.

The Legislature Is Ready to Tax The Rich [The Stranger]

The millionaire's tax seems good, but is it great? by Nathalie Graham

Who will weep for the millionaires? 

A 9.9 percent tax on annual earnings upward of $1 million could become reality in Washington state. A Senate bill is up for a full vote as soon as next week, and its companion in the House is still in committee. If either bill reaches Gov. Bob Ferguson’s desk with enough tax relief for small businesses and low-income households, he’s likely to sign it. With his pen stroke, we’ll join the ranks of futuristic societies such as New Jersey and Minnesota that have achieved the impossible: taxing income, perhaps fairly.

This is a good thing. The state needs new revenue. Over the next two years, we’ll have a $2 billion budget deficit. Across the next four years, we’re seeing double that. If the Leg hadn’t passed the biggest tax increase in state history and cut spending (even in areas they really shouldn’t have, like behavioral health, higher education, and healthcare) last session, we’d be dealing with a $16 billion deficit. When people complain that our ass tax code is the second most regressive in the nation behind Florida (a state that cannot decide if disease is bad), this is why that matters. We don’t have enough money for the basics and are still in a hole.

The millionaire’s tax will help dig us out. Democrats project it to siphon $3.5 billion from Scrooge McDuck-esque pools of money each year. The state will use that cashflow to expand the eligibility criteria for a low-income tax credit (basically, a sales tax rebate), fund public defenders (we’ve got a shortage), and add extra padding to the scrawny general fund. It also cuts taxes on hygiene products, much to the amusement of esteemed doofus Danny Westneat. But is it enough?

With President Trump picking our state pocketbook, maybe not. His Big Beautiful Bill, HR-1, will cut at least $3 billion in federal funds annually starting next year ($3.5 billion - $3 billion = goddamn it). And the millionaire’s tax won’t begin to offset that until 2029, the year we see our first payments.

The millionaire’s tax also comes with sweet savings for the Business Community—a juicy $600 million we’d otherwise collect. Similar to the Seattle Shield Tax passed last year, the millionaire’s tax would exempt thousands of small businesses from paying state Business & Occupation (B&O) taxes. To appease big business, the bill ends a B&O surcharge a year earlier than planned, unspooling work the Legislature did last session. A lot of people are not happy about this. But, we might not have been able to get a millionaire’s tax without this penance. 

Sen. Jamie Pedersen (D-Seattle), sponsor of the Senate bill, says the millionaire’s tax is our only option.

“It’s the proposal in front of us that we actually can have the votes to get out of the legislature,” he says. Plus, it’s broadly supported by 60 percent of voters, according to public opinion pollsters GBAO and DHM Research (which conducted polling for The Stranger last year). Their enthusiasm could make all the difference when the referendum enthusiasts, hedge fund millionaire Brian Heywood and anti-tax goblin Tim Eyman, come begging for signatures.

The people aren’t as jazzed about the business-friendly cuts. Pedersen says they’re “necessary” to dissuade big business from going to war with the bill. Last session, the People for an Affordable Washington PAC, which was made up of businesses, not people, accumulated nearly $2.7 million to warp public opinion on new taxes. Behind the scenes, the businesses were “girding for battle” last session and preparing “to take the legislature’s tax increases to the ballot.” As he put it, “some deal” stopped them from doing that.

For years, Pedersen has sponsored or co-sponsored bills for progressive taxation—solutions that give Washington a revenue source other than sales taxes, business taxes, or property taxes. Pedersen has attempted to get some version of an income tax passed for years. The partial capital gains tax, which passed in 2021, was the closest he’s gotten. Last year, he sponsored bills for a wealth tax and a payroll tax, and they died tragically from a lack of votes. It’ll be the same fate for other progressive revenue taxes proposed this year, Pedersen says, pointing to Rep. Shaun Scott’s proposed state payroll tax, the Well Washington Fund. Pedersen, who shares a district with Scott, did not support the bill, modeled after Seattle’s JumpStart tax, because “it’s politically not feasible in the legislature” right now.

Scott, who must have wronged the Pedersens in a past life, says that’s “too bad, because people of our district deserve leaders who believe in collaboration.” He signed onto the House version of Pedersen’s bill, even though he didn’t like the B&O cuts, and he saw other shortcomings.

“Senator Pedersen’s income tax bill would barely skim the reductions we’ll be seeing to K–12 schools, housing, healthcare, and higher education,” says Scott. 

He says the Well Washington Fund would prevent cuts to those essential services while making big companies pay their fair share, and would begin to fill the budget holes in 2027, two years before the millionaire’s tax kicks in.

“If Senate Majority Leader Pedersen had chosen to engage in the discussion around [the payroll tax bill] he would know that the Employment Security Department (ESD) has confirmed that my payroll tax could start being assessed by 2027, perhaps as soon as late 2026,” Scott wrote in an email. 

“That is not true,” says Pedersen. “He either has not interacted with the agency or he’s not representing that correctly.”

Except it is true. The ESD confirmed Scott’s payroll tax could be up and running by August 2027.

As it stands now, the payroll tax is stalling out, but as Eli Goss from the Budget & Policy Center says, bills aren’t truly dead until the session is over. The millionaire’s tax, meanwhile, is moving to the full Senate next week. The millionaires are unhappy. The Republicans are bristling. Tim Eyman keeps sending emails with AI images of mud-splattered pooches to represent the “Godless dirty dog Democrats.”

Pedersen isn’t counting it as a win yet. Not until it passes the legislature, survives the inevitable referendum campaign and Supreme Court challenge. “Those are the three big gates to pass through, before we can actually start to make a significant change in the second worst tax system in the country,” he says. 

Thursday, 12 February

21:14

Ticket Alert: Zayn, Baby Keem, and More Seattle Events Going On Sale This Week [The Stranger]

Plus, the Black Keys and More Event Updates for February 12
by EverOut Staff

Get ready to *add to cart*. R&B-influenced pop star Zayn embarks on his first solo headline arena tour this year, arriving in Seattle this September. Rapper and producer Baby Keem will support his sophomore album, Casino, on tour. Plus, blues rock duo The Black Keys support their forthcoming album, Peaches!, with two nights at Remlinger Farms. Read on for details on those and other newly announced events, plus some news you can use.

Tickets go on sale at 10 am unless otherwise noted.

ON SALE FRIDAY, FEBRUARY 13

MUSIC

Baby Keem: The Ca$ino Tour
WaMu Theater (Wed May 13)
On sale at noon

Bilmuri - Kinda Hard Tour
Paramount Theatre (Wed Sept 30)

The Black Keys: Peaches 'n Kream
Remlinger Farms (May 29–30)

19:42

The Live and Let Dry at Lady Jaye [The Stranger]

An Herbaceous Mocktail Made from Caffeinated Leaves, Iris Roots, and Almonds
by Meg van Huygen

Although I didn’t observe Dry January myself, I absolutely respect the game, just as I understand that the liver wants what it wants (and doesn’t what it doesn’t). I’m also always nosy about what drinks my friends order, even if they’re teetotal. So at West Seattle’s Lady Jaye last month, when my NA pal was raving about their tasty mango shrub, I did something I normally wouldn’t. I took a peek at the mocktails. Er, fauxtails. Foxtails. Whatever we’re calling them now.

One drink had three things I love and one I’d never even heard of. The Live and Let Dry consists of Three Spirit Livener, Lyre’s Amaretti, lime, ginger, and Casamara Club Alta. I’m an old friend and lover of the Casamara pantheon, so that was enough by itself to make the sale, frankly. But what in the fuck is Three Spirit Livener? Plus ginger AND amaretti? “If you want, I can add gin to it,” bartender Nick consoled me as he put the drink together. “It’s really good both ways.” Tempting, but I had him hold off. Let’s taste it in its purest form first.

If you don’t know the Casamara Club line of botanical sodas, they’re all very leafy and horticultural-flavored, like drinking trees. These alcohol-free takes on amari-based cocktails come in six different flaves, and while there’s one that tastes exactly like the smell of the hand soap in my mom’s bathroom, the Alta soda is fuckin’ elite. Inspired by the Negroni cocktail, it’s made of chinotto—a bitter orange that grows along the Calabrian coast—as well as allspice berries, mandarin, lemon, clove, anise, juniper, and orris root, which is the root of the Dalmatian iris. Warm spice, sharp citrus. It’s like piney Christmas lemonade.

Then there’s a drop of virgin amaretto from UK-based Lyre’s, which styles their booze-free version as “amaretti,” after the cookie. It’s peachy-vanilla, toasty and nutty, kind of marzipan-tastical. Lyre’s is less sweet than the boozy stuff, too (and it’s lovely over ice cream, by the way).

Finally, the dark horse: the Livener by Three Spirit. Its base is the dried leaves of Ilex guayusa, a caffeinated tree that grows in the Amazon rainforest; a strong guayusa tea is served at South American convivencias (social gatherings) as a pick-me-up. To this, Three Spirit adds watermelon, pomegranate, and schisandra berries, which are distantly related to star anise. This elixir also packs some heat, from both ginger and cayenne, and astringent tartness from hibiscus extract. They add extra caffeine to it, too, and a little punch of apple cider vinegar. It’s super complex and bright and reminds me a little bit of those fruit teas from boba shops, but with chili and extra herbs, and without the big sugary slap in the face.

Lady Jaye owner Sara Rosales says she was eager to work with this stuff specifically for its mood-enhancing ingredients. “In theory,” she says, “this cocktail actually makes you feel revived!” Damn, it’s like the opposite of a cocktail—in more ways than one.

The effect is at once delicious and fascinating. In the end, I drank up the whole glass before I remembered to ask Nick to add gin, then came back later to try the ginned-up version. Both ways are fabulous, of course. I’ll try it with bourbon next time.

18:56

How can I distinguish between the numeric keypad 0 and the top-row 0 in the WM_KEYDOWN message? [The Old New Thing]

A customer wanted to know how to distinguish between the numeric keypad 0 and the top-row 0 in the WM_KEYDOWN message. And while we’re at it, let’s also distinguish between the numeric keypad 0 and the Ins key.

We start with this table of what you get in the WM_KEYDOWN message when you press the numeric keypad 0.

Event                     | wParam
Numpad0 with NumLock on   | VK_NUMPAD0
Numpad0 with NumLock off  | VK_INSERT

Okay, so when the wParam is VK_NUMPAD0, it seems pretty clear that we have the numeric keypad 0. But when it is VK_INSERT, we aren’t sure whether it’s the numeric keypad 0 with NumLock off, or whether it’s the dedicated Ins key.

For that, we can look at the lParam, specifically, bit 24, which is documented as the “extended key” bit.

Rewind the clock to 1983. The IBM PC XT keyboard is introduced. To the left of the main keyboard is a set of numbered function keys, and to the right is a numeric keypad. But the keys on the numeric keypad do double-duty because arrows and other editing keys are overlaid onto them.

7 Home    8 ↑       9 PgUp
4 ←       5         6 →
1 End     2 ↓       3 PgDn
0 Ins     . Del

(The * PrtSc key sat alongside the keypad, next to the shift key.)

You select whether you want numbers or arrows/editing keys by toggling NumLock.

The IBM PS/2 keyboard expanded the set of keys on the keyboard by inserting a block of keys between the main keyboard and the numeric keypad. This block contains the arrow keys and the editing keys. This keyboard layout closely resembles the keyboard layout used by most keyboards today, so I guess it held up okay.

For compatibility, the bonus keys on the keyboard reported themselves to be the same as the numeric keypad keys they shadowed, but with an extra flag byte to say that they are “extended” keys. They’re “extended” because they weren’t in the original keyboard.

This “extended” terminology has carried forward ever since. So we can distinguish between the dedicated Ins key and a numeric keypad 0 with NumLock off by seeing if we got an extended key. If so, then it came from the editing keys; if not, then it came from the numeric keypad.

Event                     | wParam      | Extended?
Numpad0 with NumLock on   | VK_NUMPAD0  | 0
Numpad0 with NumLock off  | VK_INSERT   | 0
Ins key                   | VK_INSERT   | 1
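In code, the check described above is two tests on the message parameters: look at the virtual-key code in wParam, and when it is VK_INSERT, look at bit 24 of lParam. A minimal sketch, with ClassifyZeroKey as a hypothetical helper (the VK_ values are copied from <windows.h> so the snippet stands alone):

```c
/* Virtual-key codes, reproduced from <windows.h> so this
   sketch is self-contained. */
#define VK_INSERT  0x2D
#define VK_NUMPAD0 0x60

/* Classify a WM_KEYDOWN whose key might be a zero-ish key.
   wParam carries the virtual-key code; lParam bit 24 is the
   "extended key" flag described in the table above. */
const char *ClassifyZeroKey(unsigned int wParam, unsigned long lParam)
{
    if (wParam == VK_NUMPAD0) {
        return "Numpad0 (NumLock on)";
    }
    if (wParam == VK_INSERT) {
        if ((lParam >> 24) & 1) {
            return "dedicated Ins key";   /* extended: editing block */
        }
        return "Numpad0 (NumLock off)";   /* not extended: keypad */
    }
    return "something else";
}
```

In a real window procedure you would run the same test inside the WM_KEYDOWN case; winuser.h also offers the equivalent spelling `(HIWORD(lParam) & KF_EXTENDED) != 0` for the bit-24 check.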

Next time, we’ll look at distinguishing the numeric keypad 0 from the top-row 0 in the WM_CHAR message. It’s a little messier.

Bonus chatter: That PrtSc key was a major source of frustration because it sat right next to the shift key. If your finger was slightly misaligned and hit both the shift key and the PrtSc key, you accidentally asked for the screen contents to be sent to the printer. Your computer just hung until you turned on your printer so you could get a printout that you didn’t want. (And if you didn’t have a printer, you were just dead.)

The post How can I distinguish between the numeric keypad 0 and the top-row 0 in the WM_KEYDOWN message? appeared first on The Old New Thing.

Feeds

Feed | RSS | Last fetched | Next fetched after
@ASmartBear XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
a bag of four grapes XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
Ansible XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
Bad Science XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
Black Doggerel XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
Blog - Official site of Stephen Fry XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
Charlie Brooker | The Guardian XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
Charlie's Diary XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
Chasing the Sunset - Comics Only XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
Coding Horror XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
Cory Doctorow's craphound.com XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
Cory Doctorow, Author at Boing Boing XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
Ctrl+Alt+Del Comic XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
Cyberunions XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
David Mitchell | The Guardian XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
Deeplinks XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
Diesel Sweeties webcomic by rstevens XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
Dilbert XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
Dork Tower XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
Economics from the Top Down XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
Edmund Finney's Quest to Find the Meaning of Life XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
EFF Action Center XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
Enspiral Tales - Medium XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
Events XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
Falkvinge on Liberty XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
Flipside XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
Flipside XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
Free software jobs XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
Full Frontal Nerdity by Aaron Williams XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
General Protection Fault: Comic Updates XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
George Monbiot XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
Girl Genius XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
Groklaw XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
Grrl Power XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
Hackney Anarchist Group XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
Hackney Solidarity Network XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
http://blog.llvm.org/feeds/posts/default XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
http://calendar.google.com/calendar/feeds/q7s5o02sj8hcam52hutbcofoo4%40group.calendar.google.com/public/basic XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
http://dynamic.boingboing.net/cgi-bin/mt/mt-cp.cgi?__mode=feed&_type=posts&blog_id=1&id=1 XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
http://eng.anarchoblogs.org/feed/atom/ XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
http://feed43.com/3874015735218037.xml XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
http://flatearthnews.net/flatearthnews.net/blogfeed XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
http://fulltextrssfeed.com/ XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
http://london.indymedia.org/articles.rss XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
http://pipes.yahoo.com/pipes/pipe.run?_id=ad0530218c055aa302f7e0e84d5d6515&amp;_render=rss XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
http://planet.gridpp.ac.uk/atom.xml XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
http://shirky.com/weblog/feed/atom/ XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
http://thecommune.co.uk/feed/ XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
http://theness.com/roguesgallery/feed/ XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
http://www.airshipentertainment.com/buck/buckcomic/buck.rss XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
http://www.airshipentertainment.com/growf/growfcomic/growf.rss XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
http://www.airshipentertainment.com/myth/mythcomic/myth.rss XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
http://www.baen.com/baenebooks XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
http://www.feedsapi.com/makefulltextfeed.php?url=http%3A%2F%2Fwww.somethingpositive.net%2Fsp.xml&what=auto&key=&max=7&links=preserve&exc=&privacy=I+accept XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
http://www.godhatesastronauts.com/feed/ XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
http://www.tinycat.co.uk/feed/ XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
https://anarchism.pageabode.com/blogs/anarcho/feed/ XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
https://broodhollow.krisstraub.comfeed/ XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
https://debian-administration.org/atom.xml XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
https://elitetheatre.org/ XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
https://feeds.feedburner.com/Starslip XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
https://feeds2.feedburner.com/GeekEtiquette?format=xml XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
https://hackbloc.org/rss.xml XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
https://kajafoglio.livejournal.com/data/atom/ XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
https://philfoglio.livejournal.com/data/atom/ XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
https://pixietrixcomix.com/eerie-cutiescomic.rss XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
https://pixietrixcomix.com/menage-a-3/comic.rss XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
https://propertyistheft.wordpress.com/feed/ XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
https://requiem.seraph-inn.com/updates.rss XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
https://studiofoglio.livejournal.com/data/atom/ XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
https://thecommandline.net/feed/ XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
https://torrentfreak.com/subscriptions/ XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
https://web.randi.org/?format=feed&type=rss XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
https://www.dcscience.net/feed/medium.co XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
https://www.DropCatch.com/domain/steampunkmagazine.com XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
https://www.DropCatch.com/domain/ubuntuweblogs.org XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
https://www.DropCatch.com/redirect/?domain=DyingAlone.net XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
https://www.freedompress.org.uk:443/news/feed/ XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
https://www.goblinscomic.com/category/comics/feed/ XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
https://www.loomio.com/blog/feed/ XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
https://www.newstatesman.com/feeds/blogs/laurie-penny.rss XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
https://www.patreon.com/graveyardgreg/posts/comic.rss XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
https://www.rightmove.co.uk/rss/property-for-sale/find.html?locationIdentifier=REGION^876&maxPrice=240000&minBedrooms=2&displayPropertyType=houses&oldDisplayPropertyType=houses&primaryDisplayPropertyType=houses&oldPrimaryDisplayPropertyType=houses&numberOfPropertiesPerPage=24 XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
https://x.com/statuses/user_timeline/22724360.rss XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
Humble Bundle Blog XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
I, Cringely XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
Irregular Webcomic! XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
Joel on Software XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
Judith Proctor's Journal XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
Krebs on Security XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
Lambda the Ultimate - Programming Languages Weblog XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
Looking For Group XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
LWN.net XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
Mimi and Eunice XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
Neil Gaiman's Journal XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
Nina Paley XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
O Abnormal – Scifi/Fantasy Artist XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
Oglaf! -- Comics. Often dirty. XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
Oh Joy Sex Toy XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
Order of the Stick XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
Original Fiction Archives - Reactor XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
OSnews XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
Paul Graham: Unofficial RSS Feed XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
Penny Arcade XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
Penny Red XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
PHD Comics XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
Phil's blog XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
Planet Debian XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
Planet GNU XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
Planet Lisp XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
Pluralistic: Daily links from Cory Doctorow XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
PS238 by Aaron Williams XML 20:21, Wednesday, 18 February 21:09, Wednesday, 18 February
QC RSS XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
Radar XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
RevK®'s ramblings XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
Richard Stallman's Political Notes XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
Scenes From A Multiverse XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
Schneier on Security XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
SCHNEWS.ORG.UK XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
Scripting News XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
Seth's Blog XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
Skin Horse XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
Spinnerette XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
Tales From the Riverbank XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
The Adventures of Dr. McNinja XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
The Bumpycat sat on the mat XML 20:49, Wednesday, 18 February 21:29, Wednesday, 18 February
The Daily WTF XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
The Monochrome Mob XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
The Non-Adventures of Wonderella XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
The Old New Thing XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
The Open Source Grid Engine Blog XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
The Stranger XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
towerhamletsalarm XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
Twokinds XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
UK Indymedia Features XML 20:28, Wednesday, 18 February 21:10, Wednesday, 18 February
Uploads from ne11y XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
Uploads from piasladic XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February
Use Sword on Monster XML 20:21, Wednesday, 18 February 21:08, Wednesday, 18 February
Wayward Sons: Legends - Sci-Fi Full Page Webcomic - Updates Daily XML 20:28, Wednesday, 18 February 21:14, Wednesday, 18 February
what if? XML 20:49, Wednesday, 18 February 21:30, Wednesday, 18 February
Whatever XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
Whitechapel Anarchist Group XML 20:21, Wednesday, 18 February 21:10, Wednesday, 18 February
WIL WHEATON dot NET XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
wish XML 21:07, Wednesday, 18 February 21:52, Wednesday, 18 February
Writing the Bright Fantastic XML 21:07, Wednesday, 18 February 21:51, Wednesday, 18 February
xkcd.com XML 21:07, Wednesday, 18 February 21:50, Wednesday, 18 February