
Why is there a long delay between a thread exiting and the WaitForSingleObject returning? [The Old New Thing]
A customer reported that they were using the
WaitForSingleObject function to wait
for a thread to exit, but they found that even though the thread
had exited, the WaitForSingleObject
call did not return for over a minute. What could explain this
delay in reporting the end of a thread? Can we do something to
speed it up?
My psychic powers tell me that the thread didn’t actually exit.
What the customer is observing is probably that their thread
procedure has returned, signaling the end of the thread. But a lot
of stuff happens after the thread procedure exits. The system needs
to send DLL_THREAD_DETACH notifications to all of the
DLLs (unless the DLL has opted out via
DisableThreadLibraryCalls), and
doing so requires the loader lock.
I would use the debugger to look for the thread you thought had exited and see what it’s doing. It might be blocked waiting for the loader lock because some other thread is hogging it. Or it could be running a DLL’s detach code, and that detach code has gotten stuck on a long-running operation.
I suspect it’s the latter: One of the DLLs is waiting for something in its detach code, and that something takes about a minute.
We didn’t hear back from the customer, which could mean that this was indeed the problem. Or it could mean that this didn’t help, but they decided that we weren’t being helpful and didn’t pursue the matter further. Unfortunately, in a lot of these customer debugging engagements, we never hear back whether our theory worked. (Another possibility is that the customer wrote back with a “thank you”, but the customer liaison didn’t forward it to the engineering team because they didn’t want to bother them any further.)
The post Why is there a long delay between a thread exiting and the WaitForSingleObject returning? appeared first on The Old New Thing.
FSF clarifies its stance on AGPLv3 additional terms [LWN.net]
OnlyOffice CEO Lev Bannov has recently claimed that the Euro-Office fork of the OnlyOffice suite violates the GNU Affero General Public License version 3 (AGPLv3). Krzysztof Siewicz of the Free Software Foundation (FSF) has published an article on the FSF's position on adding terms to the AGPLv3. In short, Siewicz concludes that OnlyOffice has added restrictions to the license that are not compatible with the AGPLv3, and those restrictions can be removed by recipients of the code.
We urge OnlyOffice to clarify the situation by making it unambiguous that OnlyOffice is licensed under the AGPLv3, and that users who already received copies of the software are allowed to remove any further restrictions. Additionally, if they intend to continue to use the AGPLv3 for future releases, they should state clearly that the program is licensed under the AGPLv3 and make sure they remove any further restrictions from their program documentation and source code. Confusing users by attaching further restrictions to any of the FSF's family of GNU General Public Licenses is not in line with free software.
Preparatory School [Penny Arcade]
I have now gotten him to talk about it twice when he wanted to talk about his colon zero or even fewer times. This accrues not to my capacity for manipulation, but rather to the fact that he has been so harrowed body and mind by the experience he is now essentially just a sausage casing with a t-shirt draped over it. I was able to extract more data in my most recent interrogation and what I learned will shock you.
You cannot use the GNU (A)GPL to take software freedom away [Planet GNU]
Protecting the integrity of the (A)GPL is an essential component in protecting user freedom.
The Big Idea: A.Z. Rozkillis [Whatever]

When there’s a million and one paths in front of you, how do you know which decision to make? What if you don’t even have control over which one you end up on? Author A. Z. Rozkillis explores the idea of every decision we make, or don’t make, sending us on different paths throughout multiple realities. Journey on through the Big Idea for her newest novel, Fractal Terminus.
A. Z. ROZKILLIS:
In an infinite universe there are infinite possibilities. It’s a concept that has enamored me for decades, has led me into a career focused on space exploration and has fueled my endless love of science fiction. And that is probably why it is the Big Idea behind Fractal Terminus.
When I intentionally ended my first book, Space Station X, on a cliffhanger, I never truly intended to write a sequel. I liked the idea of leaving the speculation up to the reader about what could possibly happen after an event like that. More to the point, I didn’t think I deserved to be the person to establish, canonically, what the future would hold for my main characters. But nature abhors a vacuum, and the same could be said for the space between my ears. So, I figured if I didn’t want to write one follow-on outcome, and if I preferred the idea that any possibility could be canon, then why don’t I write a book where I do just that? Where I explore numerous possible outcomes from one, singularly massive event.
Fractal Terminus really digs down into the idea that with every flip of a coin, with every path chosen, with every outcome realized, there exists a separate universe (or infinite separate universes) in which an alternate outcome could occur. I know it’s not a new idea, it’s just one I have felt, personally, immensely drawn to. The universe is so unfathomably endless that there is no way for us to truly understand how vast it is. I feel that it is entirely plausible that somewhere, at the far reaches, there exists a reality in which I chose to study animal husbandry and not aerospace engineering. Or maybe I decided to eat that questionable leftover sushi rather than pitching it when I found it at the back of the fridge. Who knows? If the universe has no limit, then maybe every single possible reality is just wrapped around us.
For my characters, their personal universe is expanding too. My first book had a very narrow focus by design, because I had a main character who had reduced her whole universe down to the same five concentric metal rings of her space station. Jax refused to consider possibilities outside of that limited existence until she was forced to. Then she swallowed her pride and took the leap of faith on her feelings for Saunders. It could have gone either way, but canonically it worked out for Jax. Then they took a different plunge. Now Jax and Saunders are suddenly flung into a situation where they have to expand their view, because new experiences have that habit of broadening your perspective. This Space Station is no longer a cramped, desolate and lonely existence, but a cramped, desolate and overcrowded experience, where Jax has to dust off her social skills and mingle in order to survive. And as she lets her universe expand around her to include the souls locked in fate alongside her, infinitely more universe opportunities unfurl.
Some of these are fates she realizes she can control. She can see where her actions can lead her and Saunders and she can tell when it might not be the best path. But more often than not, Jax and Saunders are at the mercy of the universe itself. Nature is a cold and uncaring master, and sometimes the coin flip is not even remotely something anyone can control.
We face these moments every day. Will this person I am talking to be an ally? Will they be my demise? Will I regret this interaction or not? Is there, even remotely, anything I could have done to change this outcome? There isn’t really a way for anyone to know, so you might as well take the chance. As the universe is expanding rapidly on a macro scale, we are, all of us, every day, making small decisions that expand our microcosm just as rapidly. Jax and Saunders expand their view on life to include the lives around them, while the universe expands to encompass every possible, even far-fetched idea of an outcome that could ever be considered. And that’s the big idea. The universe can send you toward an infinite number of outcomes, and you’ll never know which one you are in. So you are just going to have to take it on faith that you are on the right track.
Fractal Terminus: Barnes & Noble|Bookshop|Space Wizards
Paul Tagliamonte: designing arf, an sdr iq encoding format 🐶 [Planet Debian]

It’s true – processing data from software defined radios can be a bit complex 👈😏👈 – which tends to keep all but the most grizzled experts and bravest souls from playing with it. While I wouldn’t describe myself as either, I will say that I’ve stuck with it for longer than most would have expected of me. One of the biggest takeaways I have from my adventures with software defined radio is that there’s a lot of cool crossover opportunity between RF and nearly every other field of engineering.
Fairly early on, I decided on a very light metadata scheme to track SDR captures, called rfcap. rfcap has withstood my test of time, and I can go back to even my earliest captures and still make sense of what they are – IQ format, capture frequencies, sample rates, etc. A huge part of this was the simplicity of the scheme (fixed-length header, byte-aligned to supported capture formats), which made it roughly as easy to work with as a raw file of IQ samples.
However, rfcap has a number of downsides. It’s only a single, fixed-length header: if the frequency of operation changed during the capture, that change is not represented in the capture information. It’s not possible to easily represent multi-channel coherent IQ streams, and additional metadata is condemned to adjacent text files.
A few years ago, I needed to finally solve some of these shortcomings and tried to see if a new format would stick. I sat down and wrote out my design goals before I started figuring out what it looked like.
First, whatever I come up with must be capable of being streamed and processed while being streamed. This includes streaming across the network, or merely being written to disk as it’s being created. No post-processing required. This is mostly an artifact of how I’ve built all my tools and how I interact with my SDRs. I use them extensively over the network (both locally, as well as remotely by friends across my wider LAN). This decision sometimes even prompts me to do some crazy things from time to time.
I need actual, real support for multiple IQ channels from my multi-channel SDRs (Ettus, KerberosSDR/KrakenSDR, etc.) for playing with things like beamforming. My new format must be capable of storing multiple streams in a single capture file, rather than a pile of files in a directory (and hoping they’re aligned).
Finally, metadata must be capable of being stored in-band. The
initial set of metadata I needed to formalize in-stream were
Frequency Changes and Discontinuities.
Since then, ARF has grown a few more.
After getting all that down, I opted to start at what I thought the simplest container would look like, TLV (tag-length-value) encoded packets. This is a fairly well trodden path, and used by a bunch of existing protocols we all know and love. Each ARF file (or stream) was a set of encoded “packets” (sometimes called data units in other specs). This means that unknown packet types may be skipped (since the length is included) and additional data can be added after the existing fields without breaking existing decoders.
Unlike a “traditional” TLV structure, I opted to add “flags” to the top-level packet. This gives me a bit of wiggle room down the line, and gives me a feature that I like from ASN.1 – a “critical” bit. The critical bit indicates that the packet must be understood fully by implementers, which allows future backward incompatible changes by marking a new packet type as critical. This would only really be done if something meaningfully changed the interpretation of the backwards compatible data to follow.
| Flag | Description |
| 0x01 | Critical (tag must be understood) |
Within each Packet is a tag field. This tag
indicates how the contents of the value field should
be interpreted.
| Tag ID | Description |
| 0x01 | Header |
| 0x02 | Stream Header |
| 0x03 | Samples |
| 0x04 | Frequency Change |
| 0x05 | Timing |
| 0x06 | Discontinuity |
| 0x07 | Location |
| 0xFE | Vendor Extension |
In order to help with checking the basic parsing and encoding of this format, the following is an example packet which should parse without error.
00, // tag (0; no subpacket is 0 yet)
00, // flags (0; no flags)
00, 00 // length (0; no data)
// data would go here, but there is none
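To make the framing concrete, here is a minimal Python sketch of a decoder for this container. The field widths (1-byte tag, 1-byte flags, 2-byte length) and the big-endian length are my assumptions, inferred from the example bytes above rather than quoted from the draft:

```python
import struct

# Tag IDs from the table above; 0xFE is the Vendor Extension.
KNOWN_TAGS = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0xFE}
CRITICAL = 0x01  # flag bit: tag must be understood

def parse_packets(buf):
    """Yield (tag, flags, value) tuples from a buffer of TLV packets.

    Assumes: 1-byte tag, 1-byte flags, 2-byte big-endian length,
    then `length` bytes of value.
    """
    offset = 0
    while offset < len(buf):
        tag, flags, length = struct.unpack_from(">BBH", buf, offset)
        offset += 4
        value = buf[offset:offset + length]
        offset += length
        if flags & CRITICAL and tag not in KNOWN_TAGS:
            # A critical packet we don't understand means we must bail.
            raise ValueError(f"unknown critical tag {tag:#04x}")
        yield tag, flags, value
```

A caller is free to skip any yielded packet with an unknown, non-critical tag, which is exactly what gives the format its forward compatibility.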
Additionally, throughout the rest of the subpackets, there are a few unique and shared datatypes. I document them all more clearly in the draft, but to quickly run through them here too:
This field represents a globally unique identifier, as defined by RFC 9562, stored as 16 raw bytes.
Data encoded in a Frequency field is stored as microhz (1 Hz is
stored as 1000000, 2 Hz is stored as 2000000) as an unsigned 64 bit
integer. This has a minimum value of 0 Hz, and a maximum value of
18446744073709551615 uHz, or just above 18.4 THz. This is a bit of
a tradeoff, but it’s a set of issues that I would gladly
contend with rather than deal with the related issues with storing
frequency data as a floating point value downstream. Not a huge
factor, but as an aside, this is also how my current generation SDR
processing code (sparky) stores Frequency data
internally, which makes conversion between the two natural.
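A quick Python sketch of that fixed-point convention; the helper names are mine, but the scaling simply restates the microhertz encoding described above:

```python
def hz_to_uhz(hz):
    """Encode a frequency in Hz as a u64 count of microhertz."""
    uhz = round(hz * 1_000_000)
    if not 0 <= uhz < 2**64:
        raise ValueError("frequency does not fit in u64 microhertz")
    return uhz

def uhz_to_hz(uhz):
    """Decode a u64 microhertz value back to Hz (as a float)."""
    return uhz / 1_000_000

# e.g. 100 MHz encodes to 0x5AF3107A4000, the value that appears in
# the stream header example later in the post.
```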
ARF supports IQ samples in a number of different formats. Part of the idea here is that I want it to be easy for capturing programs to encode ARF for a specific radio without mandating a single IQ format representation. For IQ types with a scalar value that takes more than a single byte, the format is always paired with a Byte Order field, to indicate whether the IQ scalar values are little or big endian.
| ID | Name | Description |
| 0x01 | f32 | interleaved 32 bit floating point scalar values |
| 0x02 | i8 | interleaved 8 bit signed integer scalar values |
| 0x03 | i16 | interleaved 16 bit signed integer scalar values |
| 0x04 | u8 | interleaved 8 bit unsigned integer scalar values |
| 0x05 | f64 | interleaved 64 bit floating point scalar values |
| 0x06 | f16 | interleaved 16 bit floating point scalar values |
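As a hedged sketch of how the table might be consumed, the following Python maps these IDs onto struct scalar codes and deinterleaves a buffer into complex samples. The ID-to-code mapping is my own reading of the table, and the u8 case is decoded raw, without the DC offset correction some radios would want:

```python
import struct

# The ARF format IDs from the table above, mapped to struct scalar codes.
FORMAT_CODES = {
    0x01: "f",  # f32
    0x02: "b",  # i8
    0x03: "h",  # i16
    0x04: "B",  # u8 (decoded raw; many radios center these around 127.5)
    0x05: "d",  # f64
    0x06: "e",  # f16
}

def decode_iq(data, format_id, little_endian=True):
    """Deinterleave I/Q scalar pairs into a list of complex samples."""
    code = FORMAT_CODES[format_id]
    endian = "<" if little_endian else ">"
    count = len(data) // struct.calcsize(code)
    scalars = struct.unpack(f"{endian}{count}{code}", data)
    return [complex(i, q) for i, q in zip(scalars[0::2], scalars[1::2])]
```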
Each ARF file must start with a specific Header packet. The header contains information about the ARF stream writ large to follow. Header packets are always marked as “critical”.
In order to help with checking the basic parsing and encoding of this format, the following is an example header subpacket (when encoded or decoded this will be found inside an ARF packet as described above) which should parse without error, with known values.
00, 00, 00, fa, de, dc, ab, 1e, // magic
00, 00, 00, 00, 00, 00, 00, 00, // flags
18, 27, a6, c0, b5, 3b, 06, 07, // start time (1740543127)
// guid (fb47f2f0-957f-4545-94b3-75bc4018dd4b)
fb, 47, f2, f0, 95, 7f, 45, 45,
94, b3, 75, bc, 40, 18, dd, 4b,
// site_id (ba07c5ce-352b-4b20-a8ac-782628e805ca)
ba, 07, c5, ce, 35, 2b, 4b, 20,
a8, ac, 78, 26, 28, e8, 05, ca
Immediately after the arf Header,
some number of Stream Headers follow. There must be exactly the
same number of Stream Header packets as are indicated by the
num streams field of the Header. This has the nice
effect of enabling clients to read all the stream headers without
requiring buffering of “unread” packets from the
stream.
In order to help with checking the basic parsing and encoding of this format, the following is an example stream header subpacket (when encoded or decoded this will be found inside an ARF packet as described above) which should parse without error, with known values.
00, 01, // id (1)
00, 00, 00, 00, 00, 00, 00, 00, // flags
01, // format (float32)
01, // byte order (Little Endian)
00, 00, 01, d1, a9, 4a, 20, 00, // rate (2 MHz)
00, 00, 5a, f3, 10, 7a, 40, 00, // frequency (100 MHz)
// guid (7b98019d-694e-417a-8f18-167e2052be4d)
7b, 98, 01, 9d, 69, 4e, 41, 7a,
8f, 18, 16, 7e, 20, 52, be, 4d,
// site_id (98c98dc7-c3c6-47fe-bc05-05fb37b2e0db)
98, c9, 8d, c7, c3, c6, 47, fe,
bc, 05, 05, fb, 37, b2, e0, db,
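The stream header example above can be decoded with a short Python sketch. The field layout (u16 id, u64 flags, u8 format, u8 byte order, u64 rate in µHz, u64 frequency in µHz, two 16-byte GUIDs, all big-endian) is inferred from the example bytes and comments, as is byte order 0x01 meaning little endian:

```python
import struct
import uuid

def parse_stream_header(value):
    """Parse a Stream Header value into a dict (layout assumed, see above)."""
    (stream_id, flags, fmt, byte_order, rate_uhz, freq_uhz,
     guid, site_id) = struct.unpack(">HQBBQQ16s16s", value)
    return {
        "id": stream_id,
        "flags": flags,
        "format": fmt,
        "little_endian": byte_order == 0x01,
        "rate_hz": rate_uhz / 1_000_000,       # microhertz fixed point
        "frequency_hz": freq_uhz / 1_000_000,  # microhertz fixed point
        "guid": uuid.UUID(bytes=guid),
        "site_id": uuid.UUID(bytes=site_id),
    }
```

Feeding it the example bytes yields a 2 MHz rate and 100 MHz center frequency, matching the comments in the dump.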
A block of IQ samples in the format indicated by this stream’s format and byte_order fields, as sent in the related Stream Header.
In order to help with checking the basic parsing and encoding of this format, the following is a samples subpacket (when encoded or decoded this will be found inside an ARF packet as described above). The IQ values here are notional (and are either two 8 bit samples or one 16 bit sample, depending on what the related Stream Header was).
01, // id
ab, cd, ab, cd, // iq samples
The center frequency of the IQ stream has changed since the Stream Header or last Frequency Change has been sent. This is useful to capture IQ streams that are jumping around in frequency during the duration of the capture, rather than starting and stopping them.
In order to help with checking the basic parsing and encoding of this format, the following is a frequency change subpacket (when encoded or decoded this will be found inside an ARF packet as described above).
01, // id
00, 00, b5, e6, 20, f4, 80, 00 // frequency (200 MHz)
Since the last Samples packet for this stream, samples have been dropped or not encoded to this stream. This can be used for a stream that has dropped samples for some reason, a large gap (the radio was needed for something else), or communicating “IQ snippets”.
In order to help with checking the basic parsing and encoding of this format, the following is a discontinuity subpacket (when encoded or decoded this will be found inside an ARF packet as described above).
01, // id
Up-to-date location as of this moment of the IQ stream, usually from a GPS. This allows in-band geospatial information to be marked in the IQ stream. This can be used for all sorts of things (detected IQ packet snippets aligned with a time and location, or a survey of RF noise in an area).
The sys field indicates the Geodetic system to be
used for the provided latitude, longitude
and elevation fields. The full list of supported
geodetic systems is currently just WGS84, but in case something
meaningfully changes in the future, it’d be nice to migrate
forward.
Unfortunately, being a bit of a coward here, the accuracy field is a bit of a cop-out. I’d really rather it be what we see out of kinematic state estimation tools like a kalman filter, or at minimum, some sort of ellipsoid. This is neither of those - it’s a perfect sphere of error where we pick the largest error in any direction and use that. Truthfully, I can’t be bothered to model this accurately, and I don’t want to contort myself into half-assing something I know I will half-ass just because I know better.
| System | Description |
| 0x01 | WGS84 - World Geodetic System 1984 |
In order to help with checking the basic parsing and encoding of this format, the following is a location subpacket (when encoded or decoded this will be found inside an ARF packet as described above).
00, 00, 00, 00, 00, 00, 00, 00, // flags
01, // system (wgs84)
3f, f3, be, 76, c8, b4, 39, 58, // latitude (1.234)
40, 02, c2, 8f, 5c, 28, f5, c3, // longitude (2.345)
40, 59, 00, 00, 00, 00, 00, 00, // elevation (100)
40, 24, 00, 00, 00, 00, 00, 00 // accuracy (10)
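The location example above decodes with a small Python sketch. The layout (u64 flags, u8 geodetic system, then latitude, longitude, elevation, and accuracy as big-endian f64 values) is my inference from the example bytes, not quoted from the draft:

```python
import struct

def parse_location(value):
    """Parse a Location value into a dict (layout assumed, see above)."""
    flags, system, lat, lon, elev, acc = struct.unpack(">QB4d", value)
    return {"flags": flags, "system": system, "latitude": lat,
            "longitude": lon, "elevation": elev, "accuracy": acc}
```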
In addition to the fields I put in the spec, I expect that I may need custom packet types I can’t think of now. There’s all sorts of useful data that could be encoded into the stream, so I’d rather there be an officially sanctioned mechanism that allows future work on the spec without constraining myself.
Just as an example, I’ve used a custom subpacket to create test vectors: the data is encoded into a Vendor Extension, followed by the IQ for the modulated packet. If the demodulated data and the in-band original data don’t match, we’ve regressed. You could imagine in-band speech-to-text, antenna rotator azimuth information, or demodulated digital sideband data (like FM HDR data) too. Or things I can’t even think of!
In order to help with checking the basic parsing and encoding of this format, the following is a vendor extension subpacket (when encoded or decoded this will be found inside an ARF packet as described above).
// extension id (b24305f6-ff73-4b7a-ae99-7a6b37a5d5cd)
b2, 43, 05, f6, ff, 73, 4b, 7a,
ae, 99, 7a, 6b, 37, a5, d5, cd,
// data (0x01, 0x02, 0x03, 0x04, 0x05)
01, 02, 03, 04, 05
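Splitting a vendor extension like the one above apart is straightforward; this Python sketch assumes the value is simply a 16-byte extension id followed by opaque payload bytes, matching the example:

```python
import uuid

def parse_vendor_extension(value):
    """Split a Vendor Extension value into its extension GUID and payload."""
    if len(value) < 16:
        raise ValueError("vendor extension too short for a 16-byte id")
    return uuid.UUID(bytes=value[:16]), value[16:]
```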
The biggest tradeoff that I’m not entirely happy
with is limiting the length of a packet to u16 –
65535 bytes. Given the u8 sample header, this limits us to 8191 32
bit sample pairs at a time. I wound up believing that the overhead
in terms of additional packet framing is worth it – because
always encoding 4 byte lengths felt like overkill, and a dynamic
length scheme ballooned codepaths in the decoder that I was trying
to keep as easy to change as possible as I worked with the
format.
EFF Calls on Kuwait to Release Journalist Ahmed Shihab-Eldin [Deeplinks]
EFF calls on the Kuwaiti government to immediately release journalist Ahmed Shihab-Eldin. An award-winning journalist and television host who worked for Al Jazeera for many years, Shihab-Eldin—a dual American-Kuwaiti citizen—was arrested in Kuwait on March 3 while visiting family. The Committee to Protect Journalists (CPJ) reported yesterday that it is believed he has been charged with spreading false information, harming national security, and misusing his mobile phone.
According to the Guardian, Shihab-Eldin published footage of a U.S. Air Force F-15 E Strike Eagle crash, and posted to his Substack about the incident, noting that video circulating online showed local residents assisting the crash survivors.
Kuwait is one of several countries that has recently cracked down on reporting amidst the ongoing war. Kuwait’s Ministry of Interior posted on X on March 3—the same day Shihab-Eldin was arrested—warning people in the country “not to photograph or publish any clips or information related to missiles or relevant locations.” Earlier this month, the UN Office of the High Commissioner for Human Rights (OHCHR) highlighted a new decree in Kuwait banning the circulation of reports that seek to “undermine the prestige of the military” or erode public trust in it.
As reported by local media, the decree states that “those who intentionally publish statements or news or circulate false reports and rumors about military authorities resulting in weakening the trust in them and their morale, in addition to undermining their prestige, are punishable by three to 10 years in jail and a fine between KD 5,000 and 10,000.” The decree also imposes a penalty ranging from seven years to life imprisonment for “authorized people who cause financial loss or damage to the military authorities while carrying out a transaction, operation, project or case or obtaining any profit from such deals.”
In contrast to neighboring Gulf states, Kuwait has historically allowed the press to operate with relative freedom, and even introduced a law in 2020 protecting the right to access information. In practice, however, the government exercises considerable control over the media. Furthermore, there are several laws, including cybercrime legislation introduced in 2016, that restrict freedom of expression.
EFF is deeply concerned that Ahmed has not been seen nor heard from in nearly six weeks. We call on the government of Kuwait to immediately release Ahmed Shihab-Eldin.
Haiku on ARM64 boots to desktop in QEMU [OSnews]
Another Haiku monthly activity report, but this time around, there’s actually a big ticket item. Haiku has been in a pretty solid and stable state for a while now, so the activity reports have been dominated by fairly small, obscure changes, but during March a major milestone was reached for the ARM64 port.
smrobtzz contributed the bulk of the work, including fixes for building on macOS on ARM64, drivers for the Apple S5L UART, fixes to the kernel base address, clearing the frame pointer before entering the kernel, mapping physical memory correctly, the basics for userland, and more. SED4906 contributed some fixes to the bootloader page mapping, and runtime_loader’s page-size checks. Combined, these changes allow the ARM64 port to get to the desktop in QEMU. There’s a forum thread, complete with screenshots, for anyone interested in following along.
↫ waddlesplash
While it’s only in QEMU, this is still a major achievement and paves the way for more people to work on the ARM64 port, possibly increasing its health. There are tons of smaller changes and fixes all over the place, too, as usual, and the team mentions beta 6 still isn’t quite ready. Don’t let that stop you from just downloading the latest nightly, though – Haiku is mature enough to use.
Today the work with Claude is much better, though when we
got started it was even worse than yesterday. The key to getting on
track was to figure out why it worked so well in previous projects
and fell apart with this one. In each of the others, I passed off
an existing project for it to convert or build on. This time we
started with something it had created without a "starter." So I
took all the random bits we had and organized them into the opmlProjectEditor format
we had
specified back in early March. It's how all my projects since
2013 are organized, so it's a good fit for me, and also for Claude.
So now I'm going to pass back a package that's ready to be worked
on collaboratively. The other thing is I switched to the Opus 4.6
model from Sonnet 4.6. So I've made it to 11AM and feel like I've
already accomplished something today. The problem was yesterday we
were spinning our wheels, and that doesn't work for me. I'm a very
directed developer. ;-)
[$] Forking Vim to avoid LLM-generated code [LWN.net]
Many people dislike the proliferation of Large Language Models (LLMs) in recent years, and so make an understandable attempt to avoid them. That may not be possible in general, but there are two new forks of Vim that seek to provide an editing environment with no LLM-generated code. EVi focuses on being a modern Vim without LLM-assisted contributions, while Vim Classic focuses on providing a long-term maintenance version of Vim 8. While both are still in their early phases, the projects look to be on track to provide stable alternatives — as long as enough people are interested.
Dirk Eddelbuettel: qlcal 0.1.1 on CRAN: Calendar Updates [Planet Debian]

The nineteenth release of the qlcal package arrived at CRAN just now, and has already been built for r2u. This version synchronises with QuantLib 1.42 released this week.
qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists), and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.
This release updates the 2025 holidays for China, Singapore, and Taiwan.
The full details from NEWS.Rd follow.
Changes in version 0.1.1 (2026-04-15)
Synchronized with QuantLib 1.42 released two days ago
Calendar updates for China, Singapore, Taiwan
Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
Fixing a 20-year-old bug in Enlightenment E16 [OSnews]
The editor in chief of this blog was born in 2004. She uses the 1997 window manager, Enlightenment E16, daily. In this article, I describe the process of fixing a show-stopping, rare bug that dates back to 2006 in the codebase. Surprisingly, the issue has roots in a faulty implementation of Newton’s algorithm.
↫ Kamila Szewczyk
I’m not going to pretend to understand any of this, but I know you people do. Enjoy.
Let sleeping CPUs lie — S0ix [OSnews]
Modern laptops promise a kind of magic. Shut the lid or press the sleep button, toss it in a backpack, and hours, days, or weeks later, it should wake up as if nothing happened with little to no battery drain. This sounds like a fairly trivial operation — y’know, you’re literally just asking for the computer to do nothing — but in that quiet moment when the fans whir down, the screen turns dark, and your reflection stares back at you, your computer and all its little components are actually hard at work doing their bedtime routine.
↫ Aymeric Wibo at the FreeBSD Foundation
A look at how suspend and resume works in practice, from the perspective of FreeBSD. Considering FreeBSD’s laptop focus in recent times, not an unimportant subject.
AI Is Writing Our Code Faster Than We Can Verify It [Radar]
This is the third article in a series on agentic engineering and AI-driven development. Read part one here, part two here, part three here, and look for the next article on April 30 on O’Reilly Radar.
Here’s the dirty secret of the AI coding revolution: most experienced developers still don’t really trust the code the AI writes for us.
If I’m being honest, that’s not actually a particularly well-guarded secret. It feels like every day there’s a new breathless “I don’t have a lick of development experience but I just vibe coded this amazing application” article. And I get it—articles like that get so much engagement because everyone is watching carefully as the drama of AIs getting better and better at writing code unfolds. We’ve had decades of shows and movies, from WarGames to Hackers to Mr. Robot, portraying developers as reclusive geniuses doing mysterious but incredible stuff with computers. The idea that we’ve coded ourselves out of existence is fascinating to people.
The flip side of that pop-culture phenomenon is that when there are problems caused by agentic engineering gone wrong (like the equally popular “I trusted an AI agent and it deleted my entire production database” articles), everyone seems to find out about it. And, unfortunately, that newly emerging trope is much closer to reality. Most of us who do agentic engineering have seen our own AI-generated code go off the rails. That’s why I built and maintain the Quality Playbook, an open-source AI skill that uses quality engineering techniques that go back over fifty years to help developers working in any language verify the quality of their AI-generated code. I was as surprised as anyone to discover that it actually works.
I’ve talked often about how we need a “trust but verify” mindset when using AI to write code. In the past, I’ve mostly focused on the “trust” aspect, finding ways to help developers feel more comfortable adopting AI coding tools and using them for production work. But I’m increasingly convinced that our biggest problem with AI-driven development is that we don’t have a reliable way to check the quality of code from agentic engineering at scale. AI is writing our code faster than we can verify it, and that is one of AI’s biggest problems right now.
After I got my first real taste of using AI for development in a professional setting, it felt like I was being asked to make a critical choice: either I had to outsource all of my thinking to the AI and just trust it to build whatever code I needed, or I had to review every single file it generated line by line.
A lot of really good, really experienced senior engineers I’ve talked to feel the same way. A small number of experienced developers fully embrace vibe coding and basically fire off the AI to do what it needs to, depending on a combination of unit tests and solid, decoupled architecture (and a little luck, maybe) to make sure things go well. But more frequently, the senior, experienced engineers I’ve talked to, folks who’ve been developing for a really long time, go the other way. When I ask them if they’re using AI every day, they’ll almost always say something like, “Yeah, I use AI for unit tests and code reviews.” That’s almost always a tell that they don’t trust the AI to build the really important code that’s at the core of the application. They’re using AI for things that won’t cause production bugs if they go wrong.
I think this excerpt from a recent (and excellent) article in Ars Technica, “Cognitive surrender” leads AI users to abandon logical thinking, sums up how many experienced developers feel about working with AI:
When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.
I agree that those are two options for dealing with AI. But I also believe that’s a false choice. “Cognitive surrender,” as the research referenced by the article puts it, is not a good outcome. But neither is reviewing every line of code the AI writes, because that’s so effort-intensive that we may as well just write it all ourselves. (And I can almost hear some of you asking, “What’s so bad about that?”)
This false choice is what drives a lot of really good, very experienced senior engineers away from AI-driven development today. We see those two options, and they’re both terrible. That’s why I’m writing this article (and the next few in this Radar series) about quality.
The Quality Playbook is an open-source skill for AI coding tools like GitHub Copilot, Cursor, Claude Code, and Windsurf. You point it at a codebase, and it generates a complete quality engineering infrastructure for that project: test plans traced to requirements, code review protocols, integration tests, and more. More importantly, it brings back quality engineering practices that much of the industry abandoned decades ago, using AI to do a lot of the quality-related work that used to require a dedicated team.
I built the Quality Playbook as part of an experiment in AI-driven development and agentic engineering, building an open-source project called Octobatch and writing about the process in this ongoing Radar series. The playbook emerged directly from that experiment. The ideas behind it are over fifty years old, and they work.
Along the way, I ran into a shocking statistic.
We already know that many (most?) developers these days use AI coding tools like GitHub Copilot, Claude Code, Gemini, ChatGPT, and Cursor to write production code. But do we trust the code those tools generate? “Trust in these systems has collapsed to just 33%, a sharp decline from over 70% in 2023.”
That quote is from a Gemini Deep Research report I generated while doing research for this article. 70% dropping to 33%—that sounds like a massive collapse, right?
The thing is, when I checked the sources Gemini referenced, the truth wasn’t nearly as clear-cut. That “over 70% in 2023” number came from a Stack Overflow survey measuring how favorably developers view AI tools. The “33%” number came from a Qodo survey asking whether developers trust the accuracy of AI-generated code. Gemini grabbed both numbers, stripped the context, and stitched them into a single decline narrative. No single study ever measured trust dropping from over 70% to 33%. That makes it an apples-to-oranges comparison: it might even be technically accurate (sort of?), but it’s not the headline-grabber it seemed to be.
So why am I telling you about it?
Because there are two important lessons from that “shocking” stat. The first is that the overall idea rings true, at least for me. Almost all of us have had the experience of generating code with AI faster than we can verify it, and we ship features before we fully review them.
The second is that when Gemini created the report, the AI fabricated the most alarming version of the story from real but unrelated data points. If I’d just cited it without checking the sources, there’s a pretty good chance it would get published, and you might even believe it. That’s ironically self-referential, because it’s literally the trust problem the survey is supposedly measuring. The AI produced something that looked authoritative, felt correct, and was wrong in ways that only careful verification could catch. If you want to understand why over 70% of developers don’t fully trust AI-generated code, you just watched it happen.
One reason many of us don’t trust AI-generated code is that there’s a growing gap between how fast AI can generate code and how well we can verify that the code actually does what we intended. The usual response to this verification gap is to adopt better testing tools. And there are plenty of them: test stub generators, diff reviewers, spec-first frameworks. These are useful, and they solve real problems. But they generally share a blind spot: they work with what the code does, not with what it’s supposed to do. Luckily, the intent is sitting right there: in the specs, the schemas, the defensive code, the history of the AI chats about the project, even the variable names and filenames. We just need a way to use it.
AI-driven development needs its own quality practices, and the discipline we need already exists. It was just (unfairly) considered too expensive to use… until AI made it cheap.
There’s a difference between knowing that code works and knowing that it does what it’s supposed to do. It’s the difference between “does this function return the right value?” and “does this system fulfill its purpose?”—and as it turns out, that’s one of the oldest problems in software engineering. In fact, as I talked about in a previous Radar article, Prompt Engineering Is Requirements Engineering, it was the source of the original “software crisis.”
The software crisis was the term people across our industry used back in the 1960s as they came to grips with large software projects around the world that were routinely delivered late and over budget, producing software that didn’t do what it was supposed to do. At the 1968 NATO Software Engineering Conference—the conference that introduced the term “software engineering”—some of the top experts in the industry argued that the crisis was caused by developers and their stakeholders having trouble understanding the problems they were solving, communicating those needs clearly, and making sure that the systems they delivered actually met their users’ needs. Nearly two decades later, Fred Brooks made the same argument in his pioneering essay, No Silver Bullet: no tool can, on its own, eliminate the inherent difficulty of understanding what needs to be built and communicating that intent clearly. And now that we talk to our AI development tools the same way we talk to our teammates, we’re more susceptible than ever to that underlying problem of communication and shared understanding.
An important part of the industry’s response to the software crisis was quality engineering, a discipline built specifically to close the gap between intent and implementation by defining what “correct” means up front, tracing tests back to requirements, and verifying that the delivered system actually does what it’s supposed to do. For years it was standard practice for software engineering teams to include quality engineering phases in all projects. But few teams today do traditional quality engineering. Understanding why it got left behind by so many of us and, more importantly, what it can do for us now can make a huge difference for agentic engineering and AI-driven development today.
Starting in the 1950s, three thinkers built the intellectual foundation that manufacturing used to become dramatically more reliable.
These ideas revolutionized software quality, and the people who put them into practice were called quality engineers. They built test plans traced to requirements, ran functional testing against specifications, and maintained living documentation that defined what “correct” meant for each part of the system.
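The traceability idea is concrete enough to sketch in a few lines of Python. Everything here is hypothetical (the requirement IDs, the placeholder test, the `traces` decorator are all invented for illustration), but it shows the basic shape of a test plan traced to requirements: each test declares what it verifies, and requirements with no test can be flagged automatically.

```python
# Hypothetical sketch of requirements traceability. Each test is tagged
# with the requirement it verifies; a simple check reports requirements
# that no test covers.

REQUIREMENTS = {
    "REQ-001": "Users can reset their password via email",
    "REQ-002": "Sessions expire after 30 minutes of inactivity",
}

def traces(req_id):
    """Decorator that tags a test function with the requirement it verifies."""
    def wrap(fn):
        fn.requirement = req_id
        return fn
    return wrap

@traces("REQ-001")
def test_password_reset_sends_email():
    assert True  # placeholder for the real test body

def untraced_requirements(tests):
    """Return requirement IDs that no test in `tests` claims to verify."""
    covered = {getattr(t, "requirement", None) for t in tests}
    return sorted(set(REQUIREMENTS) - covered)

print(untraced_requirements([test_password_reset_sends_email]))  # → ['REQ-002']
```

A real quality engineering team would keep this mapping in a living document rather than a dict, but the principle is the same: correctness is defined against requirements, not against whatever the code happens to do.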
So why did all of this disappear from most software teams? (It’s still alive in regulated industries like aerospace, medical devices, and automotive, where traceability is mandated by law, and in a few brave holdouts throughout the industry.) It wasn’t because it didn’t work. Quality engineering got cut because it was perceived as expensive. Philip Crosby was right that quality is free: the cost of building it in is far more than made up for by the savings from not finding and fixing defects later. But the costs come at the beginning of the project and the savings come at the end. In practice, that means when the team blows a deadline and the manager gets angry and starts looking for something to cut, the testing and QA activities are easy targets because the software already seems to be complete.
On top of the perceived expense, quality engineering required specialists. Building good requirements, designing test plans, and planning and running functional and regression testing are real, technical skills, and most teams simply didn’t have anyone (or, more specifically, the budget for anyone) who could do those jobs.
Quality engineering may have faded from our projects and teams over time, but the industry didn’t just give up on many of its best ideas. Developers are nothing if not resourceful, and we built our own quality practices—three of the most popular are test-driven development, behavior-driven development, and agile-style iteration—and these are genuinely good at what they do. TDD keeps code honest by making you write the test before the implementation. BDD was specifically designed to capture requirements in a form that developers, testers, and stakeholders can all read (though in practice, most teams strip away the stakeholder involvement and it devolves into another flavor of integration testing). Agile iteration tightens the feedback loop so you catch problems earlier.
Those newer quality practices are practical and developer-focused, and they’re less expensive to adopt than traditional quality engineering in the short run because they live inside the development cycle. The upside of those practices is that development teams can generally implement them on their own, without asking for permission or requiring experts. The tradeoff, however, is that those practices have limited scope. They verify that the code you’re writing right now works correctly, but they don’t step back and ask whether the system as a whole fulfills its original intent. Quality engineering, on the other hand, establishes the intent of the system before the development cycle even begins, and keeps it up to date and feeds it back to the team as the project progresses. That’s a huge piece of the puzzle that got lost along the way.
Those highly effective quality engineering practices got cut from most software engineering teams because they were viewed as expensive, not because they were wrong. When you’re doing AI-driven development, you’re actually running into exactly the same problem that quality engineering was built to solve. You have a “team”—your AI coding tools—and you need a structured process to make sure that team is building what you actually intend. Quality engineering is such a good fit for AI-driven development because it’s the discipline that was specifically designed to close that gap between what you ask for and what gets built.
What nobody expected is that AI would make it cheap enough in the short run to bring quality engineering back to our projects.
I’ve long suspected that quality engineering would be a perfect fit for AI-driven development (AIDD), and I finally got a chance to test that hypothesis. As part of my experiment with AIDD and agentic engineering (which I’ve been writing about in The Accidental Orchestrator and the rest of this series), I built the Quality Playbook, a skill for AI tools like Cursor, GitHub Copilot, and Claude Code that lets you bring these highly effective quality practices to any project, using AI to do the work that used to require a dedicated quality engineering team. Like other AI skills and agents, it’s a structured document that plugs into an AI coding agent and teaches it a specific capability. You point it at a codebase, and the AI explores the code, reads whatever specifications and documentation it can find, and generates a complete quality infrastructure tailored to that project. The Quality Playbook is now part of awesome-copilot, a collection of community-contributed agents (and I’ve also opened a pull request to add it to Anthropic’s repository of Claude Code skills).
What does “quality infrastructure” actually mean? Think about what a quality engineering team would build if you hired one. A good quality engineer would start by defining what “correct” means for your project: what the system is supposed to do, grounded in your requirements, your domain, what your users actually need. From there, they’d write tests traced to those requirements, build a code review process that checks whether the code implements what it’s supposed to, design integration tests that verify the whole system works together, and set up an audit process where independent reviewers check the code against its original intent.
That’s what the playbook generates. Developers using AI tools have been rediscovering the value of requirements, and spec-driven development (SDD) has become very popular. You don’t need to be practicing strict spec-driven development to use it. The playbook infers your project’s intent from whatever artifacts are available: chat logs, schemas, README files, code comments, and even defensive code patterns. If you have formal specs, great; if not, the AI pieces together what “correct” means from the evidence it can find.
Once the playbook figures out the intent of the code, it creates quality infrastructure for the project. Specifically, it generates ten concrete deliverables, including the test plans traced to requirements, code review protocols, and integration tests described above.
I started this article by talking about a false choice: either we surrender our judgment to the AI, or get stuck reviewing every line of code it writes. The reality is much more nuanced, and, in my opinion, a lot more interesting, if we have a trustworthy way to verify that the code we worked with the AI to build actually does what we intended. It’s not a coincidence that this is one of the oldest problems in software engineering, and not surprising that AI can help us with it.
The Quality Playbook leans heavily on classic quality engineering techniques to do that verification. Those techniques work very well, and that gives us the more nuanced option: using AI to help us write our code, and then using it to help us trust what it built.
That’s not a gimmick or a paradox. It works because verification is exactly the kind of structured, specification-driven work that AI is good at. Writing tests traced to requirements, reviewing code against intent, checking that the system does what it’s supposed to do under real conditions. These are the things quality engineers used to do across the whole industry (and still do in the highly regulated parts of it). They’re also things that AI can do well, as long as we tell it what “correct” means.
The experienced engineers I talked about at the beginning of this article, the ones who only use AI for unit tests and code reviews, aren’t wrong to be cautious. They’re right that we can’t just trust whatever output the AI spits out. But limiting AI to just the “safe” parts of our projects keeps us from taking advantage of such an important set of tools. The way out of this quagmire is to build the infrastructure that makes the rest of it trustworthy too. Quality engineering gives us that infrastructure, and AI makes it cheap enough to actually use on all of our projects every day.
In the next few articles, I’ll show you what happened when I pointed the Quality Playbook at real, mature open-source codebases and it started finding real bugs, how the playbook emerged from my AI-driven development experiment, what the quality engineering mindset looks like in practice, and how we can learn important lessons from that experience that apply to all of our projects.
The Quality Playbook is open source and works with GitHub Copilot, Cursor, and Claude Code. It’s also available as part of awesome-copilot. You can try it out today by downloading it into your project and asking the AI to generate the quality playbook. The whole process takes about 10-15 minutes for a typical codebase. I’ll cover more details on running it in future articles in this series.
Grief and the Nonprofessional Programmer [Radar]
I can’t claim to be a professional software developer—not by a long shot. I occasionally write some Python code to analyze spreadsheets, and I occasionally hack something together on my own, usually related to prime numbers or numerical analysis. But I have to admit that I identify with both of the groups of programmers that Les Orchard identifies in “Grief and the AI Split”: those who just want to make a computer do something and those who grieve losing the satisfaction they get from writing good code.
A lot of the time, I just want to get something done; that’s particularly true when I’m grinding through a spreadsheet with sales data that has a half-million rows. (Yes, compared to databases, that’s nothing.) It’s frustrating to run into some roadblock in pandas that I can’t solve without looking through documentation, tutorials, and several incorrect Stack Overflow answers. But there’s also the programming that I do for fun—not all that often, but occasionally: writing a really big prime number sieve, seeing if I can do a million-point convex hull on my laptop in a reasonable amount of time, things like that. And that’s where the problem comes in… if there really is a problem.
The other day, I read a post of Simon Willison’s that included AI-generated animations of the major sorting algorithms. No big deal in itself; I’ve seen animated sorting algorithms before. Simon’s were different only in that they were AI-generated—but that made me want to try vibe coding an animation rather than something static. Graphing the first N terms of a Fourier series has long been one of the first things I try in a new programming language. So I asked Claude Code to generate an interactive web animation of the Fourier series. Claude did just fine. I couldn’t have created the app on my own, at least not as a single-page web app; I’ve always avoided JavaScript, for better or for worse. And that was cool, though, as with Simon’s sorting animations, there are plenty of Fourier animations online.
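For readers who want the bare math behind such an animation: graphing the first N terms of a Fourier series amounts to evaluating a partial sum. Here’s a minimal sketch for the classic square-wave series (nothing like the interactive web app Claude produced, just the underlying calculation):

```python
import math

def square_wave_partial_sum(x: float, n_terms: int) -> float:
    """Partial sum of the Fourier series for a unit square wave:
    f(x) = (4/pi) * sum over odd k of sin(k*x) / k."""
    total = 0.0
    for i in range(n_terms):
        k = 2 * i + 1  # the square wave has only odd harmonics
        total += math.sin(k * x) / k
    return 4.0 / math.pi * total

# Sampling the partial sums at x = pi/2 (where the wave equals 1) shows
# convergence, along with the overshoot of the early terms (Gibbs phenomenon).
for n in (1, 5, 50):
    print(n, round(square_wave_partial_sum(math.pi / 2, n), 4))
```

Animating this is just a matter of redrawing the curve as `n_terms` grows, which is exactly the kind of boilerplate-heavy JavaScript work the article describes handing to Claude.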
I then got interested in animations that aren’t so common. I grabbed Algorithms in a Nutshell, started looking through the chapters, and asked Claude to animate a number of things I hadn’t seen, ending with Dijkstra’s algorithm for finding the shortest path through a graph. It had some trouble with a few of the algorithms, though when I asked Claude to generate a plan first and used a second prompt asking it to implement the plan, everything worked.
And it was fun. I made the computer do things I wanted it to do; the thrill of controlling machines is something that sticks with us from our childhoods. The prompts were simple and short—they could have been much longer if I wanted to specify the design of the web page, but Claude’s sense of taste was good enough. I had other work to do while Claude was “thinking,” including attending some meetings, but I could easily have started several instances of Claude Code and had them create simulations in parallel. Doing so wouldn’t have required any fancy orchestration because every simulation was independent of the others. No need for Gas Town.
When I was done, I felt a version of the grief Les Orchard writes about. More specifically: I don’t really understand Dijkstra’s algorithm. I know what it does and have a vague idea of how it works, and I’m sure I could understand it if I read Algorithms in a Nutshell rather than used it as a catalog of things to animate. But now that I had the animation, I realized that I hadn’t gone through the process of understanding the algorithm well enough to write the code. And I cared about that.
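For anyone in the same boat, the core of Dijkstra’s algorithm is small enough to read in one sitting. This is a standard textbook sketch with a made-up example graph, not the code Claude generated for the animation:

```python
import heapq

def dijkstra(graph: dict, start: str) -> dict:
    """Shortest-path distances from `start` in a graph given as
    {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {start: 0}
    heap = [(0, start)]  # (distance-so-far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 5)],
    "c": [("d", 1)],
    "d": [],
}
print(dijkstra(graph, "a"))  # → {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```

The key insight, the one an animation shows but doesn’t explain, is that because weights are non-negative, the first time a node comes off the priority queue its distance is final.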
I also cared about Fourier transformations: I would never “need” to write that code again. If I decide to learn Rust, will I write a Fourier program, or ask Claude to do it and inspect the output? I already knew the theory behind Fourier transforms—but I realized that an era had ended, and I still don’t know how I feel about that. Indeed, a few months ago, I vibe coded an application that recorded some audio from my laptop’s microphone, did a discrete Fourier transform, and displayed the result. After pasting the code into a file, I took the laptop over to the piano, started the program, played a C, and saw the fundamental and all the harmonics. The era was already in the past; it just took a few months to hit me.
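The discrete Fourier transform in question is also compact in its naive O(N²) textbook form. This sketch (using a synthetic tone rather than microphone audio, and nothing like the vibe-coded app described above) shows a pure tone appearing as a single peak in the magnitude spectrum:

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform, straight from the definition: O(N^2)."""
    n = len(samples)
    return [
        sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        for k in range(n)
    ]

# A sine wave at 3 cycles per window should peak at frequency bin 3.
n = 64
tone = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]
spectrum = [abs(c) for c in dft(tone)]
print(max(range(n // 2), key=lambda k: spectrum[k]))  # → 3
```

A real audio app would use an FFT (O(N log N)) from a library rather than this direct sum, but the definition above is what the FFT computes.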
Why does this bother me? My problem isn’t about losing the pleasure of turning ideas into code. I’ve always found coding at least somewhat frustrating, and at times, seriously frustrating. But I’m bothered by the lack of understanding: I was too lazy to look up how Dijkstra works, too lazy to look up (again) how discrete Fourier works. I made the computer do what I wanted, but I lost the understanding of how it did it.
What does it mean to lose the understanding of how the code works? Anything? It’s common to place the transition to AI-assisted coding in the context of the transition from assembly language to higher-level languages, a process that started in the late 1950s. That’s valid, but there’s an important difference. You can certainly program a discrete fast Fourier transform in assembly; that may even be one of the last bastions of assembly programs, since FFTs are extremely useful and often have to run on relatively slow processors. (The “butterfly” algorithm is very fast.) But you can’t learn signal processing by writing assembly, any more than you can learn graph theory that way. When you’re writing in assembler, you have to know what you’re doing in advance. The early programming languages of the 1950s (Fortran, Lisp, Algol, even BASIC) are much better for gradually pushing forward to understanding, to say nothing of our modern languages.
That is the real source of grief, at least for me. I want to understand how things work. And I admit that I’m lazy. Understanding how things work quickly comes in conflict with getting stuff done—especially when staring at a blank screen—and writing Python or Java has a lot to do with how you come to an understanding. I will never need to understand convex hulls or Dijkstra’s algorithm. But thinking more broadly about this industry, I wonder whether we’ll be able to solve the new problems if we delegate understanding the old problems to AI. In the past, I’ve argued that I don’t see AI becoming genuinely creative because creativity isn’t just a recombination of things that already exist. I’ll stick by that, especially in the arts. AI may be a useful tool, but I don’t believe it will become an artist. But anyone involved with the arts also understands that creativity doesn’t come from a blank slate; it also requires an understanding of history, of how problems were solved in the past. And that makes me wonder whether humans—at least in computing—will continue to be creative if we delegate that understanding to AI.
Or does creativity just move up the stack to the next level of abstraction? And is that next level of abstraction all about understanding problems and writing good specifications? Writing a detailed specification is itself a kind of programming. But I don’t think that kind of programming will assuage the grief of the programmer who loves coding—or who may not love coding but loves the understanding that it brings.
The Day-Blind Stars [Original Fiction Archives - Reactor]
Illustrated by Hwarim Lee
Edited by Jonathan Strahan
Published on April 15, 2026

An Earth explorer in search of something new and strange in the up and out ends up traveling through space with a small god over millennia.
Short story | 5,293 words
She grew fearful of the world and turned away from it, seeking solace. She intended to return.
She never did.
When one turned away from the world in those days, one was subject to a binary. Binaries were a sort of self-imposed tyranny, imagined by the one but expected by the totality. So, turning away from the world, for Sierra St. Sandalwood IV, involved a choice—of necessity illusory—between going up and out or going down and in. The first choice was blue. The second choice was green.
The first choice was green. The second choice was blue.
See? Illusion.
Sierra went up and out. Going up, she theorized, she would be able to look down at the receding world, watching for signs of pursuit. Had she gone down, the world would have closed over behind her as she hacked through roots, as she gnawed through bedrock, as she braved the magma mantle washing the iron and nickel core. How can that be said to be turning away from the world at all?
That would be going under, thought Sierra.
But so many people chose down. Her husband had. Her godmother had. The twins, of course, painfully young, swore they were determined to embrace the world through all the numberless days gifted them by the life force. Devon called the life force Gaia and Denisa called it motion. Denisa waved her arms, dreamy and languorous, whenever she spoke of motion.
Sierra was graceless in the up and out. She had never been outside the gravity well. Her go suit prompted her to make the adjustments necessary to steer a clear course, but only because she had activated those options. Options for prompts for adjustments—some of the very things from which Sierra was turning away. Perhaps up and out was not so different from down and in. Perhaps neither was any different from the world itself.
She approached a tumble of great rocks trailing the world. Each of them was inconceivably cold on one side, gamma-drenched hellfire on the other. A guard god was sitting on one of the rocks, breathing smoke and looking at her with idle curiosity. The go suit suggested she stop and visit.
“Hello. How are things?” asked the guard god.
“How are you breathing smoke?” asked Sierra. “How can you talk? How can I hear you? Why is a god trailing the world?”
“First,” it replied, “I’m smoking a cigarette, which technically is breathing smoke, but not exactly what you are imagining. I can talk because I learned how at my father’s knee. I can hear you because I am listening. I am trailing the world because I’m on watch.”
“What does a god watch for?” asked Sierra. Her go suit maneuvered its way onto the surface of the rock; she was briefly nauseous before her see-plate stabilized the view.
Illusion.
“I’m more of a poppet deity than a god. And I’m watching for people who go up and out.”
“Like me,” said Sierra.
“Much like you, yes. Mostly like you. You should tell me who you are.”
The suit made it impossible to nod, though Sierra reflexively attempted one. “My name is Sierra St. Sandalwood IV,” she said.
The guard god did nod, though its thick neck, wider than its block of a head, made the movement negligible. “Thank you. That is welcome information. However, I did not ask your name. I asked who you are.”
Sierra thought very carefully. “I think if I knew that I would be at home with the twins.”
The guard god nodded again, this time with more alacrity. Pebbles and dust floated out into the nothing. “I think you have a question.” It sounded delighted. “Let’s take an equatorial walk.”
It lurched up and Sierra realized she had not made a careful enough study of her interlocutor. Its waist and legs were seamlessly bonded to the outcropping of silicates she’d thought simply served as a throne until it cracked free. It stretched, dreamy and languorous.
“My go suit keeps me from careening away,” said Sierra. “But how are you treating this little rock as firma?”
The guard god looked at her and furled its face, a sort of miniature avalanche concealing what Sierra thought might be emeralds deep in the crags of what she thought might be orbital sockets. When it opened them again, its eyes were sapphires.
It started to force its way through the tumult of stalagma that extended to the horizon in every direction.
The horizon wasn’t very far.
Sierra blinked her right eye, just so, and she floated after the guard god. When she was moving alongside it, she asked again, “How are you walking on this little rock? Shouldn’t you fly off into the nothing?”
“You haven’t asked the question I think you need to ask, yet, but you do ask a lot of others,” it said. “I like that. Yes, I should fly off because of, you know”—and here it made a circular motion with one of the three spindly fingers sprouting from its upper right hand—“the spinning. Also, there are fundamental forces of the universe to be taken into consideration. At least one or two of them. But it’s okay. I kind of bend down a little bit so I won’t spin off. As for violating fundamental forces, I have a permit.”
Sierra tried to nod again. When she couldn’t, again, she breathed a query to her go suit, piano, asking if there was a way she could move her head freely. The suit flashed a series of glyphs on the inside of her see-plate, seizure fast. Sierra interpreted them as saying, “Sure.”
“Those things are hilarious,” said the guard god. It had stopped and seemed to be considering their route. “Have you ever talked to a go suit when it’s not being worn?”
Sierra shook her head, greatly satisfied with her freedom of movement. “I didn’t think they had any independent agency.”
“Eh,” said the guard god. “People get up here, they look around. A good number of them take off their go suits and launch themselves skyclad into the nothing, giving up their little essences in favor of… well, in favor of what each one of them individually seeks. Sometimes the suits stick around for a bit after that.”
It continued, “I think the equator of this rock will prove a little rough. How do you feel about a circumpolar walk?”
“Do asteroids have poles?”
“Hadn’t thought of that. Probably not this one. Doesn’t it have to do with the invariable plane?”
Sierra had never heard the phrase but was beginning to catch the ebb and flow of the conversation, something she had always been good at. “Sounds right,” she said.
The guard god turned right and plodded north, or perhaps south. “People who come up here tend to be either immigrants or mystics,” it said.
“Never both?” Sierra blinked her eyes just so. She moved along beside the guard god, their heads at the same height but Sierra’s torso and limbs now extended up and out, upside down, relatively. This amused her. If she knew the just-so sequence of blinks that would prompt the suit to remind her of the last time she’d been amused, she would have blinked it.
“Immigrants, they usually have a lot on their minds,” said the guard god. “Not much time for revelations and all that omenistic business.”
“Are you saying immigrant, or—” Sierra stopped. “The one with the I or the one with the E?”
“I could never keep that straight,” said the guard god. “Comings or goings, borders and frontiers. I don’t think it makes much difference up here.”
Sierra queried the suit on whether she could shrug, was given an answer in the positive, entered a command, and shrugged.
“I can also never keep lie and lay straight,” said the guard god. “Yes,” it went on, circling an outcropping that it somewhat resembled. “This way is much easier.” It ploughed through the next rock formation and Sierra drifted a little higher to avoid the detritus.
“This isn’t anything like I thought it would be,” she said. “But I’ve only just now started.”
The guard god snorted. “Time. Who cares?” Then, “What did you think coming up and out would be like?”
“I…” Sierra trailed off.
“They always have ideas,” said the guard god. “If you’ll forgive me for lumping you in with all the other blue travelers.”
It had been Sierra’s observation that minutes are longer than people give them credit for. When people pause for a minute, it is most often not a minute at all, but a moment.
She paused for a minute and said, “I thought I wouldn’t miss anyone anymore.”
The guard god stopped its ramble. It reached out and put two of its great hands on her shoulders and slowly, gently even, rotated her. It pulled her down a bit until they were face-to-face, her gazing through her see-plate, it gazing through its fluctuant eyes.
“That’s new,” it said. It removed its top hands and clapped all of them. Particulate matter drifted out like a scattering of dusk-flocking birds. This time, Sierra could hear the nod as well as see it. The guard god asked, “Do you want to get out of here?”
The guard god, the poppet deity, made a check of Sierra’s go suit and determined that it was of the highest quality, but it warned her that the highest quality might be insufficient for her survival where they were going.
“Where are we going?”
“Up and out.”
“We’re already up and out,” she said, nonetheless intrigued.
“Further up. Further out.”
“My godmother always said farther was correct.”
“Isn’t there something about literal and symbolic distances? The A means one, the U means the other?” The guard god sounded genuinely curious.
“Are we going… literally? Or symbolically?”
“I look forward to finding out,” and for the first time the guard god laughed, and it wasn’t grumbling thunder and tumbling gravel at all, but lovely and melodic, like a flute solo.
Sierra joined in the laughter, though her laugh was a throaty alto and she often honked despite herself, as she did this time.
“Your go suit,” said the guard god, “is hesitant. It wants reassuring. I propose you ride on my back so as to be within my sphere of influence. That might protect you should we encounter any day-blind stars.”
“What are those?” asked Sierra.
“They are fey and beautiful and vicious and deadly, like all stars. But in particular, they are the stars that shine by day and so can’t be seen from the down and in.”
“We’re not down and in. We’re not going down and in.”
“One day I will meet a blue traveler with a proper sense of perspective,” said the guard god. “Now, if you are to ride on my back, you won’t want this broad mineral stuff. What sort of steed would you prefer?”
The only steeds Sierra had ever seen were the force-grown mules spun up by the various corporation-citizens on the world for use as data storage.
“I can’t think…”
“Think wider. It can be anything at all you’ve seen, yes, but also anything that you’ve heard of, that you’ve read about, that you’ve heard sung to you, or even that you’ve imagined.”
Sierra thought. “When the twins turned one hundred and eleven years old, their father and I marked it as a very momentous occasion, though it’s not a particularly remarkable age for a child to reach and the Widows Who Wait do not attach any numerological significance to one hundred and eleven. But it was that day they were given their choice of a Memorial Day, to celebrate all the rest of their lives.”
“I here admit, Sierra St. Sandalwood IV, that I have spoken to you more than any other human being I have ever encountered,” said the guard god. “Therefore, I will tell you I do not know what a Memorial Day is. My kind have had encounters with the Widows Who Wait, though. They’re all liars.”
Sierra elected to ignore that. “They could have chosen the anniversary of their physical birth or of the day they bloomed within me. But the twins are puckish. They are readers of old books and it’s a rare hour passes without them sharing a knowing smile. They chose the eighth day of September.”
“That one I know,” said the guard god. “The Nativity of Mary, mother of the Christ.”
“No. I mean, yes, it’s that, too, but we are not Christians. The eighth day of September is also the Feast Day of Saint Corbinian. That’s why they chose it.”
“I like to understand things,” said the guard god. “If you are not Christians, and the birth date of the Holy Mother is no occasion for memory, why choose a Christian saint?”
Sierra smiled, remembering. “Because of the bear,” she said.
The guard god moved its great shoulders back. Some arms retracted and others shortened. Stone became flesh and flesh grew hirsute. Rounded ears sprouted and eyes became amber. The guard god dropped to all fours and its great claws curled into the rock. “Wait,” it said. “I am listening to the story.”
Sierra heard nothing, but she waited.
“He was on his way to Rome, yes.” The guard god’s voice was now a different timbre of deep. Sierra wondered if its laugh had changed as well. “A great bear slew the saint’s mule and Corbinian commanded the creature, in the name of God, to submit to saddle and rein and serve as his mount. The beast acquiesced and carried the saint to the Holy See. When they arrived at the gates, Corbinian freed the bear and it returned to the wild, sinless as only animals can be.”
“Sinless, yes, I suppose,” said Sierra. “There are none of its kind left to prove or disprove that notion.”
The guard god reared up on its hind legs, twice as tall as Sierra. She was afraid for the first time since she had launched herself up and out.
The guard god, the bear, looked down, down, down the long way Sierra had travelled. “There are a few bears yet,” it said.
Sierra was surprised. “In captivity?” she asked.
“In hiding,” it answered. “Plenty of mules, though. Probably not as tasty as that one in old Bavaria.” The guard god dropped down again and hunched its shoulders. A leather saddle grew out of its back and reins extended from its terrifying teeth.
“What were you listening to? Who told you the story?”
“Mnemosyne. She grants me instantaneous access to every bit of recorded information in the omniverse.”
This startled Sierra. “You’ve indicated there are things that you do not know, even things you don’t understand.”
“I rarely access Mnemosyne. She vexes me. Now, Sierra, climb up.”
She put her foot in a stirrup, but hesitated. “Will you give me your name, as I gave you mine?”
“You have yet to tell me who you are, so I will not tell you who I am. But my name is now Corbinian.”
“Corbinian wasn’t the bear,” Sierra said, swinging into the saddle.
“Oh, I doubt you can prove that,” it replied.
Farther up and further out proved to be a circuitous route that twisted between the world and its moon. This involved travelling towards the world before they travelled away from it, but Corbinian did not respond to Sierra’s queries beyond grunting, “Concentrating.”
She let it be.
Having never ridden anything at all, not even a bicycle, Sierra found the sensation vertiginous, even without the other rocky world they passed, even without the belt of tumbling asteroids, even without the great ringed bodies the bear rushed past. The go suit held up perfectly so far as she could tell. She saw many stars off in the distance but did not know if any of them were day-blind.
Finally, Corbinian came to a halt.
“A relative halt,” it said. “All things are in motion, from down at the bottom of matter, where minds best not linger, to the very top of all of it, to every bit of it.”
Sierra thought of her daughter waving her arms and speaking of motion. She was comforted by the memory. She was glad her daughter had long known something she herself had not known at all.
“Do you know what Gaia is?” she asked.
“I’m told that the answer to that question is of no importance,” said Corbinian. “And it is not your question. Keep asking them though!”
“Well, here’s another. Why have we stopped here?”
“Ah. This is the farthest any go suit has ever gone.”
“So, I’m farther from the world than any one has ever been?”
“Or further, yes. Let’s say both.”
“The suit seems fine,” said Sierra.
“Good. Because I feel odd,” said Corbinian.
“Are you ill?”
“I don’t know. I never have been. But there’s some sort of limiting factor that is holding me in this orbit. I feel like a bear twice my height has stood up in front of me.”
“That would be a pretty big bear,” said Sierra.
Corbinian’s laugh was still a flute.
“You’re afraid,” said Sierra. “That’s the limiting factor, I think.”
Corbinian said, “Wait. I am listening to the story.”
Just a moment later, the bear said, “I was attempting to access recorded information that would tell me if it is better to be ill or to be afraid.”
“Now that you’ve said it,” said Sierra, “I’m curious myself. What did Mnemosyne tell you?”
“She didn’t tell me anything. She didn’t tell me anything at all.”
Sierra discovered that unlike with her husband or her godmother, unlike with even Devon and Denisa, were she to be honest with herself, she never grew frustrated in the company of Corbinian. She never found the guard god tiresome or boring. She never felt put upon.
They sat companionably, in silence, for a number of years.
One day, Corbinian said, “Isn’t there anything you want to ask? There was that question I thought you had. I just remembered that.”
“I have questions, of course,” said Sierra, “but I still don’t know what you mean by the question. Wait, no. I know what you mean by it, but I do not know the question itself.”
“Ask me some others, then. I’m awfully resourceful.”
“Are you getting bored?” asked Sierra, worried about the answer.
“No,” said Corbinian.
“Well, then. Who made you?”
“Mnemosyne did.”
“Who made Mnemosyne?”
“You did.”
“I did no such thing,” said Sierra.
Corbinian gestured in the direction of the far away world. “You collectively. You blue travelers and green travelers and those few that never go up or down or in or out at all.”
“When did we make her?”
“I will not ask Mnemosyne to tell that story,” said Corbinian.
“May I ask her, then?”
“Mnemosyne would be the end of you, Sierra St. Sandalwood IV. She is a terrible thing for people like you.”
Sierra asked, “Is she terrible for you?”
But Corbinian fell silent for another few years.
Over time, the go suit began to alter itself in subtle ways. At first, Sierra thought it might be changing itself to match her dreams of it. Perhaps she would grow wings. Perhaps she would be able to lift the see-plate and breathe in the aroma of the nothing.
But then it became apparent the suit was becoming less than it had been before. It was winnowing parts of itself that Sierra rarely used. She nudged Corbinian.
“The suit’s breaking down,” she said.
The bear’s brows went low, and Sierra noticed that sometime clouds had appeared in the amber. It pressed a paw against Sierra’s chest. “Yes,” it said. “But it is not unhappy. It is confused. I am tempted to ask Mnemosyne whether it is better to be confused or unhappy.”
“I think parts of it are disappearing,” said Sierra.
Corbinian moved its massive head back and forth. “One of the fundamental tenets of Mnemosyne is that formulated by Lavoisier the Lawgiver. Things do not disappear.”
Sierra had received an excellent education from her godmother. “Mass is not destroyed,” she said. “But that’s not what I meant. The go suit is sloughing off, not ceasing to exist.”
Corbinian took a closer look. “Yes, you are right,” it said. “It is sloughing away in a stream.”
“To where?”
“To the day-blind stars.”
“Oh,” said Sierra. “I know now. All it took was patience and study.”
“What do you know?”
“I know the question.”
Corbinian did not speak. It adopted a mien of anticipation.
“Good and faithful friend,” said Sierra. “Will you take me to the day-blind stars?”
Centuries later, the day-blind stars proved fey and beautiful and vicious and deadly. They were unappreciative of the new-come pair.
Along the way, they had overtaken the stuff of the go suit that it had previously surrendered. The suit fully reincorporated. Corbinian reported that it was pleased to have done so.
Once again, they relatively stopped. They were in a great nursery and every particle a star can emit buffeted them. These ejecta waxed and waned. The go suit trembled but Sierra felt its bravery. Corbinian’s eyes grew cloudier.
The answer to the question had proven to be yes, obviously. But now Sierra turned to the problem of why it was the question.
She thought:
in the beginning was the question
and the question was flawed
then the question begot a question
and that question begot a question
and that question begot a question
and that question begot a question
“Does Mnemosyne know why I asked you to bring me here?” she asked Corbinian.
“I have not been able to hear Mnemosyne’s stories for decades, now. We are dependent on what is in me, and what is in me is paltry. All that is in me is at the very surface of knowledge. I plumb no depths.”
“The question I asked you was of unknowable provenance,” Sierra said gently, “and you answered with an action you didn’t understand. You didn’t understand why, but you took the action anyway.”
Corbinian sighed and said, “I wonder if some other guard god took my place on the trailing rocks.”
The changed course of the conversation troubled Sierra. She went on as if Corbinian had not spoken. “It must be an interesting sensation you’ve been feeling down these past years. Wondering.”
“I didn’t know there was a word for it,” said Corbinian.
Sierra gave it a sharp glance. “That seems unlikely,” she said.
“Sierra St. Sandalwood IV. Goddaughter. Wife. Mother. The lone blue traveler possessed of a proper sense of perspective. Friend. I am sloughing away.”
One of the greatest failures of design and imagination that ever occurred in the world was the routing of the ducts around the eyes of go suit wearers into a reservoir at the base of the throat for filtration and reabsorption. So, tears did not stream down Sierra’s cheeks.
“Can we move on?” she asked. “Can we overtake what’s gone from you so you might be whole again?”
“I say again, I am unable to hear Mnemosyne’s stories. And I have not been whole for a long time. It is unlikely I ever will be again.”
A pair of day-blind stars let loose flares. The flares crossed the nothing and double-helixed. Sierra saw that Corbinian was not so large a bear as it had been.
And it grows smaller.
But that didn’t make sense. Growth implied addition, not subtraction. She elected to distract herself and Corbinian both.
“What is the opposite of growth?” she asked.
Corbinian cocked its head to one side. “Death?”
“But some things subside without dying,” Sierra insisted.
“Matter is not destroyed,” said Corbinian. “The opposite of growth must mean that whatever is not growing is sloughing away.”
“Are those flares sloughing away the day-blind stars, I wonder?”
“I do not know,” said Corbinian. “Ask them.”
But the stars could not answer. They were simply stars, possessing only the intelligence of fusion, which was notoriously unreliable.
“Why did you say I should ask them? You must have known they couldn’t answer.” She was still trying to distract the bear, who had fallen into melancholy.
“I did not know they couldn’t,” it said. “I suspected they wouldn’t.”
“That’s not the same thing at all,” said Sierra.
“We have crossed half a galaxy,” said Corbinian. “Everything we say or do is close enough.”
That sounded true.
“I do not believe my go suit will sustain me if you leave,” said Sierra.
“It’s a good suit,” said Corbinian. “It will try.”
“That’s all I can ask,” said Sierra. “I ask the same of you.”
“You have always asked me things. It has been the joy of my existence.”
Tears did not stream down Sierra’s cheeks.
Corbinian was a long time dying. Things changed as it diminished. It began asking Sierra questions, but though it tried, it was less and less able to answer hers.
“Do you believe your children kept to their plan of going neither up nor down?” it asked.
Sierra reflected upon what she remembered of the twins. The great distance between her and them, the great amount of time, made her suspect her own reflections.
“I believe,” she said, “that they kept to it for as long as they could.”
“So, you know they could have, but not that they would have.”
Devon’s smile was sly in her memory. He lifted the right side of his lips only. Not mocking but acknowledging. Denisa’s smile was bright, all teeth and gums and joy. They were both somewhat myopic but refused the simple treatment that would have perfected their vision. Puckish. For some people, clinging to imperfection was such a faux pas as to be considered an atrocity.
“I have just realized that the word is would. They are still on the world. They never fitted themselves for go suits or deep smocks.”
“That is welcome information,” said Corbinian. “But we should entertain the idea that one no longer needs a go suit to come up and out.”
An interesting notion.
“I can imagine those two finding some way to accomplish that. They had the benefit of my godmother’s tutelage, and she was an extraordinary educator.”
Sierra realized she could not envision her godmother’s face. Her husband’s name…was Diego. She was sure it was Diego.
“Now I’m sloughing away,” she said, describing to Corbinian the lacunae in her mind.
“You are limited by biology,” it said. “Synaptic misfiring is a product of age. But age brings wisdom, too.”
“I’d rather be intelligent than wise.”
“That is a wish I cannot grant. And one I would not if I could,” said Corbinian. Then it coughed.
And coughed.
And coughed.
Sierra stroked Corbinian’s shoulder. She did not know what else to do. Besides asking a question.
“I’m sorry, Sierra,” Corbinian answered. “There is nothing you can do for me. I am limited by pathology.”
“But you are thousands of years old!” she cried. “You were never ill before I insisted we come to these damnable stars!”
“I do not mean I am diseased,” said Corbinian. “I mean I am a symptom. One that is at long last being treated.”
“You seem to be plumbing depths now.”
“Wait,” said Corbinian. “I am listening to a story.”
The story was not told by Mnemosyne.
“Who is it then?” asked Sierra. She was distracted because her go suit had begun humming.
“I do not know who. I believe I know what. It is a go ship.”
Sierra had never heard of a go ship and said so.
“We have been away from the world for a great length of time,” said Corbinian. “It is in the nature of things to change.”
“You believe this is some sort of craft from the world?”
“I know it is. It is asking about you.”
Then Corbinian coughed a long jag. Blood coated its terrible teeth.
“That’s all, now,” it said. “Even the surface is fading.”
“But you plumbed the depths!”
“The depths plumbed me. They did not have to lower the weight very far. I am sorry, Sierra. That’s all. That’s all.”
She could see matter streaming away from it. The stream was directed perpendicular to the direction of the day-blind stars.
“You must tell me who you are,” Corbinian rasped. “Unless you do not wish to. I should have asked that as a question instead of stating it as an assertion.”
Without a moment’s hesitation, Sierra said, “I am the woman who asked the wrong question.”
“I find this answer deeply unsatisfying.”
She could see through it. It was less a bear now than the ghost of one.
Then Sierra knew the right question.
“Who are you, Corbinian?”
“I am not a who at all.”
She could barely discern its voice.
“I am a what.”
Her go suit was trembling. Sierra asked, no, pleaded, “What are you?”
Corbinian uttered a melodious word. Its voice sounded like a flute.
Sierra was bewildered. “Did you say elusive or allusive? Columbine?”
Suddenly the guide god’s face was distinctive and fully present. Its eyes were flashing diamonds, lit glorious as stars that could see.
But Corbinian did not answer. Instead, it faded away into the nothing.
The trembling of Sierra’s go suit became so pronounced that she was afraid it might tear itself apart. She wished her friend were there to tell her whether the suit was frightened or excited, wished Corbinian was there to muse upon which of those states was better.
Then the trembling stopped. Her see-plate went black and every joint in the go suit froze. She could neither see nor move.
Sierra’s sense of the passage of time had long since atrophied. She did not know how many minutes or years passed before her see-plate unfolded with a hiss.
She blinked, but not in command or query. She blinked to clear tears from her eyes. She blinked so that she could better see the two figures leaning over her.
The man’s smile was sly, but not mocking. He only lifted the right side of his lips. The woman’s smile was bright, all teeth and gums and joy.
Sierra found that her children had spouses and children of their own, and that those children had children. And those children begat children and on down like that, living with dozens of other families who made the go ship their home.
The go ship’s name was Diego, but it preferred to be called Ship. It had been the only one of its kind when the twins had left the world.
Some on board wished to study Sierra’s go suit. It was older than any other surviving example of human technology. At first, Sierra took this to mean that the world had ended, but she was assured by Ship that was not the case. Matter is not destroyed, but it changes. It is always moving.
And Sierra moved.
Sometimes she would don her go suit and spend a year or two scouting ahead of Ship. Sometimes she would simply walk the skin of the vessel and study the inconstant stars. She kept moving. She found that she could not stay still, not even relatively.
Sierra often thought of Corbinian. She did not believe it had sacrificed itself for her, not that it had sacrificed itself for anyone at all. Not a who, no. Perhaps a what.
The what was fearlessness. The what was love of the universe. The what was solace.
The what was up and out and up and out and up and out and up and out…
“The Day-Blind Stars” copyright © 2026 by
Christopher Rowe
Art copyright © 2026 by Hwarim Lee
The post The Day-Blind Stars appeared first on Reactor.
Security updates for Wednesday [LWN.net]
Security updates have been issued by AlmaLinux (capstone, cockpit, firefox, git-lfs, golang-github-openprinting-ipp-usb, kea, kernel, nghttp2, nodejs24, openexr, perl-XML-Parser, rsync, squid, and vim), Debian (imagemagick, systemd, and thunderbird), Slackware (libexif and xorg), SUSE (bind, clamav, firefox, freerdp2, giflib, go1.25, go1.26, helm, ignition, libpng16, libssh, oci-cli, rust1.92, strongswan, sudo, xorg-x11-server, and xwayland), and Ubuntu (rust-tar and rustc, rustc-1.76, rustc-1.77, rustc-1.78, rustc-1.79, rustc-1.80).
CodeSOD: Three Letter Acronyms, Four Letter Words [The Daily WTF]
Candice (previously) has another WTF to share with us.
We're going to start by just looking at one fragment of a class defined in this C++ code: TLAflaList.
Every type and variable has a three-letter acronym buried in its name. The specific meanings of most of the acronyms are lost to time, so "TLA" is as good as any other three random letters. No one knows what "fla" is.
What drew Candice's attention was that there was a type called "list", which implies they're maybe not using the standard library and have reinvented a wheel. Another data point arguing in favor of that is that the class had a method called getNumElements, instead of something more conventional like size.
Let's look at that function:
size_t TLAflaList::getNumElements()
{
    return mv_FLAarray.size();
}
In addition to the meaningless three-letter acronyms which start every type and variable, we're also adding on a lovely bit of Hungarian notation, throwing mv_ on the front for a member variable. The variable is called "array", but is it? Let's look at that definition.
class TLAflaList
{
    …
private:
    TLAflaArray_t mv_FLAarray;
    …
};
Okay, that gives me a lot more nonsense letters but I still have no idea what that variable is. Where's that type defined? The good news, it's in the same header.
typedef std::vector<INtabCRMprdinvusage_t*> TLAflaArray_t;
So it's not a list or an array, it's a vector. A vector of bare pointers, which definitely makes me worry about inevitable use-after-free errors or memory leaks. Who owns the memory that those pointers are referencing?
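For illustration, one conventional way to answer that ownership question is to store smart pointers in the container, so the container unambiguously owns the records and frees them on destruction. This is a hypothetical sketch, not the original codebase: the record type is a stand-in (the real INtabCRMprdinvusage_t is opaque to us), and the add method is invented for the example.

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

// Hypothetical stand-in for the opaque database-record type
// named in the article; its real fields are unknown.
struct INtabCRMprdinvusage_t {
    int id = 0;
};

// A vector of unique_ptr instead of bare pointers: now the
// container owns the records, and they are freed automatically
// when the list is destroyed. No use-after-free, no leak.
using TLAflaArray_t = std::vector<std::unique_ptr<INtabCRMprdinvusage_t>>;

class TLAflaList {
public:
    // Taking a unique_ptr by value documents the ownership
    // transfer in the signature itself.
    void add(std::unique_ptr<INtabCRMprdinvusage_t> rec) {
        mv_FLAarray.push_back(std::move(rec));
    }

    std::size_t size() const { return mv_FLAarray.size(); }

private:
    TLAflaArray_t mv_FLAarray;
};
```

With this shape, the "who owns the memory?" question has a one-word answer: the list does. Anyone who needs shared ownership would reach for std::shared_ptr instead, but that choice should be deliberate, not an accident of bare pointers.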
"IN" in the type name is an old company, good ol' Initrode, which got acquired a decade ago. "tab" tells us that it's meant to be a database table. We can guess at the rest.
This isn't a codebase, it's a bad Scrabble hand. It's also a trainwreck. Confusing, disorganized, and all of that made worse by piles of typedefs that hide what you're actually doing and endless acronyms that make it impossible to read.
One last detail, which I'll let Candice explain:
I started scrolling down the class definition - it took longer than it should have, given that the company coding style is to double-space the overwhelming majority of lines. (Seriously; I've seen single character braces sandwiched by two lines of nothing.) On the upside, this was one of the classes with just one public block and one private block - some classes like to ping-pong back and forth a half-dozen times.
League of Canadian Superheroes – Issue 5 – 09 [Comics Archive - Spinnyverse]
The post League of Canadian Superheroes – Issue 5 – 09 appeared first on Spinnyverse.
Digital Hopes, Real Power: The Rise of Network Shutdowns [Deeplinks]
This is the fourth installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. You can read the rest of the series here.
Iran’s internet has been intermittently disrupted for months. After years of bombardment, Gaza’s telecommunications infrastructure remains fragile. In India, recurring shutdowns and throttling have become a routine response to protests and unrest, cutting millions off from news, work, and basic services. Across dozens of other countries, governments increasingly treat connectivity itself as something that can be weaponized—cut, slowed, or selectively restored to shape what people can see, say, and share. In 2024 alone, authorities imposed 304 internet shutdowns across 54 countries—the highest number ever recorded.
In 2011, when protesters in Tunisia, Egypt, and beyond used social media to broadcast their uprisings to the world, many observers heralded a new era of networked freedom. Governments, however, responded quickly by developing and refining systems of control that have only grown more sophisticated over time. Today’s landscape of regulation, blackouts, and degraded networks reflects that trajectory, as early experiments in censorship and disruption have hardened into a durable system of control—what began as an emergency measure has become a normalized infrastructure of control.
A Brief History of Internet Shutdowns
Egypt’s 2011 internet shutdown wasn’t the first. Although the government’s heavy-handed response after just two days of protests caught the world’s attention, Guinea, Nepal, Myanmar, and a handful of other countries had previously enacted shutdowns. But Egypt marked a turning point. In the years that followed, shutdowns increased sharply worldwide, suggesting that governments had taken note—adopting network disruptions as a tactic for suppressing dissent and limiting the flow of information within and beyond their borders.
On January 28, 2011, at 12:34 a.m. local time, five of Egypt’s internet service providers (ISPs) shut down their networks. At least one provider—Noor, which also hosted the Egyptian stock exchange—remained online, leaving only about 7% of the country connected.
In the aftermath of President Hosni Mubarak’s resignation, rights groups sought to understand how such a sweeping shutdown had been possible—and how future incidents might be prevented. There was no centralized “kill switch.” Instead, authorities leveraged the country’s highly consolidated telecommunications sector, in which all providers operate under government license. With only a handful of ISPs, a small number of directives was enough to bring most of the network offline.
In the years following Egypt’s 2011 shutdown, telecommunications companies—many of which had been directly implicated in enabling state-ordered disruptions—began to organize around a shared set of human rights challenges. Beginning that same year, a group of operators and vendors quietly convened to examine how the UN Guiding Principles on Business and Human Rights applied to their sector, particularly in contexts where government demands could translate into sweeping restrictions on access. By 2013, this effort had formalized into the Telecommunications Industry Dialogue, bringing together major global firms to develop common principles on freedom of expression and privacy and, through a partnership with the Global Network Initiative, engage more directly with civil society. The initiative reflected a growing recognition that telecom companies—unlike platforms—operate at a critical chokepoint in the network. But it also underscored the limits of voluntary approaches: while the Dialogue helped establish shared norms, it did little to constrain the legal and political pressures that continue to drive shutdowns—or to prevent companies from complying with them.
If the early aughts were defined by improvised shutdowns, the years since have seen governments formalize their power to control networks. What was once exceptional is now often embedded in law.
In India, the 2017 Temporary Suspension of Telecom Services Rules—issued under the Telegraph Act—provided a clear legal pathway for cutting connectivity. The Telecommunications Act, 2023, further entrenched the government’s ability to enact shutdowns, granting the central and state governments, or “authorised officers,” the power to suspend telecommunications services in the interest of public safety or sovereignty, or during emergencies. The government has used these measures repeatedly, particularly in Jammu and Kashmir. India’s Software Freedom Law Centre’s Shutdown Tracker shows India as instigating more than 900 shutdowns, 447 of which were in Jammu and Kashmir.
In Kazakhstan, shutdowns have also become common. Over the years, the government has passed legislation that allows state agencies to shut down the internet. The 2012 law on national security enabled the government to disrupt communications channels during anti-terrorist operations and to contain riots. In 2014 and 2016, laws were further amended to expand the number of actors able to shut down the internet without a court decision, and a government decree in 2018 enabled shutdowns in the event of a “social emergency.”
Elsewhere, governments have built or expanded legal and technical frameworks that enable similar control over information flows. Ethiopia’s state-dominated telecom sector has facilitated sweeping shutdowns during periods of conflict, including the war in Tigray, where the internet was disconnected for more than two years. In Iran, authorities have developed regulatory and infrastructural capacity to isolate domestic networks from the global internet, allowing them to restrict external visibility while maintaining limited internal connectivity. This year alone, Iranians have spent one third of the year offline. And amidst the ongoing war, Iranian officials have made it clear that the internet is a privilege for those who toe the government’s official line.
Even where laws do not explicitly authorize shutdowns, broadly worded provisions around national security or public order are routinely used to justify them. The result is a growing legal architecture that treats network disruptions not as extraordinary measures, but as standard tools for managing populations.
When that authority is exercised over a population beyond a state’s own citizens, the consequences can be even more severe. Israel’s Ministry of Communications controls the flow of communications in and out of Palestine and has used that power to shut down internet access during periods of conflict. Over the past two and a half years, Gaza has experienced repeated outages, and experts now estimate that roughly 75% of its telecommunications infrastructure has been damaged—leaving essential services severely disrupted.
Elections and the Expansion of Control
Historically, most blackouts have occurred during moments of intense political tension. But authorities are increasingly using them as a tool to preempt dissent.
In 2024, as more than half the world’s population headed to the polls, shutdowns followed. That year alone, authorities imposed 304 internet shutdowns across 54 countries—the highest number ever recorded, surpassing the previous record set just a year earlier. The geographic spread also widened significantly, with shutdowns affecting more countries than ever before. The Comoros imposed a shutdown for the first time, while other countries, such as Mauritius, instituted broad bans on social media platforms during elections.
At least 24 countries holding elections in 2024 had a prior history of shutdowns, putting billions of people at risk of disruptions during critical democratic moments.
What stands out is not just the scale, but the normalization. Notably, the number of shutdowns in 2025 broke the record set the year prior. Whereas network disruptions were once a rare occurrence, they are now a routine measure, increasingly treated by authorities as a standard response to periods of heightened political sensitivity.
Civil Society Fights Back
Governments use all sorts of justifications—national security, curbing the spread of disinformation, and even preventing students from cheating on exams—for internet shutdowns. But civil society is watching, and documenting, network disruptions and their impact on citizens.
In 2016, as shutdowns became an increasingly common tool of state control, Access Now launched the #KeepItOn campaign to coordinate global advocacy against network disruptions. The campaign includes a coalition composed of 345 advocacy groups (including EFF), research centers, detection networks, and others who work together to report on, and fight back against, internet shutdowns. Anyone can get involved by signing on to campaign action alerts, sharing their story, or reporting a shutdown in their jurisdiction.
Ending this harmful practice remains the goal. In 2016, the UN passed a landmark resolution supporting human rights online and condemning internet shutdowns, and UN agencies have continued to warn against the practice. But the fight to change government practices remains an uphill battle, leading civil society—and even companies—to get creative.
During repeated shutdowns in Gaza, grassroots efforts mobilised to distribute eSIMs so Palestinians could stay connected. In 2024, EFF recognized Connecting Humanity, a Cairo-based non-profit providing eSIM access in Gaza, with its annual award for its vital work. Satellite internet such as Starlink has been supplied to people in Ukraine and Iran, though it, too, is not immune to state control. Alongside these efforts, civil society continues to share practical guidance on circumventing shutdowns and maintaining access to information.
EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world—and we’ll continue to fight back against internet shutdowns wherever they occur.
This is the fourth installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.
Defense in Depth, Medieval Style [Schneier on Security]
This article on the walls of Constantinople is fascinating.
The system comprised four defensive lines arranged in formidable layers:
- The brick-lined ditch, divided by bulkheads and often flooded, 15–20 meters wide and up to 7 meters deep.
- A low breastwork, about 2 meters high, enabling defenders to fire freely from behind.
- The outer wall, 8 meters tall and 2.8 meters thick, with 82 projecting towers.
- The main wall—a towering 12 meters high and 5 meters thick—with 96 massive towers offset from those of the outer wall for maximum coverage.
Behind the walls lay broad terraces: the parateichion, 18 meters wide, ideal for repelling enemies who crossed the moat, and the peribolos, 15–20 meters wide between the inner and outer walls. From the moat’s bottom to the highest tower top, the defences reached nearly 30 meters—a nearly unscalable barrier of stone and ingenuity.
Emmanuel Kasper: Minix 3 on Beagle Board Black (ARM) [Planet Debian]

Connected via serial console. It does not have a package manager, web server, or ssh server, but it can play tetris in the terminal (bsdgames in Debian has the same tetris version packaged).
What do you own? [Seth's Blog]
What does it mean for us to own something?
If we own a piece of land and the rain washes the topsoil downstream, do we go and get the topsoil back?
Do we own our reputation? We have influence over it, but some of it was gifted to us without our knowledge, and other parts are influenced by forces out of our control.
Do we own responsibility? Is it something we take or acquire or accept?
We can try to own our past, but the best we can do is influence our future.
Ownership is a shared understanding, a construct that can shift depending on where we stand. It’s not always up to us, but it often works better if we acknowledge it.
Preparatory School [Penny Arcade]
New Comic: Preparatory School
Freexian Collaborators: Debian Contributions: Debusine projects in GSoC, Debian CI updates, Salsa CI maintenance and more! (by Anupa Ann Joseph) [Planet Debian]

Contributing to Debian is part of Freexian’s mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.
While Freexian initiated Debusine, and is investing a lot of resources in the project, we manage it as a true free software project that can and should have a broader community.
We always had documentation for new contributors and we aim to be reactive with them when they interact via the issue tracker or via merge requests. We decided to put those intentions under stress tests by proposing five projects for Google’s Summer of Code as part of Debian’s participation in that program.
Given that at least 11 candidates managed to get their merge requests accepted in the last 30 days (interacting with the development team is part of the prerequisites for applying to Google Summer of Code projects these days), the contributing experience must not be too bad. 🙂 If you want to try it out, we maintain a list of “quick fixes” that are accessible to newcomers. And as always, we welcome your feedback!
debci 3.14 was released on March 4th, with a followup 3.14.1 release a few days afterwards containing regression fixes. Those releases were followed by new development and maintenance work that will provide extra capabilities and stability to the platform.
This month saw the initial version of an incus backend land in Debian CI. The transition to the new backend will be done carefully so as not to disrupt ‘testing’ migration: each package will run jobs on both the current lxc backend and on incus. Packages that produce the same result on both backends will be migrated over, while packages that exhibit different results will be investigated further, resulting in bug reports and/or other communication with the maintainers.
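The backend-comparison rule described above (same result on both backends: migrate; differing results: investigate) can be sketched roughly as follows. This is a hypothetical Python illustration; the function name and the result format are assumptions, not debci's actual code or data model:

```python
def plan_migration(lxc_results, incus_results):
    """Partition packages by whether the lxc and incus backends agree.

    Each argument maps a package name to its test outcome, e.g.
    "pass" or "fail" (an assumed format, not debci's real one).
    """
    migrate, investigate = [], []
    # Only compare packages that ran on both backends.
    for pkg in sorted(set(lxc_results) & set(incus_results)):
        if lxc_results[pkg] == incus_results[pkg]:
            migrate.append(pkg)       # same result: safe to move to incus
        else:
            investigate.append(pkg)   # differing result: needs a closer look
    return migrate, investigate

migrate, investigate = plan_migration(
    {"foo": "pass", "bar": "fail", "baz": "pass"},
    {"foo": "pass", "bar": "pass", "baz": "pass"},
)
print(migrate)      # ['baz', 'foo']
print(investigate)  # ['bar']
```

Comparing every package on both backends before switching it over is what keeps the transition from silently changing ‘testing’ migration outcomes.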
On the frontend side, the code has been ported to Bootstrap 5 from the now-ancient Bootstrap 3. This need was originally reported back in 2024, based on the lack of security support for Bootstrap 3. Beyond improving maintainability, this upgrade also enables support for dark mode in debci, which is still a work in progress.
Both updates mentioned in this section will be available in an upcoming debci release.
Santiago reviewed some Salsa CI issues and reviewed associated merge requests. For example, he investigated a regression (#545), introduced by the move to sbuild, on the use of extra repositories configured as “.source” files; and reviewed the MR (!712) that fixes it.
Also, there were conflicts with the changes made in debci 3.14 and 3.14.1 (mentioned above), and several people contributed long-term fixes for the resulting issues. This includes Raphaël, who proposed MR !707 and also suggested that Antonio merge the Salsa CI patches to avoid similar errors in the future, which happened shortly after. Those fixes finally required the unrelated MR !709, which will prevent similar problems when building images.
To identify bugs related to the autopkgtest support in the backport suites as early as possible, Santiago proposed MR !708.
Finally, Santiago, in collaboration with Emmanuel Arias, also had exchanges with GSoC candidates for the Salsa CI project, including about the contributions they have made as merge requests. It is important to note that several very good candidates are interested in participating. Thanks a lot to them for their work so far!
- […] hatchling, python-mitogen, python-virtualenv, python-discovery, dh-python, pypy3, python-pipx, and git-filter-repo.
- […] crun, libmaxminddb, librdkafka, lowdown, platformdirs, python-discovery, sphinx-argparse-cli, tox, tox-uv.
- […] DPKG_ROOT to support installing hurd.
- […] riscv64. After another NMU, python-memray finally migrated.
- […] epson-inkjet-printer-escpr and sane-airscan. He also fixed a packaging bug in printer-driver-oki. As of systemd 260.1-1 the configuration of lpadmin has been added to the sysusers.d configuration. All printing packages can now simply depend on the systemd-sysusers package and don’t have to take care of its creation in maintainer scripts anymore.
- […] python-pyhanko-certvalidator. pyHanko is a suite for signing and stamping PDF files, and one of the few libraries that can be leveraged to sign PDFs with eIDAS Qualified Electronic Signatures.
Pluralistic: Rights for robots (15 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]

The Rights of Nature movement uses a bold tactic to preserve our habitable Earth: it seeks to extend (pseudo) personhood to things like watersheds, forests and other ecosystems, as well as nonhuman species, in hopes of creating legal "standing" to ask the courts for protection:
https://en.wikipedia.org/wiki/Rights_of_nature
What do watersheds, forests and nonhuman species need protection from? That turns out to be a very interesting question, because the most common adversary in a Rights of Nature case is another pseudo-person: namely, a limited liability corporation.
These nonhuman "persons" have been a feature of our legal system since the late 19th century, when the Supreme Court found that the 14th Amendment's "Equal Protection" clause could be applied to a railroad. In the 150-some years since, corporate personhood has monotonically expanded, most notoriously through cases like Hobby Lobby, which gave a corporation the right to discriminate against women on the grounds that it shared its founders' religious opposition to abortion; and, of course, in Citizens United, which found that corporate personhood meant that corporations had a constitutional right to divert their profits to bribe politicians.
Theoretically, "corporate personhood" extends to all kinds of organizations, including trade unions – but in practice, corporate personhood primarily allows the ruling class to manufacture new "people" to serve as a botnet on their behalf. A union has free speech rights just like an employer, but the employer's property rights mean that it can exclude union organizers from its premises, and employer rights mean that corporations can force workers to sit through "captive audience" meetings where expensive consultants lie to them about how awful a union would be (the corporation's speech rights also mean that it's free to lie).
In my view, corporate personhood has been an unmitigated disaster. Creating "human rights" for these nonhuman entities led to the catastrophic degradation of the natural world, via the equally catastrophic degradation of our political processes.
In a strange way, corporate personhood has realized the danger that reactionary opponents of votes for women warned of. In the days of the suffrage movement, anti-feminists claimed that giving women the vote would simply lead to husbands getting two votes, since wives would simply vote the way their husbands told them to.
This libel never died out. Take the recent hard-fought UK by-election in Gorton and Denton (basically Manchester): this was the first test of the Green Party's electoral chances under its new leader, the brilliant and principled leftist Zack Polanski. The Green candidate was Hannah Spencer, a working-class plumber and plasterer who rejected the demonization of the region's Muslim voters, unlike her rivals from Labour (which has transformed itself into a right-wing party), Reform (a fascist party), and the Conservatives (an irrelevant and dying right party). During the race (and especially after Spencer romped to a massive victory) Spencer's rivals accused her of courting "family voters," by which they meant Muslim wives, who would vote the way their Islamist husbands ordered them to. Despite the facial absurdity of this claim – that the Islamist vote would go for the pro-trans party led by a gay Jew – it was widely repeated:
https://www.bbc.com/news/articles/clyxeqpzz2no
"Family voting" isn't a thing, but corporate personhood has conferred political rights on the ruling class, who get to manufacture corporate "people" at scale, each of which is guaranteed the same right to contribute to politicians and intervene in our politics as any human.
Contrast this with the Rights for Nature movement. Where corporate personhood leads to a society with less empathy for living things (up to and including humans), Rights for Nature creates a legal and social basis for more empathy. In her stunning novel A Half-Built Garden, Ruthanna Emrys paints a picture of a world in which the personhood of watersheds and animals become as much of a part of our worldview as corporate personhood is today:
https://pluralistic.net/2022/07/26/aislands/#dead-ringers
Scenes from A Half-Built Garden kept playing out in my mind last month while I attended the Bioneers conference in Berkeley, where they carried on their decades-long tradition of centering indigenous activists whose environmental campaigns were intimately bound up with the idea of personhood for the natural world and its inhabitants:
On the last morning, my daughter and I sat through a string of inspiring and uplifting presentations from indigenous-led groups that had used Rights of Nature to rally support for legal challenges that had forced those other nonhuman "persons" – limited liability corporations – to retreat from plans to raze, poison, or murder whole regions.
The final keynote speaker that morning was the writer Michael Pollan, who spoke about a looming polycrisis of AI, and I found myself groaning and squirming. Not him, too! Were we about to be held captive to yet another speaker convinced that AI was going to become conscious and turn us all into paperclips?
That seemed to be where he was leading, as he discussed the way that chatbots were designed to evince the empathic response we normally reserve for people – the same empathy that all the other speakers were seeking to inspire for nature. But then, he took an unexpected and welcome turn: Pollan compared extending personhood to chatbots to the disastrous decision to extend personhood to corporations, and urged us all to turn away from it.
This crystallized something that had niggled at me for years. For years, people I respect have used the Rights for Nature movement as an argument for extending empathy to software constructs. The more we practice empathy – and the more rights we afford to more entities – the better we get at it. Personhood for things that are not like us, the argument goes, makes our own personhood more secure, by honing a reflex toward empathy and respect for all things. This is the argument for saying thank you to Siri (and now to other chatbots):
https://ojs.lib.uwo.ca/index.php/fpq/article/download/14294/12136
Siri – like so many of our obedient, subservient, sycophantic chatbots – impersonates a woman. If we get habituated to barking orders at a "woman" (or at our "assistants") then this will bleed out into our interactions with real women and real assistants. Extending moral consideration to Siri, though "she" is just a software construct, will condition our reflexes to treat everything with respect.
For years, I'd uncritically accepted that argument, but after hearing Pollan speak, I changed my mind. Rather than treating Siri with respect because it impersonates a woman, we should demand that Siri stop impersonating a woman. I don't thank my Unix shell when I pipe a command to grep and get the output that I'm looking for, and I don't thank my pocket-knife when it slices through the tape on a parcel. I can appreciate that these are well-made tools and value their thoughtful design, but that doesn't mean I have to respect them in the way that I would respect a person.
That way lies madness – the madness that leads us to ascribe personalities to corporations and declare some of them to be "immoral" and others to be "moral," which is always and forever a dead end:
https://pluralistic.net/2024/01/12/youre-holding-it-wrong/#if-dishwashers-were-iphones
In other words: there's an argument from the Rights of Nature movement that says that the more empathy we practice, the better off we are in all our interactions. But Pollan complicated that argument, by raising the example of corporate personhood. It turns out that extending personhood to constructed nonhuman entities like corporations reduces the amount of empathy we practice. Far from empowering labor unions, the creation of "human" rights for groups and organizations has given capital more rights over workers. A labor rights regime can defend workers – without empowering bosses and without creating new "persons."
The question is: is a chatbot more like a corporation (whose personhood corrodes our empathy) or more like a watershed (whose personhood strengthens our empathy)? But to ask that question is to answer it – a chatbot is definitely more like a corporation than it is like a watershed. What's more: in a very real, non-metaphorical way, giving rights to chatbots means taking away rights from nature, thanks to LLMs' energy-intensity.
Empathy then, for the nonhuman world – but not for human constructs.

The MetaBrainz Foundation is seeking a new Executive Director (ED) https://blog.metabrainz.org/2026/04/14/seeking-a-new-executive-director/
Missouri Town Council Approves Data Center. A Week Later, Voters Fire Half of Council https://gizmodo.com/missouri-town-council-approves-data-center-a-week-later-voters-fire-half-of-council-2000746005
Wikilinker https://whitelabel.org/wikilinker/about/
Fold Catastrophes/Peter Watts https://tachyonpublications.com/product/fold-catastrophes/?mc_cid=c20986aa78
#20yrsago Canadian labels pull out of RIAA-fronted Canadian Recording Industry Ass. https://web.archive.org/web/20060414170111/https://www.michaelgeist.ca/component/option,com_content/task,view/id,1204/Itemid,85/nsub,/
#20yrsago EFF publishes “7 Years Under the DMCA” paper https://web.archive.org/web/20060415110951/https://www.eff.org/deeplinks/archives/004555.php
#20yrsago Life of a writer as a Zork adventure https://web.archive.org/web/20060414115745/http://acephalous.typepad.com/acephalous/2006/04/disadventure.html
#20yrsago NOLA mayoral candidate uses photo of Disneyland New Orleans Square https://web.archive.org/web/20060414214356/https://www.wonkette.com/politics/new-orleans/not-quite-the-happiest-place-on-earth-166989.php
#20yrsago AOL won’t deliver emails that criticize AOL https://web.archive.org/web/20060408133439/https://www.eff.org/news/archives/2006_04.php#004556
#15yrsago UK court rules that kettling was illegal https://www.theguardian.com/uk/2011/apr/14/kettling-g20-protesters-police-illegal
#15yrsago If Chris Ware was Charlie Brown https://eatmorebikes.blogspot.com/2011/04/lil-chris-ware.html
#10yrsago Piracy dooms motion picture industry to yet another record-breaking box-office year https://torrentfreak.com/piracy-fails-to-prevent-box-office-record-160413/
#10yrsago Panama Papers: Mossack Fonseca law offices raided by Panama authorities https://www.reuters.com/article/us-panama-tax-raid-idUSKCN0XA020/
#10yrsago Panama Papers reveal offshore companies were bagmen for the world’s spies https://web.archive.org/web/20160426083004/https://www.yahoo.com/news/panama-papers-reveal-spies-used-mossak-fonseca-231833609.html
#10yrsago How corporate America’s lobbying budget surpassed the combined Senate and Congress budget https://web.archive.org/web/20150422010643/https://www.theatlantic.com/business/archive/2015/04/how-corporate-lobbyists-conquered-american-democracy/390822/
#10yrsago URL shorteners are a short path to your computer’s hard drive https://arxiv.org/abs/1604.02734
#10yrsago UL has a new, opaque certification process for cybersecurity https://arstechnica.com/information-technology/2016/04/underwriters-labs-refuses-to-share-new-iot-cybersecurity-standard/
#10yrsago Jeremy Corbyn overpays his taxes https://web.archive.org/web/20160413192208/https://www.politicshome.com/news/uk/political-parties/labour-party/news/73724/jeremy-corbyn-overstated-income-his-tax-return
#10yrsago Cassetteboy’s latest video is an amazing, danceable anti-Snoopers Charter mashup https://www.youtube.com/watch?v=D2fSXp6N-vs
#10yrsago Texas: prisoners whose families maintain their social media presence face 45 days in solitary https://www.eff.org/deeplinks/2016/04/texas-prison-system-unveils-new-inmate-censorship-policy
#5yrsago Data-brokerages vs the world https://pluralistic.net/2021/04/13/public-interest-pharma/#axciom
#5yrsago What "IP" means https://pluralistic.net/2021/04/13/public-interest-pharma/#ip
#5yrsago Bill Gates will kill us all https://pluralistic.net/2021/04/13/public-interest-pharma/#gates-foundation
#5yrsago Jackpot https://pluralistic.net/2021/04/13/public-interest-pharma/#affluenza

San Francisco: 2026 Berkeley Spring Forum on M&A and the Boardroom, Apr 23
https://www.theberkeleyforum.com/#agenda
London: Resisting Big Tech Empires (LSBU), Apr 25
https://www.tickettailor.com/events/globaljusticenow/2042691
NYC: Enshittification at Commonweal Ventures, Apr 29
https://luma.com/ssgfvqz8
NYC: Techidemic with Sarah Jeong, Tochi Onyebuchi and Alia Dastagir (PEN World Voices), Apr 30
https://worldvoices.pen.org/event/techidemic/
Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow
Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html
Hay-on-Wye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2
SXSW London, Jun 2
https://www.sxswlondon.com/session/how-big-tech-broke-the-internet-b3c4a901
NYC: The Reverse Centaur's Guide to Life After AI (The Strand), Jun 24
https://www.strandbooks.com/cory-doctorow-the-reverse-centaur-s-guide-to-life-after-ai.html
Do you feel screwed over by big tech? (Ontario Today)
https://www.cbc.ca/listen/live-radio/1-45-ontario-today/clip/16203024-do-feel-screwed-big-tech
Launch for Cindy's Cohn's "Privacy's Defender" (City Lights)
https://www.youtube.com/watch?v=WuVCm2PUalU
Chicken Mating Harnesses (This Week in Tech)
https://twit.tv/shows/this-week-in-tech/episodes/1074
The Virtual Jewel Box (U Utah)
https://tanner.utah.edu/podcast/enshittification-cory-doctorow-matthew-potolsky/
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
"The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027
"The Memex Method," Farrar, Straus, Giroux, 2027
Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Bluesky (no ads, possible tracking and data-collection):
https://bsky.app/profile/doctorow.pluralistic.net
Medium (no ads, paywalled):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
time-1.10 released [stable] [Planet GNU]
This is to announce time-1.10, a stable release.
The 'time' command runs another program, then displays information about
the resources used by that program.
There have been 79 commits by 5 people in the 422 weeks since 1.9.
See the NEWS below for a brief summary.
Thanks to everyone who has contributed!
The following people contributed changes to this release:
Andreas Schwab (1)
Assaf Gordon (10)
Collin Funk (65)
Dominique Martinet (1)
Petr Písař (2)
Collin
[on behalf of the time maintainers]
==================================================================
Here is the GNU time home page:
https://gnu.org/s/time/
Here are the compressed sources:
https://ftp.gnu.org/gnu/time/time-1.10.tar.gz (832KB)
https://ftp.gnu.org/gnu/time/time-1.10.tar.xz (572KB)
Here are the GPG detached signatures:
https://ftp.gnu.org/gnu/time/time-1.10.tar.gz.sig
https://ftp.gnu.org/gnu/time/time-1.10.tar.xz.sig
Use a mirror for higher download bandwidth:
https://www.gnu.org/order/ftp.html
Here are the SHA256 and SHA3-256 checksums:
SHA256 (time-1.10.tar.gz) = 6MKftKtZnYR45B6GGPUNuK7enJCvJ9DS7yiuUNXeCcM=
SHA3-256 (time-1.10.tar.gz) = zDjyfyzfABsSZp7lwXeYr368VzjZMkNPUJNnfpIakGk=
SHA256 (time-1.10.tar.xz) = cGv3uERMqeuQN+ntoY4dDrfCMnrn2MLOOkgjxfgMexE=
SHA3-256 (time-1.10.tar.xz) = U/Z0kMenoHkc7+rkCHMeyku8nXvIPppoQ2jq3B50e/A=
Verify the base64 SHA256 checksum with 'cksum -a sha256 --check'
from coreutils-9.2 or OpenBSD's cksum since 2007.
Verify the base64 SHA3-256 checksum with 'cksum -a sha3 --check'
from coreutils-9.8.
Use a .sig file to verify that the corresponding file (without the
.sig suffix) is intact. First, be sure to download both the .sig file
and the corresponding tarball. Then, run a command like this:
gpg --verify time-1.10.tar.gz.sig
The signature should match the fingerprint of the following key:
pub rsa4096/8CE6491AE30D7D75 2024-03-11 [SC]
Key fingerprint = 2371 1855 08D1 317B D578 E5CC 8CE6 491A E30D 7D75
uid [ultimate] Collin Funk <collin.funk1@gmail.com>
If that command fails because you don't have the required public key,
or that public key has expired, try the following commands to retrieve
or refresh it, and then rerun the 'gpg --verify' command.
gpg --locate-external-key collin.funk1@gmail.com
gpg --recv-keys 8CE6491AE30D7D75
wget -q -O- 'https://savannah.gnu.org/project/release-gpgkeys.php?group=time&download=1' | gpg --import -
As a last resort to find the key, you can try the official GNU
keyring:
wget -q https://ftp.gnu.org/gnu/gnu-keyring.gpg
gpg --keyring gnu-keyring.gpg --verify time-1.10.tar.gz.sig
This release is based on the time git repository, available as
git clone https://https.git.savannah.gnu.org/git/time.git
with commit 40003f3c8c4ad129fbc9ea0751c651509ac5bb23 tagged as v1.10.
For a summary of changes and contributors, see:
https://gitweb.git.savannah.gnu.org/gitweb/?p=time.git;a=shortlog;h=v1.10
or run this command from a git-cloned time directory:
git shortlog v1.9..v1.10
This release was bootstrapped with the following tools:
Autoconf 2.73
Automake 1.18.1
Gnulib 2026-04-13 c754c51f0f2b9a1e22d0d3eadfefff241de0ea48
NEWS
* Noteworthy changes in release 1.10 (2026-04-14) [stable]
** Bug fixes
'time --help' no longer incorrectly lists the short option -h as being
supported. Previously it was listed as being equivalent to --help.
[bug introduced in time-1.8]
'time --help' no longer emits duplicate percent signs in the description of
the --portability option.
[bug introduced in time-1.8]
time now opens the file specified by --output with its close-on-exec flag set.
Previously the file descriptor would be leaked into the child process.
[This bug was present in "the beginning".]
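The close-on-exec fix follows the standard O_CLOEXEC pattern: a descriptor opened with that flag is closed automatically when a child program is exec'd, instead of leaking into it. A minimal Python sketch of the mechanism (GNU time itself is written in C; the file name here is illustrative):

```python
import fcntl
import os
import tempfile

# Opening the output file with O_CLOEXEC sets the FD_CLOEXEC flag on
# the descriptor, so the kernel closes it across exec*() rather than
# passing it on to the child process.
with tempfile.TemporaryDirectory() as tmp:
    fd = os.open(os.path.join(tmp, "out.txt"),
                 os.O_WRONLY | os.O_CREAT | os.O_CLOEXEC, 0o644)
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    print(bool(flags & fcntl.FD_CLOEXEC))  # True
    os.close(fd)
```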
time no longer appends the program name to the output when the format string
contains a trailing backslash.
[This bug was present in "the beginning".]
** Improvements
time now uses the more portable waitpid and getrusage system calls
instead of wait3.
time can now be built using a C23 compiler.
time now uses unlocked stdio functions on platforms that provide them.
Girl Genius for Wednesday, April 15, 2026 [Girl Genius]
The Girl Genius comic for Wednesday, April 15, 2026 has been posted.
Dilly-Dallying In Denver: Day 3 [Whatever]
The title of this post is partially inaccurate, as part of my third day in Denver was spent in Boulder. Before going into Boulder, Alex and I decided to kick the day off with a mani pedi, and get matching colors. Cat eye polish, of course:

I was obsessed with this color, and I think it looked especially good on Alex’s longer nails. I mean just look at these bad boys:

Sparkly!
With fresh nails, we finally headed towards Boulder. Our first stop was the Boulder Museum of Contemporary Art. This art museum is “pay from your heart,” which means you can pay as much as you feel like for admission. I love this idea because it makes art so accessible, especially for Boulder college kids. Art museum prices can be pretty intense, so being able to price the admission for what fits into your budget is really nice.
While I didn’t photograph any of the actual artwork, I did capture the summary of this specific exhibition they had going on called “Yes, &…“:

I liked the theme. It was interesting, and all of the pieces I saw were definitely very unique and full of different mediums and mixed media. Very cool stuff all around, and the gift shop was awesome. I got some cute cards and stickers!
Right next door to the museum was the spot I was most excited for, the Boulder Dushanbe Teahouse. I have a hard time liking tea, but I love tea houses and tea time. It’s more of an aesthetic thing, really. And Dushanbe is, in fact, an extremely aesthetic tea house. With an ornate, colorful interior filled with plants, statues, and high, hand-painted ceilings held up by hand-carved cedar columns, the artistry pours out of every nook and cranny. On their website, this page talks about the 40 different Tajikistani artists who created the art that makes this tea house so beautiful, as well as the capital of Tajikistan, the teahouse’s namesake.
Look how wild these details are!

The tea house is very popular, and their daily Afternoon Tea requires a reservation 24 hours in advance. Their even more coveted weekend Dim Sum Teatime is only offered on select weekends throughout the year, and reservations are required 60 days in advance.
As amazing as those sounded, Alex and I just went for their regular walk-in lunch, no waiting or reservation required. Though while we were there, they were actively setting up for their Afternoon Tea, and I got to see some of that unfold and peek at some of the snacks being served. Plus each tea time table gets fresh flowers:

Besides their extensive tea menu, they also have some different beverages and cocktails to choose from:


I love that all of their cocktails (and mocktails) have tea in them, so fitting!
I started off with their house chai, as my friend highly recommended it:

I actually ordered this iced but it came hot, and I wasn’t about to complain. It really wasn’t a big deal and it was delicious hot, so it’s totally whatever. Alex definitely didn’t steer me wrong, this chai was very nicely spiced and not too sweet like a lot of chai lattes end up being.
I also ended up ordering the Espresso Bliss cocktail, because you already know I adore espresso martinis:

Tea infused vodka, Marble Moonlight espresso liqueur, Colorado Cream Liqueur, and espresso. I liked that this espresso martini had both espresso liqueur and cream liqueur, as a lot of espresso martinis don’t have any kind of cream component. Which is fine, too, just sometimes I like them creamier and sweeter rather than cold brew style.
And a quick look at the food before ordering our tea:


We actually did not get any food because we were trying to make sure we were hungry for our reservations at Shells & Sauce later that day, so we just stuck with tea (and a lil bit of vodka for me, evidently).
Finally, time for our actual tea:

We decided to share two pots, one of their white peach tea and one mango tea. They brought out our sets and a timer, and when the timer was done our tea would be done steeping. Alex took their tea plain, while I added copious amounts of cream and sugar. I’m a menace, I know.
I also wanted to show y’all this table behind ours, though it wasn’t cleaned off yet, look how nice this seating area is:

I would love to sit here with a big group of friends and experience their Afternoon Tea service.
After our tea session concluded, we checked out the shop and ended up taking some tea home. I really liked this tea house and definitely want to come back for food sometime!
Once we drove back to Denver, we chilled at the apartment before heading to our dinner reservation at Shells & Sauce, which they say on their website is a neighborhood Italian bistro. They weren’t kidding. This place is located in such a random little neighborhood next to a dry cleaners and a Chinese restaurant, and is just a little place absolutely packed with excited diners. Line out the door, yet nothing flashy on the inside. Just a small neighborhood joint, as advertised.
While we had originally come for their Restaurant Week menu, we decided to not pursue that menu and just order whatever we wanted instead.
I started off with one of their signature cocktails, the Pearfect Martini:

Grey Goose La Poire (pear vodka), pear puree, lemon, and Prosecco. Does that not sound like a nice, refreshing, crisp martini? It was pretty good, definitely a little spirit-forward but it honestly might’ve just been a heavy pour. I mean, the glass is definitely very full.
We split two appetizers: the garlic cheese curds, and the crab cakes.

The texture of these cheese curds was really good, they were nice and squeaky curds, too. I will say there wasn’t a ton of garlic flavor, they seemed more just like plain cheese curds, but who doesn’t love a good curd?

While I’m always happy to have a crab cake, these ones weren’t particularly memorable. They weren’t bad at all but were just very standard.
Then, it was time for our entrees. I got the Stuffed Shells Duo:

The two shells on the left were six-cheese stuffed shells with marinara, and on the right we have the sweet potato, butternut squash, and goat cheese stuffed shells with pesto cream.
While the flavors of the stuffed shell fillings were really good, especially the sweet potato one, the pesto cream sauce was a broken emulsion, which made the dish feel rather heavy and oily. So while the fillings were tasty, I think the presentation and mouthfeel of the dish suffered from the oily sauce. Which is sad because I love pesto cream!
My friend just got chicken fettuccini alfredo:

We opted not to get dessert. The food was okay, the vibe was okay, and the service was just okay. Honestly, I’d rather go here when there’s no dinner rush, sit on the patio, and just have some wine and bruschetta.
Once again we returned to the apartment, and this time we partook in the lovely amenities of the apartment, that being the rooftop pool and hot tub. It was definitely too chilly for the pool, especially because of the wind, but the hot tub was so nice.
After that brief relaxing period, we knew it was time to hit the bars (we only hit two, haha).
First up on our list was a rooftop bar super close to Alex’s apartment called Sorry Gorgeous. You’ll know you’re on the right path when you see this doormat in front of the elevator:

I really loved the interior design of Sorry Gorgeous. Green velvet couches, huge moon lamps, plants, a low-lit bar area and a great view of the nighttime skyline.
I didn’t take too many photos, but here’s some to get a general vibe for the place:

I love how the shelves are built into the wall like it’s some sort of cave full of liquor.

As you can see, it wasn’t very crowded; most everyone resided on that half of the bar while my friend and I were practically all alone on our side.
We ended up moving to this corner booth to take some photos together!

I actually ended up taking a selfie I liked pretty well:

This was about selfie number five hundred and sixty-four, and I gave up on photos shortly after this because I figured one that I liked decently was good enough.
I ordered their All Saints cocktail:

Made with Botanist gin, pear, elderflower, rhubarb, lemon, and winter spices, this cocktail was refreshing and slightly sweet, and felt sophisticated. As you can see, I clearly like pear.
I really liked the service here. Since they weren’t busy we actually ended up talking to one of the staff members for a while and he was super nice and cool. I definitely thought this place would have more of a mean-girl bartender energy but that ended up not being the case at all!
Next time I go, I would love to try their pistachio guacamole and crispy mini tacos.
Onto our next bar of the evening, the Yacht Club.

A warm welcome, no doubt.
While a little small, it more so just has that cozy dive bar feel where yeah, sure you might bump elbows with someone once or twice, but it’s all peachy keen, we’re all comrades, y’know? The bar portion of the Yacht Club is built right into the corner:

What I initially thought was just a dive bar turned out to be something so much cooler and more unique. The Yacht Club is a wildly interesting cocktail bar that also has hotdogs. Lots of hotdogs.

Look at this adorable little teeny tiny hot dog menu! From the classic dog to a dog with caviar, to one served alongside a Jack and Coke, you’re sure to find your preferred type. Personally, I really wanted a sampler platter of all of them.
Aside from the hot dog menu, they had this drink menu:

I went ahead and ordered the Chew-Chu:

I had never heard of shochu before, but it turns out it’s a lot like sake and soju in the sense that it’s a Japanese spirit made from the same sorts of base ingredients, like rice, barley, and sweet potato.
Though this drink was a little dry from the Sauvignon Blanc, it had really good, light flavors and was refreshing to sip on.
Oh, and here’s their menu of “dope shit we have rn”:

That amused me greatly.
Y’all. Look what Alex got:

CANNED GATORADE. Have you ever seen such a thing before?! This was so mind blowing, Yacht Club is officially the coolest place ever.
This is Alex’s drink but I genuinely can’t remember what the heck it is:

Once we had our initial drinks, we were still so stuffed from dinner that I couldn’t have a hot dog, but I knew they clearly had caviar, so I asked if a caviar bump was available for purchase. I love a caviar bump, it feels so luxe and is so spontaneous and fun. Thankfully the bartenders, who were so much fun and absolutely hilarious, said yes, and even did one with us:

Yummy. You’ll never guess how much they cost, either. A cool and breezy five smackaroos. Have you ever had a cheaper caviar bump?!
After taking a house shot, which the bartenders also did with us (I definitely don’t remember what they poured), I got this drink:

I can’t remember the name of this one, but it was very good, with like, a ton of crazy flavors packed in. I know that’s not descriptive, I was decently drunk okay cut me some slack!
Okay, okay, one more, and this is in fact the final of the 36 photos. You’re all troopers. Here’s the final drink of the evening:

This one I do remember the name of. This is the Southside Swizzle. I actually really enjoy Southside cocktails, and this one was no exception. The mint with the strawberry and lime was an elite combo. I love the visual presentation here, too.
Just kidding, I have one more photo! Check out this flamingo wallpaper in their bathroom:

Finally, we walked back to Alex’s apartment, had some snacks, and went to bed. It was a long but extremely fun and memorable day. I absolutely loved the museum, the tea house, Sorry Gorgeous, and the Yacht Club. Highly recommend all of them!
Have you been to Boulder before? Do you like rooftop bars as much as I do? Have you seen canned Gatorade before? Let me know in the comments, and have a great day!
-AMS
Robert Smith: Not all elementary functions can be expressed with exp-minus-log [Planet Lisp]
By Robert Smith
All Elementary Functions from a Single Operator is a paper by Andrzej Odrzywołek that has been making rounds on the internet lately, being called everything from a “breakthrough” to “groundbreaking”. Some are going as far as to suggest that the entire foundations of computer engineering and machine learning should be re-built as a result of this. The paper says that the function
$$ E(x,y) := \exp x - \log y $$
together with variables and the constant $1$, which we will call EML terms, are sufficient to express all elementary functions, and proceeds to give constructions for many constants and functions, from addition to $\pi$ to hyperbolic trigonometry.
I think the result is neat and thought-provoking. Odrzywołek is explicit about his definition of “elementary function”. His Table 1 fixes “elementary” as 36 specific symbols, and under that definition his theorem is correct and clever, so long as we accept some of his modifications to the conventional $\log$ function and do arithmetic with infinities.
My concern is that the word “elementary” in the title carries a much broader meaning in standard mathematical usage. Odrzywołek recognizes this, saying little more than “[t]hat generality is not needed here” and that his work takes “the ordinary scientific-calculator point of view”. He does not offer further commentary.
What is this more general setting, and does his claim still hold? In modern pure mathematics, dating back to the 19th century, the definition of “elementary function” has been well established. We’ll get to a definition shortly, but to cut to the chase, the titular result does not hold in this setting. As such, in layman’s terms, I do not consider the “Exp-Minus-Log” function to be the continuous analog of the Boolean NAND gate or the universal quantum CCNOT/CSWAP gates.
The rough TL;DR is this: Elementary functions typically include arbitrary polynomial root functions, and EML terms cannot express them. Below, I’ll give a relatively technical argument that EML terms are not sufficient to express what I consider standard elementary functions.
To avoid any confusion, the purpose of this blog post is manifold:
This blog post is not a refutation of Odrzywołek’s work, though the title might be considered just as clickbait (and accurate) as his, depending on where you sit in the hall of mathematics and computation.
Disclaimer: I audited graduate-level mathematics courses almost 20 years ago, and I am not a professional mathematician. Please email me if my statements are clumsy or incorrect.
The 19th century is where all modern understanding of elementary functions was developed, Liouville being one of the big names with countless theorems of analysis and algebra named after him. One such result is about integration: do the outputs of integrals look the same as their inputs? Well, what does “input” and “look the same” mean? Liouville defined a class of functions called elementary functions, and said that the integral of an elementary function will sometimes be elementary, and when it is, it will always resemble the input in a specific way, plus potential extra logarithmic factors.
Since then, elementary functions have been defined by starting with rational functions and closing under arithmetic operations, composition, exponentiation, logarithms, and polynomial roots. While EML terms are quite expressive, they are unable to capture the “polynomial roots” in full generality. We will show this by using Khovanskii’s topological Galois theory: the monodromy group of a function built from rational functions by composition with $\exp$ and $\log$ is solvable. For anybody that has studied Galois theory in an algebra course, this will be familiar, as the destination here is effectively the same, but with more powerful intermediate tooling to wrangle exponentials and logarithms.
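A concrete contrast may help here (the example is mine, not from the post): any single radical is already reachable with $\exp$ and $\log$ alone, via

$$ \sqrt[n]{x} = \exp\left(\tfrac{1}{n}\log x\right), $$

and its monodromy group is the cyclic group $\mathbb{Z}/n\mathbb{Z}$, which is abelian and hence solvable. The obstruction therefore only appears for algebraic functions whose monodromy group is not solvable, such as a root of the generic quintic discussed below.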
First, let’s be more precise about what we mean by an EML term and by a standard elementary function.
Definition (EML Term): An EML term in the variables $x_1,\dots,x_n$ is any expression obtained recursively, starting from $\{1, x_1,\dots,x_n\}$, by the rule $$ T,S \mapsto \exp T-\log S. $$ Each such term, evaluated at a point where all the $\log$ arguments are nonzero, determines an analytic germ; we take $\mathcal T_n$ to be the class of germs representable this way, together with their maximal analytic continuations.
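To make the recursion concrete, here is a tiny evaluator for EML terms. This is a sketch of my own (the pair representation, the name `eval_eml`, and the example are mine, not Odrzywołek’s): a term is the constant `1`, a variable name, or a pair `(T, S)` denoting $\exp T - \log S$.

```python
import math

def eval_eml(term, env):
    """Evaluate an EML term at the point given by env (variable name -> value)."""
    if term == 1:                       # the constant 1
        return 1.0
    if isinstance(term, str):           # a variable x_i
        return env[term]
    T, S = term                         # the rule T, S -> exp T - log S
    return math.exp(eval_eml(T, env)) - math.log(eval_eml(S, env))

# E(x, 1) = exp(x) - log(1) = exp(x): the exponential itself is an EML term,
# and nesting pairs builds more complicated terms from there.
print(eval_eml(("x", 1), {"x": 2.0}))
```

Every germ in $\mathcal T_n$ arises by evaluating such a nested term at a point where all the $\log$ arguments stay off zero.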
Definition (Standard Elementary Function): The standard elementary functions $\mathcal{E}_n$ are the smallest class of multivalued analytic functions on domains in $\mathbb{C}^n$ containing the rational functions and closed under
What we will show is that the class of elementary functions defined this way is strictly larger than the class induced by EML terms.
Lemma: Every EML term has solvable monodromy group. In particular, if $f\in\mathcal T_n$ is algebraic over $\mathbb C(x_1,\dots,x_n)$, then its monodromy group is a finite solvable group.
Proof: We prove by induction on EML term construction. Constants and coordinate functions have trivial monodromy.
For the inductive step, suppose $f = \exp A-\log B$ with $A,B\in\mathcal T_n$, and assume that $\mathrm{Mon}(A)$ and $\mathrm{Mon}(B)$ are solvable. We argue in three steps.
Step 1: $\mathrm{Mon}(\exp A)$ is solvable. The germs of $\exp A$ are images under $\exp$ of the germs of $A$, with germs of $A$ differing by $2\pi i\mathbb Z$ collapsing to the same value. So there is a surjection $\mathrm{Mon}(A)\twoheadrightarrow\mathrm{Mon}(\exp A)$, and a quotient of a solvable group is solvable.
Step 2: $\mathrm{Mon}(\log B)$ is solvable. At a generic point $p$, germs of $\log B$ are parameterized by pairs $(b,k)$ where $b$ is a germ of $B$ at $p$ and $k\in\mathbb Z$ selects the branch of $\log$. A loop $\gamma$ acts by $$ (b,k)\mapsto\bigl(\rho_B(\gamma)(b), k+n(\gamma,b)\bigr), $$ where $\rho_B(\gamma)$ is the monodromy action of $\gamma$ on germs of $B$, and $n(\gamma,b)\in\mathbb Z$ is the winding number around $0$ of the analytic continuation of $b$ along $\gamma$. The projection $\mathrm{Mon}(\log B)\to\mathrm{Mon}(B)$ onto the first component is a surjective homomorphism. Its kernel consists of the elements of $\mathrm{Mon}(\log B)$ induced by loops $\gamma$ with $\rho_B(\gamma)=\mathrm{id}$, which then act only by integer shifts on the $k$-coordinate. Let $S_B$ be the set of germs of $B$ at $p$. For each $b\in S_B$, such a loop determines an integer shift $n(\gamma,b)$, so the kernel embeds in the direct product $\mathbb Z^{S_B}$. In particular, the kernel is abelian. Hence $\mathrm{Mon}(\log B)$ is an extension of $\mathrm{Mon}(B)$ by an abelian group, and extensions of solvable groups by abelian groups are solvable.
Step 3: $\mathrm{Mon}(f)$ is solvable. At a generic point, a germ of $f=\exp A-\log B$ is obtained by subtraction from a pair (germ of $\exp A$, germ of $\log B$), and analytic continuation acts componentwise on such pairs. This gives a surjection of $\pi_1$ onto some subgroup $$ H \le \mathrm{Mon}(\exp A)\times\mathrm{Mon}(\log B), $$ and, since $f$ is obtained from the pair by subtraction, this descends to a surjection $H\twoheadrightarrow\mathrm{Mon}(f)$. So $\mathrm{Mon}(f)$ is a quotient of a subgroup of a direct product of solvable groups, hence solvable.
The second statement of the lemma follows: an algebraic function has finitely many branches, so its monodromy group is finite; by the argument above it is also solvable, so it is a finite solvable group. ∎
Remark. This is the core of Khovanskii’s topological Galois theory; see Topological Galois Theory: Solvability and Unsolvability of Equations in Finite Terms.
Theorem: $\mathcal T_n \subsetneq \mathcal E_n$.
Proof: $\mathcal E_n$ is closed under algebraic adjunction, so any local branch of an algebraic function is elementary. In particular, a branch of a root of the generic quintic $$ f^5+a_1f^4+a_2f^3+a_3f^2+a_4f+a_5=0 $$ is elementary.
Suppose for contradiction that at some point $p$ a germ of a branch of this root agrees with a germ of an EML term $T$. By uniqueness of analytic continuation, the Riemann surfaces obtained by maximally continuing these two germs coincide, so in particular their monodromy groups coincide. The monodromy group of the generic quintic is $S_5$, which is not solvable. But by the lemma, the monodromy group of any EML term is solvable. Contradiction.
Hence $\mathcal T_n$ is a strict subset of $\mathcal E_n$. ∎
Edit (16 April 2026): This article used to have an example proving that the real and complex absolute value cannot be expressed as EML terms under the conventional definition of $\log$. I wrote it to emphasize that Odrzywołek’s approach required mathematical “patching” in order to work as intended. However, it ended up more distracting than illuminating, and was tangential to the point about the definition of “elementary”, so it has been removed.
Papal, See? – DORK TOWER 13.04.26 [Dork Tower]
Most DORK TOWER strips are now available as signed,
high-quality prints, from just $25! CLICK
HERE to find out more!
HEY! Want to help keep DORK TOWER going? Then consider joining the DORK TOWER Patreon and ENLIST IN THE ARMY OF DORKNESS TODAY! (We have COOKIES!) (And SWAG!) (And GRATITUDE!)
Urgent: Censure bully for threatening reporters [Richard Stallman's Political Notes]
US citizens: call on Congress to censure the bully for threatening reporters with treason charges.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Restore PBS and NPR funding [Richard Stallman's Political Notes]
US citizens: call on Congress to restore PBS and NPR funding.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Fully fund conservation programs [Richard Stallman's Political Notes]
US citizens: call on the USDA to fully fund conservation programs.
Urgent: Don't cut funds for Americans' medicine [Richard Stallman's Political Notes]
US citizens: call on Congress not to cut funds for Americans' medicine for the sake of unjustified war.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Reject budget packages that slash basic needs programs [Richard Stallman's Political Notes]
US citizens: call on Congress to reject any budget package that slashes basic needs programs to give additional billions to deportation and war.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Protect Jewish employees from federal persecution [Richard Stallman's Political Notes]
US citizens: call on universities to stand with the University of Pennsylvania to protect Jewish employees from federal persecution.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
Magats don't actually care about Jews, but they find "stopping antisemitism" a convenient excuse to persecute people who speak up for the rights of Palestinians. However, threatening Jews in the name of "stopping antisemitism" is even more perverse.
Urgent: Reject wrecker's military budget [Richard Stallman's Political Notes]
US citizens: call on Congress to reject the wrecker's proposed $1.5 trillion military budget.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Fight Republican schemes to cut funds for medical care [Richard Stallman's Political Notes]
US citizens: call on your congresscritter and senators to fight any Republican scheme to cut funds for medical care or boost funds for war with Iran.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Ban junk fees for rental housing [Richard Stallman's Political Notes]
US citizens: call on the Federal Trade Commission to ban junk fees for rental housing.
In my letter, I also said that would-be renters should not be required to use any web site in the process of seeking, accepting, occupying and paying for the rental. Those web sites are usually malicious, since they run nonfree software in the user's browser. And they do various sorts of snooping.
By raising this issue in your letter, you will support software freedom.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
Urgent: Expand investigation of torture of prisoners [Richard Stallman's Political Notes]
US citizens: call on your congresscritter and senators to expand the investigation of torture of prisoners to all deportation prisons.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Pass Fossil-Free Insurers Act [Richard Stallman's Political Notes]
US citizens: call on your state legislators to pass the Fossil-Free Insurers Act.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
Patch Tuesday, April 2026 Edition [Krebs on Security]
Microsoft today pushed software updates to fix a staggering 167 security vulnerabilities in its Windows operating systems and related software, including a SharePoint Server zero-day and a publicly disclosed weakness in Windows Defender dubbed “BlueHammer.” Separately, Google Chrome fixed its fourth zero-day of 2026, and an emergency update for Adobe Reader nixes an actively exploited flaw that can lead to remote code execution.

Redmond warns that attackers are already targeting CVE-2026-32201, a vulnerability in Microsoft SharePoint Server that allows attackers to spoof trusted content or interfaces over a network.
Mike Walters, president and co-founder of Action1, said CVE-2026-32201 can be used to deceive employees, partners, or customers by presenting falsified information within trusted SharePoint environments.
“This CVE can enable phishing attacks, unauthorized data manipulation, or social engineering campaigns that lead to further compromise,” Walters said. “The presence of active exploitation significantly increases organizational risk.”
Microsoft also addressed BlueHammer (CVE-2026-33825), a privilege escalation bug in Windows Defender. According to BleepingComputer, the researcher who discovered the flaw published exploit code for it after notifying Microsoft and growing exasperated with their response. Will Dormann, senior principal vulnerability analyst at Tharros, says he confirmed that the public BlueHammer exploit code no longer works after installing today’s patches.
Satnam Narang, senior staff research engineer at Tenable, said April marks the second-biggest Patch Tuesday ever for Microsoft. Narang also said there are indications that a zero-day flaw Adobe patched in an emergency update on April 11 — CVE-2026-34621 — has seen active exploitation since at least November 2025.
Adam Barnett, lead software engineer at Rapid7, called the patch total from Microsoft today “a new record in that category” because it includes nearly 60 browser vulnerabilities. Barnett said it might be tempting to imagine that this sudden spike was tied to the buzz around the announcement a week ago today of Project Glasswing — a much-hyped but still unreleased new AI capability from Anthropic that is reportedly quite good at finding bugs in a vast array of software.
But he notes that Microsoft Edge is based on the Chromium engine, and the Chromium maintainers acknowledge a wide range of researchers for the vulnerabilities which Microsoft republished last Friday.
“A safe conclusion is that this increase in volume is driven by ever-expanding AI capabilities,” Barnett said. “We should expect to see further increases in vulnerability reporting volume as the impact of AI models extend further, both in terms of capability and availability.”
Finally, no matter what browser you use to surf the web, it’s important to completely close out and restart the browser periodically. This is really easy to put off (especially if you have a bajillion tabs open at any time) but it’s the only way to ensure that any available updates get installed. For example, a Google Chrome update released earlier this month fixed 21 security holes, including the high-severity zero-day flaw CVE-2026-5281.
For a clickable, per-patch breakdown, check out the SANS Internet Storm Center Patch Tuesday roundup. Running into problems applying any of these updates? Leave a note about it in the comments below and there’s a decent chance someone here will pipe in with a solution.
Microsoft isn’t removing Copilot from Windows 11, it’s just renaming it [OSnews]
A few weeks ago, Microsoft made some concrete promises about fixing and improving Windows, and among them was removing useless “AI” integrations. Applications like Notepad, Snipping Tool, and others would see their “AI” features removed. Well, it turns out Microsoft employs a very fringe definition of the concept.
Microsoft seems to have stripped away mentions of the “Copilot” brand in the Windows Insider version of the Notepad app. The Copilot button in the toolbar is gone, and instead, you’ll find a writing icon which will present you AI-powered writing assistance, such as rewrite, summarize, tone modification, format configuration, and more. Additionally, “AI features” in Notepad settings has been renamed to “Advanced features” and it allows users to toggle off AI capabilities within the app.
↫ Usama Jawad at Neowin
If the recent changes to Notepad are any indication, it seems Microsoft is actually not going to be “reducing unnecessary Copilot entry points” at all, as they worded it, but is merely going to rename these features so they aren’t so ostentatiously present. At least, that seems to be the plan for Notepad, and we’ll have to see if they have the same plans for the other applications. I mean, they have to push “AI” or look like fools.
I just don’t understand how a company like Microsoft can be so utterly terrible at communication. While I personally would want all “AI” features yeeted straight from Windows, I’m sure a ton of people are just fine with the features being less in-your-face and stuffed inside a normal menu alongside all the other normal features. They could’ve just been honest about their intentions, and it would’ve been so much better.
Like virtually every other technology company, Microsoft just seems incapable of not lying.
Why was there a red telephone at every receptionist desk? [The Old New Thing]
Some time ago, I noted that there was a walkthrough of the original Microsoft Building 3. If you go behind the receptionist desk, you’ll see a telephone at the receptionist’s station, but off to the side, there was also a red telephone resting between a tape dispenser and a small pamphlet labelled “Quick Reference Guide”.
What was this red telephone for? Was it a direct line to Bill Gates’s office? Or maybe it was a direct line to Security?
Nope.
It was just a plain telephone.
And that’s what made it special.
As is customary at large companies, the telephones on the Microsoft campus were part of a corporate PBX (private branch exchange). A PBX is a private telephone system within a company; companies use them to save on telephone costs, as well as to provide auxiliary telephone services. For example, you could call another office by dialing just the extension, and the call would be routed entirely within the PBX without having to interact with the public telephone system. Since most calls are from one office to another, a PBX saves considerable money by reducing demand for outside communications services. Also, a PBX allows integration with other systems. For example, if somebody leaves you a voicemail, the system can email you a message.
But what if the PBX is down, and there is an emergency?
The red telephones are plain telephones with standard telephone service. They are not part of the PBX and therefore operate normally even if there is a PBX outage. If there is an emergency, the receptionist can use the red telephone to call emergency services. Presumably, each red telephone was registered in the telephone system with the address of its building, allowing emergency services to dispatch assistance quickly.
Bonus chatter: What was the “Quick Reference Guide”? It was a guide to emergency procedures. It makes sense that it was kept next to the emergency telephone.
Bonus bonus chatter: Bill Gates kept a red telephone in his own office as well. If the PBX went down, I guess it was technically true that the red telephones could be used to call Bill Gates’s office.
The post Why was there a red telephone at every receptionist desk? appeared first on The Old New Thing.
The Zig project has announced version 0.16.0 of the Zig programming language.
This release features 8 months of work: changes from 244 different contributors, spread among 1183 commits.
Perhaps most notably, this release debuts I/O as an Interface, but don't sleep on the Language Changes or enhancements to the Compiler, Build System, Linker, Fuzzer, and Toolchain which are also included in this release.
LWN last covered Zig in December 2025.
A glitch in the matrix. The app that keeps daveverse.org in sync with scripting.com has been offline since Friday, so I'm republishing all the posts since then. They will all appear to have been posted today on daveverse. As they say -- still diggin!
I updated sally.scripting.com to support https, and updated it with posts from scripting.com in 2023-2026. I was using it as an example of prior art of user interface for Claude. I figured restoring this app on my own would be penance for believing that Claude was anywhere near as smart as I am. Not even close. Not today at least. Grrr.
I've come to the conclusion, perhaps temporarily, that Claude can't work on a programming project with an experienced developer. It doesn't check its work; it'll think it's found the problem, make a change, or worse cause you to do a lot of work so it can make a change. It doesn't use the information it gives you, and can't even remember what was in a bug report less than one screen above. I could have done the work I coached it through during the morning, with a thoroughly inadequate result, in an hour at most. At least today it couldn't learn from prior art, and couldn't follow basic instructions. It's weird though, because I'm really surprised how little it knows about the scientific method, or how little it has been trained in how to work with others. I seem to recall situations where it was extremely good at reading code. Not a totally wasted session; let's see what I can learn from it.
[$] Tagging music with MusicBrainz Picard [LWN.net]
Part of the "fun" that comes with curating a self-hosted music library is tagging music so that it has accurate and uniform metadata, such as the band names, album titles, cover images, and so on. This can be a tedious endeavor, but there are quite a few open-source tools to make this process easier. One of the best, or at least my favorite, is MusicBrainz Picard. It is a cross-platform music-tagging application that pulls information from the well-curated, crowdsourced MusicBrainz database project and writes it to almost any audio file format.
OpenSSL 4.0.0 released [LWN.net]
Version 4.0.0 of the OpenSSL cryptographic library has been released. This release includes support for a number of new cryptographic algorithms and has a number of incompatible changes as well; see the announcement for the details.
Steinar H. Gunderson: Looking for work [Planet Debian]

It seems my own plans and life's plans diverged this spring, so I am in the market for a new job. So if you're looking for someone with a long track record making your code go brrr really fast, give me a ping (contact information at my homepage). Working from Oslo (on-site or remote), CV available upon request. No AI boosterism or cryptocurrency grifters, please :-)
Google Broke Its Promise to Me. Now ICE Has My Data. [Deeplinks]
In September 2024, Amandla Thomas-Johnson was a Ph.D. candidate studying in the U.S. on a student visa when he briefly attended a pro-Palestinian protest. In April 2025, Immigration and Customs Enforcement (ICE) sent Google an administrative subpoena requesting his data. The next month, Google gave Thomas-Johnson's information to ICE without giving him the chance to challenge the subpoena, breaking a nearly decade-long promise to notify users before handing their data to law enforcement.
Today, the Electronic Frontier Foundation sent complaints to the California and New York Attorneys General asking them to investigate Google for deceptive trade practices for breaking that promise. You can read about the complaints here. Below is Thomas-Johnson's account of his ordeal.
I thought my ordeal with U.S. immigration authorities was over a year ago, when I left the country, crossing into Canada at Niagara Falls.

By that point, the Trump administration had effectively turned federal power against international students like me. After I attended a pro-Palestine protest at Cornell University—for all of five minutes—the administration’s rhetoric about cracking down on students protesting what we saw as genocide forced me into hiding for three months. Federal agents came to my home looking for me. A friend was detained at an airport in Tampa and interrogated about my whereabouts.
I’m currently a Ph.D. student. Before that, I was a reporter. I’m a dual British and Trinidad and Tobago citizen. I have not been accused of any crime.
I believed that once I left U.S. territory, I had also left the reach of its authorities. I was wrong.
Weeks later, in Geneva, Switzerland, I received what looked like a routine email from Google. It informed me that the company had already handed over my account data to the Department of Homeland Security.
At first, I wasn’t alarmed. I had seen something similar before. An associate of mine, Momodou Taal, had received advance notice from Google and Facebook that his data had been requested, and law enforcement eventually withdrew the subpoenas before the companies turned over his data.
I assumed I would be given the same opportunity. But the language in my email was different. It was final: “Google has received and responded to legal process from a law enforcement authority compelling the release of information related to your Google Account.”
Google had already disclosed my data without telling me. There was no opportunity to contest it.
To be clear, this should not have happened this way. Google promises that it will notify users before their data is handed over in response to legal processes, including administrative subpoenas. That notice is meant to provide a chance to challenge the request. In my case, that safeguard was bypassed. My data was handed over without warning—at the request of an administration targeting students engaged in protected political speech.
Months later, my lawyer at the Electronic Frontier Foundation obtained the subpoena itself. On paper, the request focused largely on subscriber information: IP addresses, physical address, other identifiers, and session times and durations.
But taken together, these fragments form something far more powerful—a detailed surveillance profile. IP logs can be used to approximate location. Physical addresses show where you sleep. Session times would show when you were communicating with friends or family. Even without message content, the picture that emerges is intimate and invasive.
What this experience has made clear is that anyone can be targeted by law enforcement. And with their massive stores of data, technology companies can facilitate those arbitrary investigations. Together, they can combine state power, corporate data, and algorithmic inference in ways that are difficult to see—and even harder to challenge.
The consequences of what happened to me are not abstract. I left the United States. But I do not feel that I have left its reach. Being investigated by the federal government is intimidating. Questions run through your head. Am I now a marked individual? Will I face heightened scrutiny if I continue my reporting? Can I travel safely to see family in the Caribbean?
Who, exactly, can I hold accountable?
EFF to State AGs: Investigate Google's Broken Promise to Users Targeted by the Government [Deeplinks]
SAN FRANCISCO – The Electronic Frontier Foundation sent complaints today to the attorneys general of California and New York urging them to investigate Google for deceptive trade practices, related to the company's broken promise to give users prior notice before disclosing their information to law enforcement.
The letters were sent on behalf of Amandla Thomas-Johnson, whose information was disclosed to U.S. Immigration and Customs Enforcement (ICE) without prior notice from Google.
For nearly a decade, Google has promised billions of users that it will notify them before disclosing their personal data to law enforcement. Many times, the company has done just that. But through a hidden and systematic practice, Google has likely violated that promise numerous times over the years. This was the case for Thomas-Johnson, a Ph.D. candidate who was targeted by ICE after briefly attending a protest, effectively preventing him from contesting an invalid subpoena for his data.
"Google should answer the question: How many other times has it broken its promise to users?” said EFF Senior Staff Attorney F. Mario Trujillo. "Advance notice is especially important now, when agencies like ICE are unconstitutionally targeting users for First Amendment-protected activity. State attorneys general should investigate Google for this deception."
On Google’s Privacy & Terms page, it promises its users that “When we receive a request from a government agency, we send an email to the user account before disclosing information.” This promise ensures that users can protect their own privacy and decide to challenge overbroad or illegal demands on their own behalf.
But on May 8, 2025, Google complied with an administrative subpoena from ICE seeking Thomas-Johnson’s subscriber information, including his name, address, IP address, and other personal identifiers. Later that same day, the company sent Thomas-Johnson a message telling him it had already complied with the subpoena, which he would have successfully challenged had he been given advance notice. Google received the subpoena in April and had more than a month to alert Thomas-Johnson.
Communication between EFF and Google later revealed that this is a systematic issue, not an isolated one. When Google does not fulfill a subpoena within a government-provided artificial deadline, the company's outside counsel explained, Google will sometimes comply with the request and provide notice to a user on the same day. The company calls this practice “simultaneous notice.”
"What this experience has made clear is that anyone can be targeted by law enforcement," said Thomas-Johnson. "And with their massive stores of data, technology companies can facilitate those arbitrary investigations. Who, exactly, can I hold accountable?"
Google must commit to ending this deception and pay for its past mistakes. The attorneys general of California and New York are empowered to stop deceptive business practices and seek financial restitution stemming from those practices. As EFF writes in its complaints, they should investigate, hold Google to its public promise to give users advance notice of law enforcement demands, and take appropriate action if necessary.
For the complaints:
https://www.eff.org/document/eff-letter-re-google-notice-california
https://www.eff.org/document/eff-letter-re-google-notice-new-york
https://www.eff.org/document/eff-letter-re-google-notice-exhibits
For Thomas-Johnson's account of his ordeal:
https://www.eff.org/deeplinks/2026/04/google-broke-its-promise-me-now-ice-has-my-data
For more information on lawless DHS subpoenas:
https://www.eff.org/deeplinks/2026/02/open-letter-tech-companies-protect-your-users-lawless-dhs-subpoenas
Contact: press@eff.org
Upcoming Speaking Engagements [Schneier on Security]
This is a current list of where and when I am scheduled to speak:
The list is maintained on this page.
Dirk Eddelbuettel: anytime 0.3.13 on CRAN: Mostly Minor Bugfix [Planet Debian]

A maintenance release 0.3.13 of the anytime package arrived on CRAN today, sticking with the roughly yearly schedule we have now. Binaries for r2u have been built already. The package is fairly feature-complete, and code and functionality remain mature and stable.
anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … input format to either POSIXct (when called as anytime) or Date objects (when called as anydate), and to do so without requiring a format string, while accommodating different formats in one input vector. See the anytime page, the GitHub repo for a few examples, the nice pdf vignette, and the beautiful documentation site for all documentation.
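To illustrate the core idea (accept heterogeneous inputs in one vector and return date objects without a caller-supplied format string), here is a hypothetical JavaScript sketch of the concept, not the package's actual R implementation; the function name anydateLike and the YYYYMMDD integer convention are my assumptions:

```javascript
// Hypothetical sketch of the anytime/anydate idea, transplanted to JavaScript:
// convert a mixed vector of integers and strings to Date objects without
// requiring the caller to supply a format string.
function anydateLike(inputs) {
  return inputs.map((x) => {
    if (typeof x === "number" && Number.isInteger(x)) {
      // Interpret bare integers as YYYYMMDD, one of the formats anytime accepts.
      const s = String(x);
      x = `${s.slice(0, 4)}-${s.slice(4, 6)}-${s.slice(6, 8)}`;
    }
    const d = new Date(x); // the runtime's own parser handles several formats
    return Number.isNaN(d.getTime()) ? null : d; // null for unparseable input
  });
}
```

Like anydate, the inputs 20260414 and "2026-04-14" land on the same day, and unparseable entries come back as null rather than throwing.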
This release was triggered by a bizarre bug seen on elementary OS 8. For “reasons”, anytime was taking note on startup of where it runs, using a small and simple piece of code that reads /etc/os-release when it exists. We assumed sane content, but this particular operating system release managed to have a duplicate entry, throwing a spanner in the works. This code is now robust to duplicates, and is no longer executed on each startup but only “as needed”, which is a net improvement. We also switched the vignette to being deployed by the new Rcpp::asis() driver.
The short list of changes follows.
Changes in anytime version 0.3.13 (2026-04-14)
Continuous integration has received minor updates
The vignette now uses the Rcpp::asis() driver, and references have been refreshed
Stateful 'where are we running' detection is now more robust, and has been moved from running on each startup to a cached 'as needed' case
Courtesy of my CRANberries, there is also a diffstat report of changes relative to the previous release. The issue tracker off the GitHub repo can be used for questions and comments. More information about the package is at the package page, the GitHub repo, in the vignette, and at the documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
Security updates for Tuesday [LWN.net]
Security updates have been issued by Debian (gdk-pixbuf, gst-plugins-bad1.0, and xdg-dbus-proxy), Fedora (chromium, deepin-image-viewer, dtk6gui, dtkgui, efl, elementary-photos, entangle, flatpak, freeimage, geeqie, gegl04, gthumb, ImageMagick, kf5-kimageformats, kf5-libkdcraw, kf6-kimageformats, kstars, libkdcraw, libpasraw, LibRaw, luminance-hdr, nomacs, OpenImageIO, OpenImageIO2.5, photoqt, python-cryptography, rawtherapee, shotwell, siril, swayimg, vips, and webkitgtk), Red Hat (firefox and podman), Slackware (libarchive), SUSE (expat, glibc, GraphicsMagick, libcap-devel, libpng16, libtpms, nodejs24, openssl-1_0_0, openssl-1_1, openssl-3, openvswitch, polkit, python-requests, python311-biopython, python312, python39, and tigervnc), and Ubuntu (corosync, kvmtool, libxml-parser-perl, linux-azure, linux-azure, linux-azure-6.17, linux-azure, linux-azure-6.8, policykit-1, redis, lua5.1, lua-cjson, lua-bitop, rustc, vim, and xdg-dbus-proxy).
Petter Reinholdtsen: Talking to the Computer, and Getting Some Nonsense Back... [Planet Debian]
At last, I can run my own large language model artificial idiocy generator at home on a Debian testing host using Debian packages directly from the Debian archive. After months of polishing the llama.cpp, whisper.cpp and ggml packages, and their dependencies, I was very happy to see today that they all entered Debian testing this morning. Several release-critical issues in dependencies have been blocking the migration for the last few weeks, and now finally the last one of these has been fixed. I would like to extend a big thanks to everyone involved in making this happen.
I've been running home-built editions of whisper.cpp and llama.cpp packages for a while now, first building from the upstream Git repository and later, as the Debian packaging progressed, from the relevant Salsa Git repositories for the ROCM packages, GGML, whisper.cpp and llama.cpp. The only snag with the official Debian packages is that the JavaScript chat client web pages are slightly broken in my setup, where I use a reverse proxy to make my home server visible on the public Internet while the included web pages only want to communicate with localhost / 127.0.0.1. I suspect it might be simple to fix by making the JavaScript code dynamically look up the URL of the current page and use that to determine where to find the API service, but until someone fixes BTS report #1128381, I just have to edit /usr/share/llama.cpp-tools/llama-server/themes/simplechat/simplechat.js every time I upgrade the package. I start my server like this on my machine with a nice AMD GPU (donated to me as a Debian developer by AMD two years ago, thank you very much):
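A sketch of the kind of fix I have in mind (hypothetical; I have not checked how simplechat.js actually structures this, so the function name and call site are assumptions):

```javascript
// Hypothetical sketch: derive the API base from the page's own URL instead
// of a hardcoded localhost address, so the chat client also works when the
// page is served through a reverse proxy.
function apiBaseFromPage(pageUrl) {
  const u = new URL(pageUrl);        // parses protocol, hostname and port
  return `${u.protocol}//${u.host}`; // u.host keeps the port if one was given
}

// In the browser, the client would then do something like:
//   const baseUrl = apiBaseFromPage(window.location.href);
// and build its API requests against baseUrl instead of 127.0.0.1.
```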
LC_ALL=C llama-server \
-ngl 256 \
-c $(( 42 * 1024)) \
--temp 0.7 \
--repeat_penalty 1.1 \
-n -1 \
-m Qwen3-Coder-30B-A3B-Instruct-Q5_K_S.gguf
It only takes a few minutes to load the model for the first time and prepare a nice API server for me at https://my.reverse.proxy.example.com:8080/v1/, available (note, this sets the server up without authentication; use a reverse proxy with authentication if you need it) for all the API clients I care to test. I switch models regularly to test different new ones, the Qwen3-Coder one just happens to be the one I use at the moment. Perhaps these packages are something for you to have fun with too?
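For example, any OpenAI-compatible client can talk to llama-server's /v1/chat/completions route. Here is a minimal JavaScript sketch; the endpoint URL is my reverse-proxy placeholder, and the model field is largely cosmetic since llama-server serves whichever model it loaded:

```javascript
// Minimal sketch of a client for llama-server's OpenAI-compatible API.
// The endpoint URL is a placeholder; llama-server serves whichever model
// it was started with, so the "model" field is mostly informational.
function buildChatRequest(userMessage) {
  return {
    model: "local",                                     // placeholder name
    messages: [{ role: "user", content: userMessage }], // single-turn chat
    temperature: 0.7,                                   // match the server flag
  };
}

// Usage (performs a network call, so shown as a comment):
//   const res = await fetch("https://my.reverse.proxy.example.com:8080/v1/chat/completions", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(buildChatRequest("Say hello in Norwegian.")),
//   });
//   const reply = (await res.json()).choices[0].message.content;
```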
As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
A Hole in Your Plan [The Daily WTF]
Theresa works for a company that handles a fair bit of personally identifiable information that can be tied to health care data, so for them, security matters. They need to comply with security practices laid out by a variety of standards bodies and be able to demonstrate that compliance.
There's a dirty secret about standards compliance, though. Most of these standards are trying to avoid being overly technically prescriptive. So frequently, they may have something like, "a process must exist for securely destroying storage devices before they are disposed of." Maybe it will include some examples of what you could do to meet this standard, but the important thing is that you have to have a process. This means that if you whip up a Word document called "Secure Data Destruction Process" and tell people they should follow it, you can check off that box on your compliance. Sometimes, you need to validate the process; sometimes you need to have other processes which ensure this process is being followed. What you need to do and to what complexity depends on the compliance structure you're beholden to. Some of them are surprisingly flexible, which is a polite way of saying "mostly meaningless".
Theresa's company has a process for safely destroying hard drives. They even validated it, shortly after its introduction. They even have someone who checks that the process has been followed. The process is this: in the basement, someone set up a cheap drill press, and attached a wooden jig to it. You slap the hard drive in the jig, turn on the drill, and brrrrzzzzzz- poke a hole through the platters making the drive unreadable.
There's just one problem with that process: the company recently switched to using SSDs. The SSDs are in a carrier which makes them share the same form factor as old-style spinning disk drives, but that's just a thin plastic shell. The actual electronics package where the data is stored is quite small. Small enough, and located in a position where the little jig attached to the drill guarantees that the drill won't even touch the SSD at all.
For months now, whenever a drive got decommissioned, the IT drone responsible for punching a hole through it has just been drilling through plastic, and nothing else. An unknown quantity of hard drives have been sent out for recycling with PII and health data on them. But it's okay, because the process was followed.
The compliance team at the company will update the process, probably after six months of meetings and planning and approvals from all of the stakeholders. Though it may take longer to glue together a new jig for the SSDs.
How Hackers Are Thinking About AI [Schneier on Security]
Interesting paper: “What hackers talk about when they talk about AI: Early-stage diffusion of a cybercrime innovation.”
Abstract: The rapid expansion of artificial intelligence (AI) is raising concerns about its potential to transform cybercrime. Beyond empowering novice offenders, AI stands to intensify the scale and sophistication of attacks by seasoned cybercriminals. This paper examines the evolving relationship between cybercriminals and AI using a unique dataset from a cyber threat intelligence platform. Analyzing more than 160 cybercrime forum conversations collected over seven months, our research reveals how cybercriminals understand AI and discuss how they can exploit its capabilities. Their exchanges reflect growing curiosity about AI’s criminal applications through legal tools and dedicated criminal tools, but also doubts and anxieties about AI’s effectiveness and its effects on their business models and operational security. The study documents attempts to misuse legitimate AI tools and develop bespoke models tailored for illicit purposes. Combining the diffusion of innovation framework with thematic analysis, the paper provides an in-depth view of emerging AI-enabled cybercrime and offers practical insights for law enforcement and policymakers.
Russell Coker: Furilabs FLX1s Finally Working [Planet Debian]
I’ve been using the Furilabs FLX1s phone [1] as my daily driver for 6 weeks, it’s a decent phone, not as good as I hoped but good enough to use every day and rely on for phone calls about job interviews etc. I intend to keep using it as my main phone and as a platform to improve phone software in Debian as you really can’t effectively find bugs unless you use the platform for important tasks.
I previously wrote about the phone after I received it without a SIM caddy on the 13th of Jan. I had a saga with support about this, on the 16th of Jan one support person said that they would ship it immediately but didn’t provide a tracking number or any indication of when it would arrive. On the 5th of Feb I contacted support again and asked how long it would be, the new support person seemed to have no record of my previous communication but said that they would send it. On the 17th of Feb I made another support request including asking for a way of direct communication as the support email came from an address that wouldn’t accept replies, I was asked for a photo showing where the problem is. The support person also said that they might have to send a replacement phone!
The last support request I sent included my disappointment at the time taken to resolve the issue and the proposed solution of replacing the entire phone (why have two international shipments of a fragile and expensive phone when a single letter with a cheap SIM caddy would do?). I didn’t receive a reply but the SIM caddy arrived on the 2nd of Mar. Here is a pic of the SIM caddy and the package it came in:
One thing that should be noted is that some of the support people seemed to be very good at their jobs and they were all friendly. It was the system that failed here, turning a minor issue of a missing part into a 6 week saga.
Furilabs needs to do the following to address this issue:
This is not just a single failure of Furilabs support, it’s a systemic failure of their processes.
Here are some issues I plan to work on.
I need to port one of the smart watch programs to Debian. Also I want to make one of them support the Colmi P80 [2].
A smart watch significantly increases the utility of a phone even though IMHO they aren’t doing nearly all the things that they could and should do. When we get Debian programs talking to the PineTime it will make a good platform for development of new smart phone and OS features.
I have ongoing issues with my test Nextcloud installation on a Debian VM not allowing connection from the Linux desktop app (as packaged in Debian) or from the Android client (from f-droid). The desktop client works with a friend’s Nextcloud installation on Ubuntu, so I may try running it on an Ubuntu VM I run while waiting for the Debian issue to get resolved. There was a bug recently fixed in Nextcloud that appears related, so maybe the next release will fix it.
For the moment I’ve been running without these features and I call and SMS people from knowing their number or just returning calls. Phone calls generally aren’t very useful for me nowadays except when applying for jobs. If I could deal with recruiters and hiring managers via video calls then I would consider just not having a phone number.
Periodically IPv6 support just stops working, I can’t ping the gateway. I turn wifi off and on again and it works. This might be an issue with my wifi network configuration. This might be an issue with the way I have configured my IPv6 networking, although that problem doesn’t happen with any of my laptops.
Chatty is the program for SMS that is installed by default (part of the phosh/phoc setup), it also does Jabber. Version 0.8.7 is installed which apparently has some Furios modifications and it doesn’t properly support sorting SMS/Jabber conversations. Version 0.8.9 from Debian sorts in the same way as most SMS and Jabber programs with the most recent at the top. But the Debian version doesn’t support Jabber (only SMS and Matrix). When I went back to the Furilabs version of Chatty it still sorted for a while but then suddenly stopped. Killing Chatty (not just closing the window and reopening it) seems to make it sort the conversations sometimes.
Here are the current issues I have starting with the most important.
The following issues seriously reduce the usability of the device.
The Wifi hotspot functionality wasn’t working for a few weeks; this Gitlab issue seems to match it [3]. It started working correctly for a day, and I was not sure if an update I applied fixed the bug or if it’s some sort of race condition that worked for that boot and would return the next time I rebooted. Later on I rebooted it and found that it’s somewhat random whether it works or not.
Also, while it is mostly working, it seemed to stop working every 25 minutes or so and I had to turn it off and on again to get it going.
On another day it went to a stage where it got repeated packet loss when I pinged the phone as a hotspot from my laptop. A pattern of 3 ping responses and 3 “Destination Host Unreachable” messages was often repeated.
I don’t know if this is related to the way Android software is run in a container to access the hardware.
Sometimes 4G connectivity has just stopped, sometimes I can stop and restart the 4G data through software to fix it and sometimes I need to use the hardware switch. I haven’t noticed this for a week or two so there is a possibility that one fix addressed both Hotspot and 4G.
One thing that I will do is setup monitoring to give an alert on the phone if it can’t connect to the Internet. I don’t want it to just quietly stop doing networking stuff and not tell me!
The compatibility issues of the GNOME and KDE on-screen keyboards are getting me. I use phosh/phoc as the login environment as I want to stick to defaults at first to not make things any more difficult than they need to be. When I use programs that use QT such as Nheko the keyboard doesn’t always appear when it should and it forgets the setting for “word completion” (which means spelling correction).
The spelling correction system doesn’t suggest replacing “dont” with “don’t”, which is really annoying, as a major advantage of spelling checkers on touch screens is inserting an apostrophe. Typing an apostrophe takes at least three times longer than a regular character, and saving that delay makes a difference to typing speed.
The spelling correction doesn’t correct two words run together.
These issues are ongoing annoyances.
In the best case scenario this phone has a much slower response to pressing the power button than the Android phones I tested (Huawei Mate 10 Pro and Samsung Galaxy Note 9), and a much slower response than my recollection of the vast majority of Android phones I’ve ever used. When I tested by pressing the power buttons on the phones simultaneously, the Android phone screens lit up much sooner. Something like 200ms vs 600ms – I don’t have a good setup to time these things, but the difference is very obvious when I test.
In a less common case scenario (the phone having been unused for some time) the response can be something like 5 seconds. The worst case scenario is something in excess of 20 seconds.
For UI designers, if you get multiple press events from a button that can turn the screen on/off please make your UI leave the screen on and ignore all the stacked events. Having the screen start turning on and off repeatedly when the phone recovers and processes all the button presses isn’t good, especially when each screen flash takes half a second.
Touching on a notification for a program often doesn’t bring it to the foreground. I haven’t yet found a connection between when it does and when it doesn’t.
Also the lack of icons in the top bar on the screen to indicate notifications is annoying, but that seems to be an issue of design not the implementation.
When I connect the phone to a power source there is a delay of about 22 seconds before it starts to charge. Having it miss 22 seconds of charge time is no big deal, having to wait 22 seconds to be sure it’s charging before leaving it is really annoying. Also the phone makes an audible alert when it gets to 0% charge which woke me up one night when I had failed to push the USB-C connector in hard enough. This phone requires a slightly deeper connector than most phones so with some plugs it’s easy to not quite insert them far enough.
The light for the “torch” or flash for camera is not bright at all. In a quick test staring into the light from 40cm away wasn’t unpleasant compared to my Huawei Mate 10 Pro which has a light bright enough that it hurts to look at it from 4 meters away.
Because of this photos at night are not viable, not even when photographing something that’s less than a meter away.
The torch has a brightness setting which doesn’t seem to change the brightness, so it seems likely that this is a software issue and the brightness is set at a low level and the software isn’t changing it.
When I connect to my car the Lollypop player starts playing before the phone directs audio to the car, so the music starts coming from the phone for about a second. This is an annoying cosmetic error. Sometimes audio playing pauses for no apparent reason.
It doesn’t support the phone profile with Bluetooth so phone calls can’t go through the car audio system. Also it doesn’t always connect to my car when I start driving, sometimes I need to disable and enable Bluetooth to make it connect.
When I initially set the phone up Lollypop would send the track name when playing music through my car (Nissan LEAF) Bluetooth connection, after an update that often doesn’t happen so the car doesn’t display the track name or whether the music is playing but the pause icon works to pause and resume music (sometimes it does work).
About 30 seconds into a phone call it switches to hands-free mode while the icon to indicate hands-free is not highlighted, so I have to press the hands-free button twice to get it back to normal phone mode.
I could live with these things remaining as-is but it’s annoying.
There is apparently some code written to display tickets on screen without unlocking. I want to get this working and store screen-caps of the Android barcode screens of the different loyalty cards so I can scan them without unlocking. My threat model does not include someone trying to steal my phone to get a free loaf of bread on the bakery loyalty program.
The camera app works with both the back and front cameras, which is nice and, sadly, noteworthy given my experience with other Debian phones. The problem is that it takes a long time to take a photo, something like a second after the button is pressed: long enough to make you think it silently took the photo already and then move the phone.
The UI of the furios-camera app is also a little annoying: when viewing photos there is an icon at the bottom left of the screen for a video camera and an icon at the bottom right with a cross, which every time make me think “record videos” and “leave this screen” rather than “return to taking photos” and “delete current photo”. I can get used to the surprising icons, but being so slow is a real problem.
The program for managing software doesn’t work very well. It said that two Mesa package updates were needed, but didn’t seem to want to install them; I ran “flatpak update” as root to fix that. The process of selecting software defaults to including non-free, and most of the available apps are for desktop/laptop use with no way to search for phone/tablet apps.
Generally I think it’s best to just avoid this and use apt and flatpak directly from the command-line. Being able to ssh to my phone from a desktop or laptop is good!
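For reference, a minimal sketch of the command-line route mentioned above, assuming a standard Debian plus Flatpak setup (not run here, just the usual admin commands):

```shell
# Update Debian packages and Flatpak apps from a terminal or over ssh,
# instead of using the GUI software manager:
sudo apt update && sudo apt full-upgrade   # Debian packages
sudo flatpak update                        # Flatpak apps (the review ran this as root)
```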
The file /home/furios/.local/share/andromeda/data/system/uiderrors.txt is created by the Andromeda system, which runs Android apps in an LXC container, and it appears to grow without end: after using the phone for a month it was 3.5G in size. The disk space usage isn’t directly a problem; of the 110G of storage only 17G is used, and I don’t have a need to put much else on it. Even if I wanted to put backups of /home from my laptop on it when travelling, that would still leave plenty of free space. But that sort of thing is a problem for backing up the phone, and wasting 3.5G out of 110G total is a fairly significant step towards filling the disk and breaking the entire system.
Also having lots of logging messages from a subsystem that isn’t even being used is a bad sign.
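A sketch of one way to cap a runaway log like uiderrors.txt; this approach is my suggestion, not something from the review, and it is demonstrated on a scratch file so it is safe to run anywhere:

```shell
# Stand-in for /home/furios/.local/share/andromeda/data/system/uiderrors.txt
LOG="$(mktemp)"
printf 'uid error\n' > "$LOG"
printf 'uid error\n' >> "$LOG"
printf 'uid error\n' >> "$LOG"
# Keep only the last 1000 lines, copying back in place (same inode) so any
# process that still has the file open keeps a valid descriptor:
tail -n 1000 "$LOG" > "$LOG.tmp" && cat "$LOG.tmp" > "$LOG" && rm "$LOG.tmp"
lines=$(wc -l < "$LOG" | tr -d ' ')
echo "log now $lines lines"
```

Running something like this from cron would stop the file from eating the backup budget without deleting it out from under Andromeda.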
I just tried using Andromeda and it doesn’t start, from either the settings menu or the F-Droid icon. Android isn’t that important to me, as I want to get away from the proprietary app space, so I won’t bother trying this any more.
After getting used to fingerprint unlocking, going back to a password is a pain. I think the hardware isn’t sufficient for modern face recognition that can’t be fooled by a photo, and there is no fingerprint hardware.
When I first used an Android phone, using a PIN to unlock didn’t seem like a big deal, but after getting used to fingerprint unlock it’s a real drag to go without. This is a real annoyance when doing things like checking Wikipedia while watching TV.
This phone would be significantly improved with a fingerprint sensor or a camera that worked well enough for face unlock.
According to Reddit, Plasma Mobile (KDE for phones) doesn’t support Halium and so can never work on this phone [4]. This is one of a number of potential issues with the phone; running on hardware that was never designed for open OSs is always going to have issues.
The MAC address changes on every reboot, so I can’t assign a permanent IPv4 address to the phone. From the MAC prefix of 00:08:22 it appears that the network hardware is made by InPro Comm, which is well known for using random addresses in the products it OEMs. They apparently have one allocation of 2^24 addresses, and each device randomly chooses a MAC from that range on boot.
In the settings for a Wifi connection the “Identity” tab has a field named “Cloned Address” which can be set to “Stable for SSID” that prevents it from changing and allows a static IP address allocation from DHCP. It’s not ideal but it works.
Network Manager can be configured to use a permanently assigned MAC address for all connections or just for some. In the past I have copied MAC addresses from ethernet devices that were being discarded and reused them for this. For the moment the “Stable for SSID” setting does what I need, but I may set a permanent address at some future time.
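The same settings can be made from the command line with nmcli, which is handy over ssh; this is a sketch against a live NetworkManager, with the connection name "HomeWifi" and the example MAC as placeholders:

```shell
# Stop the MAC changing every boot: derive a stable per-connection MAC
# (the GUI "Cloned Address" field exposes the same property):
nmcli connection modify "HomeWifi" wifi.cloned-mac-address stable
# Or pin an explicit, locally administered MAC for this connection:
nmcli connection modify "HomeWifi" wifi.cloned-mac-address 02:11:22:33:44:55
nmcli connection up "HomeWifi"   # re-activate so the change takes effect
```

With a fixed MAC in place, a static DHCP lease on the home router becomes possible.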
Having the ability to connect to a dock is really handy. The PinePhonePro and Librem5 support it and on the proprietary side a lot of Samsung devices do it with a special desktop GUI named Dex and some Huawei devices also have a desktop version of the GUI. It’s unfortunate that this phone can’t do it.
It’s good to be able to ssh in to my phone; even if the on-screen keyboard worked as well as the Android ones, it would still be a major pain compared to a real keyboard. Since the phone doesn’t support connecting to a dock (unlike the Samsung phones I’ve used, where I found Dex very useful with a 4K monitor and proper keyboard), ssh is the best way to access it.
This phone has very reliable connections to my home wifi. I’ve had ssh sessions from my desktop to my phone that have remained open for multiple days. I don’t really need this, I’ve just forgotten to logout and noticed days later that the connection is still running. None of the other phones running Debian could do that.
Running the same OS on desktop and phone makes things easier to test and debug.
Having support for all the things that Linux distributions support is good. For example, none of the Android music players support all the audio encodings that come from YouTube, so to play all of my music collection on Android I would need to transcode most of it, which means losing quality, wasting storage space, or both. Lollypop plays FLAC, mp3, m4a, mka, webm, ogg, and more.
This is a step towards where I want to go but it’s far from the end goal.
The PinePhonePro and Librem5 are more open hardware platforms which have some significant benefits. But the battery life issues make them unusable for me.
Running Mobian on a OnePlus 6 or Droidian on a Note 9 works well for the small tablet features, but without VoLTE. While the telcos have blocked phones without VoLTE, data devices still work, so if recruiters etc. would stop requiring phone calls then one of them could be an option.
The phone works well enough that it could potentially be used by one of my older relatives. If I could ssh in to my parents phones when they mess things up that would be convenient.
I’ve run this phone as my daily driver since the 3rd of March and it has worked reasonably well: six weeks, compared to the three days I lasted with the PinePhonePro. This is the first time in 15 years that a non-Android phone has worked for me personally. I have briefly used an iPhone 7 for work, which basically did what it needed to do; it was at the bottom of the pile of unused phones at work, and I didn’t want to take a newer iPhone that could be used by someone doing more than the occasional SMS or Slack message.
So this is better than it might have been, not as good as I hoped, but a decent platform to use while developing for it.
Review: eufyMake UV Printer E1 [RevK®'s ramblings]
Warning: the box is heavy, not a one-person lift. But it is easy to unbox. The step by step instructions for installing ink, the cleaning unit, the air filter, and so on are very clear. The calibration and testing take a while; be patient. See the unboxing video.
One of the key aspects is the space requirement. For the small bed you simply need space in front to open the front cover (as shown in the image above). For the full size bed it claims to need 400mm front and back, which is a lot; it will not fit at the front of a typical 600mm work bench with enough space behind. It also claims to need 300mm left and right, which makes no sense: you have to be able to get to power/ethernet on the left and the cleaning unit on the right, but there are no fans or vents, so this extra space seems unnecessary. In that case it will just fit sideways on a 600mm work bench with space either side for the full size print bed. Bear in mind you need to use the glasses when operating with the covers open (i.e. with the full size print bed). As you can see from the image, mine is on a shelf on the floor now.
The instructions advise wearing gloves and glasses, implying a risk of some mess, but so far that has not been a problem. Maybe during maintenance, such as changing ink, there is a risk of ink on hands, so gloves may only be needed then. In normal operation it seems very simple and clean.
It connects via WiFi or Ethernet and is set up from an app over Bluetooth. Annoyingly you need to create an account, arg!
Do not make the mistake of installing the iPad app on a Mac. It works but is clunky at best. The eufyMake UV Studio is what you need and that works really well.
Just to explain, unlike a normal printer, this is not installed as a printer on the system. You have to use the app to print. The reasons for this should be pretty obvious - the printer can print not just in CMYK, but White and Gloss, and complex arrangements of print order and thickness. A normal printer driver cannot handle this. It also needs print positioning.
However, the app is very slick, and allows images to be dragged in, positioned, scaled, and flipped with ease. It handles vector formats such as SVG, which allows for very precise, high resolution printing. Note that the print quality setting does not default to High Quality for some reason, and is not even saved when you save a project, which is odd.
You can also add text, and set fonts, colour, size, rotation, etc.
There is a large library of artwork and graphics included, and apparently some AI thing which I have not touched.
The print area on the small bed is marked as 310mm by 90mm. On the large bed it is 310mm by 400mm. However it seems you can print up to 330mm by 420mm, so larger than A3!
However there is more to it: you can fit an object up to 100mm tall in the printer and print on top of it. Also, the top does not have to be flat; it can vary by up to 5mm, i.e. I can print on the body of an iPhone case even though it has ridges around the camera hole raised above the rest.
There is also a film roll attachment allowing print on a continuous film allowing a much longer print than the 420mm limit of the large bed.
The first thing you do is put what you want on the printer bed, and have it take a picture. This shows on the app as an overhead view. It measures height for you, and shows the outline of the object on the print bed.
You then drag in images, or place text or artwork as you like. You can resize and rotate and flip as needed. It handles the fact images have transparency - using SVG is ideal, but PNG with transparency works just as well. You can see how your artworks fits on the object. You can zoom in and align precisely (the set up includes camera position calibration, so this is very good). You can add outlines or apply a cut out to a non transparent image, etc.
For each piece of artwork/text/etc you can set the way it is printed. The main thing is the layers of print. Typically you apply white and then colour on top. Even so, the white is usually around 3 layers of white and 1 of colour. You could do white, colour, and gloss for example.
They have thought about this - you can do colour, white, colour for printing on glass for example, or just colour then white.
You can fully customise the layer sequence and numbers as well.
But that is all flat. You can also do a raised print, which uses thicker colour to give a texture. You can even select textures!
You can select a textured effect from a library of textures, like crocodile skin, etc. Obviously this uses more ink and takes longer to print.
Normally the three layers of white is enough to give a white background for printing colour and white images. This works well on a consistent background, even if dark. But it is not totally opaque, so if printing over existing artwork you need more layers of white to make it properly opaque. You can set the number of layers though. Ideally, don't print over existing artwork, or if you do, sticker mode may be best as stickers are very thick.
This is where things get complicated. You can, indeed, print on almost anything, but it may not stick as well as you like. This was a tad disappointing, as I expected the cured ink to be much harder. It is actually somewhat rubbery and flexible. But how well it sticks depends on to what you are printing.
At one extreme, a coated (anodised?) titanium iPhone case is very smooth - you can wipe off the print. Even stickers will not stick at all - you simply cannot pull the backing off and leave the sticker!
Printing on textured surfaces is a lot better, as is printing on various bare metals. Printing on 3D resin prints works really well, and the result is hard and not removable with a fingernail. In these cases a hard edge, knife, etc. can take the ink off, though if the surface is deeply textured it may be difficult to remove it totally. This is pretty durable though.
In between we have things like glass, which is a problem I go into in more detail below. The print sticks, and looks nice, but is not dishwasher safe by any means. There are some things that may improve this.
There are things you can do to help on surfaces that do not stick well. One is sanding or etching the surface, which is obviously not always sensible. For the green key fob shown above, the plastic is shiny enough that the print is not very robust (it can come off with a fingernail); sanding the whole side does not look wrong, so that can help it adhere. Another trick is to coat in acrylic, but you do have to let it dry properly. I was hoping to find a hard spray-on coating, perhaps even UV cured, to test (suggestions welcome), but an acrylic resin based coating is probably good enough. Obviously that is not going to work on a glass.
The A film sheets are on a paper backing and have a protective layer you remove before printing; you are basically printing on a sheet of thin adhesive. You then put this through the laminator, which sticks a thick soft film, the B film, on top. Sticker mode is a print type set on the artwork (all artwork in the job set the same) and prints a thick sticker on a white background.
You can then remove the paper backing and apply the sticker to any surface. The film is a bit stretchy, which helps. You have to be careful to align it, as there is really no going back and re-positioning. You then rub it down and peel off the thick film, leaving a sticker.
This is ideal for cases where you want artwork on something that cannot go through the printer, e.g. cupboard doors, windows, mirrors, signs, etc. Obviously great for laptops.
This also works on shiny surfaces that don't print well directly. However, there is a caveat here for glass. Yes, the sticker sticks well and is robust, but it is not dishwasher safe as the heat melts the adhesive!
(To be clear, this is *MY* mirror, not someone else's.)
It would appear one can get a different flexible white ink and foil (gold/silver, etc) and heat press selectively or use the laminator to apply foil to a design. Impressive.
The rotary attachment lets you print on tumblers and cups. The system is pretty slick - it measures it all for you and makes it easy to place artwork.
The result looks pretty good, but can be removed with a fingernail and is not dishwasher safe. Yes, the AI summary from the printer says hard, scratch resistant, and dishwasher safe; it is clearly not that simple!
Proposed techniques to fix this include a bonding agent to lightly etch the glass first. Of course the only stuff I could find was a primer that is thick and grey; leaving it 24 hours and cleaning it off did not look like the surface was etched, which looks good but may mean it does not work. It is messy and slow. This did not work! See the after image...
(Photos: before and after the dishwasher.)
I also tried all combinations of white, colour, and gloss layers, and all washed away completely.
One alternative approach was printing a stencil and etching. The stencil easily washes off. This worked with a single CMYK layer, and using glass etching cream from Amazon. This allows precision etched artwork on a glass and is relatively simple to do. This first attempt shows more etching needed, but no signs of etch where the stencil was located, so a good proof of concept.
It has a load of maintenance stuff built in: cleaning the print head, an automatic idle mode, a midnight cleaning cycle, and so on. It also has maintenance you can ask it to do, and it can prompt you to do maintenance if needed. I have yet to change the ink, cleaning module, or air filter.
Obviously there are consumable costs: the ink, cleaning pack, and air filter. I have no idea how long the latter two last. The ink, however, is around 35p/ml.
When you print, it works out the ink usage of each ink, and the total, and shows it before printing so you can assess costs. It also shows the actual usage afterwards.
Bear in mind that the amount of ink is complicated: it is not just a matter of size, but of how many layers. Stickers are thick and use a lot of ink. So it can be a bit counterintuitive.
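To make the layer effect concrete, here is a back-of-envelope cost sketch assuming the ~35p/ml figure quoted above; the per-layer ml amounts are invented for illustration, not measured from the printer:

```shell
# Hypothetical ink use per layer (ml) for a typical "3 white + 1 colour" job:
white_ml=0.4; colour_ml=0.6
total_ml=$(awk -v w="$white_ml" -v c="$colour_ml" 'BEGIN{print 3*w + 1*c}')
cost=$(awk -v ml="$total_ml" 'BEGIN{printf "%.2f", ml*0.35}')   # 35p/ml
echo "about ${total_ml}ml of ink, roughly £$cost"
```

The point is that tripling the white underlayer roughly triples that part of the cost, which is why stickers (very thick prints) come out so much dearer than a flat single pass.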
The key message here is understand the limitations. Not just on size, height, and so on, none of which are a big issue, but especially in terms of the durability of prints on different materials. You also need to understand costs (e.g. stickers use a lot of ink, as well as A and B film).
Once you have a handle on those limitations and costs you can understand what you can print and where. Then you can get creative with a lot of options.
Pluralistic: In praise of (some) compartmentalization (14 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]

If there's one FAQ I get Q'ed most F'ly, it's this: "How do you get so much done?" The short answer is, "I write when I'm anxious (which is how I came to write nine books during lockdown)." The long answer is more complicated.
The first complication to understand is that I have lifelong, degenerating chronic pain that makes me hurt from the base of my skull to the soles of my feet – my whole posterior chain. On a good day, it hurts. On a bad day, it hurts so bad that it's all I can think about.
Unless…I work. If I can find my way into a creative project, the rest of the world just kind of fades back, including my physical body. Sometimes I can get there through entertainment, too – a really good book or movie, say, but more often I find myself squirming and needing to get up and stretch or use a theragun after a couple hours in a movie theater seat, even the kind that reclines. A good conversation can do it, too, and is better than a movie or a book. The challenge and engagement of an intense conversation – preferably one with a chewy, productive and interesting disagreement – can take me out of things.
There's a degree to which ignoring my body is the right thing to do. I've come to understand a lot of my pain as being a phantom, a pathological failure of my nervous system to terminate a pain signal after it fires. Instead of fading away, my pain messages bounce back and forth, getting amplified rather than attenuated, until all my nerves are screaming at me. Where pain has no physiological correlate – in other words, where the ache is just an ache, without a strain or a tear or a bruise – it makes sense to ignore it. It's actually healthy to ignore it, because paying attention to pain is one of the things that can amplify it (though not always).
But this only gets me so far, because some of my pain does have a physiological correlate. My biomechanics suck, thanks to congenital hip defects that screwed up the way I walked and sat and lay and moved for most of my life, until eventually my wonky hips wore out and I swapped 'em for a titanium set. By that point, it was too late, because I'd made a mess of my posterior chain, all the way from my skull to my feet, and years of diligent physio, swimming, yoga, occupational therapy and physiotherapy have barely made a dent. So when I sit or stand or lie down, I'm always straining something, and I really do need to get up and move around and stretch and whatnot, or sure as hell I will pay the price later. So if I get too distracted, then I start ignoring the pain I need to be paying attention to, and that's at least as bad as paying attention to the pain I should be ignoring.
Which brings me to anxiety. These are anxious times. I don't know anyone who feels good right now. Particularly this week, as the Strait of Epstein emergency gets progressively worse, and there's this January 2020 sense of the crisis on the horizon, hitting one country after another. Last week, Australia got its last shipment of fossil fuels. This week, restaurants in India are all shuttered because of gas rationing. People who understand these things better than I do tell me that even if Trump strokes out tonight and Hegseth overdoes the autoerotic asphyxiation, it'll be months, possibly years, before things get back to "normal" ("normal!").
Any time I think about this stuff for even a few minutes, I start to feel that covid-a-comin', early-2020 feeling, only it's worse this time around, because I literally couldn't imagine what covid would mean when it got here, and now I know.
When I start to feel those feelings, I can just sit down and start thinking with my fingers, working on a book or a blog-post. Or working on an illustration to go with one of these posts, which is the most delicious distraction, leaving me with just enough capacity to mull over the structure of the argument that will accompany it.
I can't do anything about the impending energy catastrophe, apart from being part of a network of mutual aid and political organizing, so it makes sense not to fixate on it. But there are things that upset me – problems my friends and loved ones are having – where there's such a thing as too much compartmentalization. It's one thing to lose myself in work until the heat of emotion cools so I can think rationally about an issue that's got me seeing red, and another to use work as a way to neglect a loved one who needs attention in the hope that the moment will pass before I have to do any difficult emotional labor.
Compartmentalization, in other words, but not too much compartmentalization. During the lockdown years, I transformed myself into a machine for turning Talking Heads bootlegs into science fiction novels and technology criticism, and that was better than spending that time boozing or scrolling or fighting – but in retrospect, there's probably more I could have done during those hard months to support the people around me. In my defense – in all our defenses – that was an unprecedented situation and we all did the best we could.
Creative work takes me away from my pain – both physical and emotional – because creative work takes me into a "flow" state. This useful word comes to us from Mihaly Csikszentmihalyi, who coined the term in the 1960s while he was investigating a seeming paradox: how was it that we modern people had mastered so many of the useful arts and sciences, and yet we seemed no happier than the ancients? How could we make so much progress in so many fields, and so little progress in being happy?
In his fieldwork, Csikszentmihalyi found that people reported the most happiness while they were doing difficult things well – when your "body or mind is stretched to its limits in a voluntary effort to accomplish something difficult and worthwhile." He called this state "flow."
As Derek Thompson says, the word "flow" implies an effortlessness, but really, it's the effort – just enough, not too much – that defines flow-states. We aren't happiest in a frictionless world, but rather, in a world of "achievable challenges":
https://www.derekthompson.org/p/how-zombie-flow-took-over-culture
Thompson relates this to "the law of familiar surprises," an idea he developed in his book Hit Makers, which investigated why some media, ideas and people found fame, while others languished. A "familiar surprise" is something that's "familiar but not too familiar."
He thinks that the Hollywood mania for sequels and reboots is the result of media execs chasing "familiar surprises." I think there's something to this, but we shouldn't discount the effect that monopolization has on the media: as companies get larger and larger, they end up committing to larger and larger projects, and you just don't take the kinds of risks with a $500m movie that you can take with a $5m one. If you're spending $500m, you want to hedge that investment with as many safe bets as you can find – big name stars, successful IP, and familiar narrative structures. If the movie still tanks, at least no one will get fired for taking a big, bold risk.
Today, we're living in a world of extremely familiar, and progressively less surprising culture. AI slop is the epitome of familiarity, since by definition, AI tries to make a future that is similar to the past, because all it can do is extrapolate from previous data. That's a fundamentally conservative, uncreative way to think about the world:
https://pluralistic.net/2020/05/14/everybody-poops/#homeostatic-mechanism
The tracks the Spotify algorithm picks out of the catalog are going to be as similar to the ones you've played in the past as it can make them – and the royalty-free slop tracks that Spotify generates with AI or commissions from no-name artists will be even more insipidly unsurprising:
https://pluralistic.net/2022/09/12/streaming-doesnt-pay/#stunt-publishing
Thompson cites Shishi Wu's dissertation on "Passive Flow," a term she coined to describe how teens fall into social media scroll-trances:
https://scholarworks.umb.edu/cgi/viewcontent.cgi?article=2104&context=doctoral_dissertations
Wu says it's a mistake to attribute the regretted hours of scrolling to addiction or a failure of self-control. Rather, the user is falling into "passive flow," a condition arising from three factors:
I. Engagement without a clear goal;
II. A loss of self-awareness – of your body and your mental state;
III. Losing track of time.
I instantly recognize II. and III. – they're the hallmarks of the flow states that abstract me away from my own pain when I'm working. The big difference here is I. – I go to work with the clearest of goals, while "passive flow" is undirected (Thompson also cites psychologist Paul Bloom, who calls the scroll-trance "shitty flow." In shitty flow, you lose track of the world and its sensations – but in a way that you later regret.)
Thompson has his own name for this phenomenon of algorithmically induced, regret-inducing flow: he calls it "zombie flow." It's flow that "recapitulates the goal of flow while evacuating the purpose."
Zombie flow is "progress without pleasure" – it's frictionless, and so it gives us nothing except that sense of the world going away, and when it stops, the world is still there. The trick is to find a way of compartmentalizing that rewards attention with some kind of productive residue that you can look back on with pride and pleasure.
I wouldn't call myself a happy person. I don't think I know any happy people right now. But I'm an extremely hopeful person, because I can see so many ways that we can make things better (an admittedly very low bar), and I have mastered the trick of harnessing my unhappiness to the pursuit of things that might make the world better, and I'm gradually learning when to stop escaping the pain and confront it.
(Image: marsupium photography, CC BY-SA 2.0, modified)

The Science of Forced Perspective at Disney Parks https://www.youtube.com/watch?v=yqefjmRVLTM
The Reverse Centaur’s Guide to Life After AI: How to Think About Artificial Intelligence—Before It’s Too Late https://www.publishersweekly.com/9780374621568
EFF HOPE: Join Us This August! https://www.eff.org/deeplinks/2026/04/eff-hope-join-us-august
Here Are the Finalists for the 2025 Locus Awards https://reactormag.com/finalists-2025-locus-awards/
#25yrsago Pee-Wee Herman on his career https://web.archive.org/web/20010414033156/https://ew.com/ew/report/0,6115,105857~1~0~paulreubensreturnsto,00.html
#25yrsago Anxious hand-wringing about multitasking teens https://www.nytimes.com/2001/04/12/technology/teenage-overload-or-digital-dexterity.html
#20yrsago Clever t-shirt typography spells “hate” – “love” in mirror-writing https://web.archive.org/web/20060413102804/https://accordionguy.blogware.com/blog/_archives/2006/4/12/1881414.html
#20yrsago New Mexico Lightning Field claims to have copyrighted dirt https://diaart.org/visit/visit-our-locations-sites/walter-de-maria-the-lightning-field#overview
#20yrsago Futuristic house made of spinach protein and soy-foam https://web.archive.org/web/20060413111650/http://bfi.org/node/828
#15yrsago New Zealand to sneak in Internet disconnection copyright law with Christchurch quake emergency legislation https://www.stuff.co.nz/technology/digital-living/4882838/Law-to-fight-internet-piracy-rushed-through
#10yrsago Bake: An amazing space-themed Hubble cake https://www.sprinklebakes.com/2016/04/black-velvet-nebula-cake.html
#10yrsago Shanghai law uses credit scores to enforce filial piety https://www.caixinglobal.com/2016-04-11/shanghai-says-people-who-fail-to-visit-parents-will-have-credit-scores-lowered-101011746.html
#10yrsago Walmart heiress donated $378,400 to Hillary Clinton campaign and PACs https://web.archive.org/web/20160414155119/https://www.alternet.org/election-2016/alice-walton-donated-353400-clintons-victory-fund
#10yrsago Mass arrests at DC protest over money in politics https://www.washingtonpost.com/local/public-safety/mass-arrests-of-protesters-in-demonstration-at-capitol-against-big-money/2016/04/11/96c13df0-0037-11e6-9d36-33d198ea26c5_story.html
#10yrsago Churchill got a doctor’s note requiring him to drink at least 8 doubles a day “for convalescence” https://web.archive.org/web/20130321054712/https://arttattler.com/archivewinstonchurchill.html
#5yrsago Big Tech's secret weapon is switching costs, not network effects https://pluralistic.net/2021/04/12/tear-down-that-wall/#zucks-iron-curtain
#5yrsago Fraud-resistant election-tech https://pluralistic.net/2021/04/12/tear-down-that-wall/#bmds
#1yrago Blue Cross of Louisiana doesn't give a shit about breast cancer https://pluralistic.net/2025/04/12/pre-authorization/#is-not-a-guarantee-of-payment

San Francisco: 2026 Berkeley Spring Forum on M&A and the Boardroom, Apr 23
https://www.theberkeleyforum.com/#agenda
London: Resisting Big Tech Empires (LSBU), Apr 25
https://www.tickettailor.com/events/globaljusticenow/2042691
NYC: Enshittification at Commonweal Ventures, Apr 29
https://luma.com/ssgfvqz8
NYC: Techidemic with Sarah Jeong, Tochi Onyibuchi and Alia Dastagir (PEN World Voices), Apr 30
https://worldvoices.pen.org/event/techidemic/
Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow
Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html
Hay-on-Wye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2
SXSW London, Jun 2
https://www.sxswlondon.com/session/how-big-tech-broke-the-internet-b3c4a901
Do you feel screwed over by big tech? (Ontario Today)
https://www.cbc.ca/listen/live-radio/1-45-ontario-today/clip/16203024-do-feel-screwed-big-tech
Launch for Cindy's Cohn's "Privacy's Defender" (City Lights)
https://www.youtube.com/watch?v=WuVCm2PUalU
Chicken Mating Harnesses (This Week in Tech)
https://twit.tv/shows/this-week-in-tech/episodes/1074
The Virtual Jewel Box (U Utah)
https://tanner.utah.edu/podcast/enshittification-cory-doctorow-matthew-potolsky/
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026
"The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027
"The Memex Method," Farrar, Straus, Giroux, 2027
Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Bluesky (no ads, possible tracking and data-collection):
https://bsky.app/profile/doctorow.pluralistic.net
Medium (no ads, paywalled):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Learning To Relax With The Pelvic Wand by Alyssa Read [Oh Joy Sex Toy]
Ravi Dwivedi: Hungary Visa [Planet Debian]
The annual LibreOffice conference 2025 was held in Budapest, Hungary, from the 3rd to the 6th of September 2025. Thanks to The Document Foundation (TDF) for sponsoring me to attend the conference.
As Hungary is a part of the Schengen area, I needed a Schengen visa to attend the conference. In order to apply for a Schengen visa, one needs to get an appointment at VFS Global and submit all the required documents there, which are then forwarded to the embassy.
I got an appointment for a Hungary visa at VFS Global in New Delhi for the 24th of July. There were many appointment slots available for the Hungary visa. One could easily get an appointment for the next day at the Delhi center. There were some technical problems on the VFS website, though, as I was unable to upload a scanned copy of my passport while booking the appointment. I got an error saying, “Unfortunately, you have exceeded the maximum upload limit.”
The problem didn’t get fixed even after I contacted the VFS helpline. They asked me to try the Firefox browser and delete all the cache, which I had already done.
So I created another account with a different email address and phone number, after which I was able to upload my passport and book an appointment. Other conference attendees from India also reported facing some technical issues on the VFS Hungary website.
Anyway, I went to the VFS Hungary application center as per my appointment on the 24th of July. Going inside, I located the Hungary visa application counter. There were two applicants ahead of me.
When it was my turn, the VFS staff warned me that my passport was damaged. The “damage” was on the bio-data page. All the details could be seen, but the lamination of the details page had worn off a bit. They asked me to write an application to the Embassy of Hungary in New Delhi stating that I insisted VFS submit my application anyway, and describing the “damage” on my passport.
I got a bit worried about my application getting rejected due to the “damage.” But I decided to gamble my money on this one, as I didn’t have time (and energy) to apply for a new passport before this trip.
Moreover, I had struck out a couple of fields in my visa application form which were not applicable to me, so the VFS staff asked me to fill out a fresh application form.
After this, the application was submitted; the total came to 11,000 INR (including the fee to book the appointment at VFS). Here is the list of documents I submitted:
My passport
Photocopy of my passport
Two photographs of myself
Duly filled visa application form
Return flight ticket reservations
Payslips for the last three months
Invitation letter from the conference organizer (in Hungarian)
Proof of hotel bookings during my stay in Hungary
Cover letter stating my itinerary
Income tax returns filed by me
Bank account statement, signed and sealed by the bank
Travel insurance valid for the period of the entire trip
It took two hours for me to submit my visa application, even though there were only two applicants ahead of me. This was by far the longest a Schengen visa submission has ever taken me.
Fast-forward to the 30th of July: I received an email from the Embassy of Hungary asking me to submit an additional document for my application - a paid air ticket. I had only submitted dummy flight tickets, and they had been enough for all the Schengen visas I had applied for until then. This was the first time a country had asked me for a confirmed flight ticket during the visa process.
I consulted my travel agent on this, and they were fairly confident that I would get the visa if the embassy was asking for confirmed flight tickets. So I asked the travel agent to book the flights. The tickets cost ₹78,000, and the airline was Emirates. Then, I sent the flight tickets to the embassy by email.
The embassy sent the visa results on the 6th of August, which I received the next day.
My visa had been approved! It took 14 days for me to get the Hungary visa after submitting the application.
See you in the next one!
Thanks to Badri for proofreading.
I always preferred Forza Horizon to the traditional Forza, but I never thought that was a compliment to me. I would love to say that I stopped going to church because I had unspun the fundamental philosophies inherent to it, and stood astride it, gleaming like a newly minted God. It's a fiction I'd love to maintain. In the end, I simply couldn't hack it - I was willing to engage in a multi-year campaign of gruesome self-recrimination, obviously. But once it became clear that it was utterly open-ended, a blank check I would perpetually cash against my own identity, I could burn myself alive or try to go on as a maimed and useless creature. That's basically me and Forza Horizon. I wish that I were hardcore enough for the progenitor - of the universe, or the racing franchise. Take your pick.

A Very “Engaging” Charcuterie Board [Whatever]
Hey, everyone! I was going to continue to post about my adventures in Colorado, but I decided a detour was in order today to show y’all this spread I did last night for my friend’s engagement party. Feast your eyes on my (mainly Aldi and partially Kroger) spread of goods for about fifty people to snack on:

So, while this isn’t everything I put out, this is the main event. I was very nervous to do a spread for so many people, as normally I deal in much smaller groups; usually my boards are made for about ten people. I know you’re probably thinking, there’s no way that spread survived fifty people. And you’d be right! After the first wave of snackers, I snuck in to refill everything, and continued refilling as necessary to keep it looking full and make sure everyone got a bite of what they wanted.
I was informed ahead of time that there were no known allergies amongst the entire group (except, of course, my bestie having a gluten intolerance). With that knowledge in mind, let’s look at what we got!
We’ve got double cream brie, dill Havarti, smoked gouda, cranberry cheddar, espresso martini soaked cheddar, pimento cheese dip, honey goat cheese, and a garlic and herbs Boursin. For the meats I did a very simple prosciutto and salami. I also brought a garlic summer sausage but I couldn’t really make it work in my presentation so I gave up on it and just went with the two meats, which honestly who needs more meat than just prosciutto and salami? Those are my two favorites, anyway.
Accoutrements include fig jam, a berry jalapeno jam, Stonewall Kitchen’s Maine Maple Champagne Mustard, quince paste, a pear, cardamom, and pistachio jam, blackcurrant mustard, Truff hot sauce, and an orange whiskey jam. There’s also stuffed peppers and herby olives, dates, salted caramel black truffle peanuts, rosemary Marcona almonds, pistachios, hot honey cashews, and chocolate covered pomegranate seeds. Finally, front and center is Zeroe Caviar’s vegan caviar made from seaweed. I’ve never put it on a board before, but I figured caviar was needed at an engagement party.
As you can tell from the grapes all the way on the right, there’s more to see than this picture lets on. I just did some strawberries, blackberries, and grapes with fruit fluff, and then pinwheel striped and sliced some mini cucumbers and set those out with carrots and celery alongside tzatziki and feta dip, plus a creamy ranch dip. There was also a tray of various cookies like Walker’s shortbread, Pirouette cookies, and some strawberry and creme covered pretzels. Plus blue corn tortilla chips and salsa.
Here’s a different angle so hopefully you can somewhat see some other items:

At the end you can see the fruit fluff and fruit, and the veggies and dips further down. And look, someone brought hummus! How thoughtful. Luckily, I had pita chips to go with it. I also set out some cranberry crisps, rosemary flatbread crackers, and some other entertainment crackers but nothing really of note. I kept my friend’s gluten-free crackers behind the counter for her, as well as her gluten-free cookies.
So, there you have it, a spread from yours truly for my bestie’s engagement party. I am so excited for her, her fiancé, and to be in her wedding. She means the world to me and I was happy to feed those closest to her.
Which cheese sounds the best to you? Would you try the vegan caviar? Let me know in the comments, and have a great day!
-AMS
The Dangers of California’s Legislation to Censor 3D Printing [Deeplinks]
California’s bill, A.B. 2047, will not only mandate censorware — software which exists to bluntly block your speech as a user — on all 3D printers; it will also criminalize the use of open-source alternatives. Repeating the mistakes of Digital Rights Management (DRM) technologies won’t make anyone safer. What it will do is hurt innovation in the state and risk a slew of new consumer harms, ranging from surveillance to platform lock-in. California must stand with creators and reject this legislation before it’s too late.
3D printing might evoke images of props from blockbuster films, rapid prototyping, medical research, or even affordable repair parts. Yet for a growing number of legislators, the perceived threat of “ghost guns” is a reason to impose restrictions on all 3D printers. Despite 3D printing of guns already being rare and banned under existing law, California may outright criminalize any user having control over their own device.
This bill is a gift for the biggest 3D printer manufacturers looking to adopt HP’s approach to 2D printing: criminalize altering your printer’s code, lock users into your own ecosystem, and let enshittification run its course. Even worse, algorithmic print blocking will never work for its intended purpose, but it will threaten consumer choice, free expression, and privacy.
A misstep here can have serious repercussions across the whole 3D printing industry, lead the way for more bad bills, and leave California with an expensive and ineffective bureaucratic mess.
Compared to the Washington and New York laws proposed this year, California’s is the most troubling. It criminalizes open source, reduces consumer choice, and creates a bureaucratic burden.
A.B. 2047 goes further than any other legislation on algorithmic print-blocking by making it a misdemeanor for the owners of these devices to disable, deactivate, or otherwise circumvent these mandated algorithms. Not only does this effectively criminalize use of any third-party, open-source 3D printer firmware, but it also enables print-blocking algorithms to parallel anti-consumer behaviors seen with DRM.
Manufacturers will be able to lock users into first-party tools, parts, and “consumables” (analogous to how 2D printer ink works). They will also be able to mandate purchases through first-party stores, imposing a heavy platform tax. Additionally, manufacturers could force regular upgrade cycles through planned obsolescence by ceasing updates to a printer’s print-blocking system, thereby taking devices out of compliance and making them illegal for consumers to resell. In short, a wide range of anti-consumer practices can be enforced, potentially resulting in criminal charges.
Independent of these deliberate harms manufacturers may inflict, DRM has shown that criminalizing code leads to more barriers to repair, more consumer waste, and far more cybersecurity risks by criminalizing research.
The bill favors incumbent manufacturers over newer competitors and over the interests of consumers.
Less-established manufacturers will need to dedicate considerable time and resources to implementing the ineffective solutions discussed above, navigating state approval, and potentially paying licensing fees to third-party developers of sham print-blocking software. While these burdens may be absorbed by the biggest producers of this equipment, it considerably raises the barrier to entry on a technology that can otherwise be individually built from scratch with common equipment. The result is clear: fewer options for consumers and more leverage for the biggest producers.
Retailers will feel this pinch, but the second-hand market will feel it most acutely. Resale is an important property right for people to recoup costs and serves as an important check on inflating prices. But under this bill, such resale risks misdemeanor penalties.
The bill locks users into a walled garden; it demands manufacturers ensure 3D printers cannot be used with third-party software tools. By creating barriers to the use of popular and need-specific alternatives, this legislation will limit the utility and accessibility of these devices across a broad spectrum of lawful uses.
A.B. 2047’s title 21.1 §3723.633-637 creates a print-blocking bureaucracy, leaning heavily on the California Department of Justice (DOJ). Initially, the DOJ must outline the technical standards for detecting and blocking firearm parts, and later certify print-blocking algorithms and maintain lists of compliant 3D printers. If a printer or software doesn’t make it through this red tape, it will be illegal to sell in the state.
The bill also requires the department to establish a database of banned blueprints that must be blocked by these algorithms. This database and printer list must be continually maintained as new printer models are released and workarounds are discovered, requiring effort from both the DOJ and printer manufacturers.
For all the cost and burden of creating and maintaining such a database, those efforts will inevitably be outpaced by rapid iterations and workarounds by people breaking existing firearms laws.
Once implemented, this infrastructure will be difficult to rein in, causing unintended consequences. The database meant for firearm parts can easily expand to copyright or political speech. Scans meant to be ephemeral can be collected and surveilled. This is cause for concern for everyone, as these levers of control will extend beyond the borders of the Golden State.
While California is at the forefront of print blocking, the impacts will be felt far outside of its borders. Once printer companies have the legal cover to build out anti-competitive and privacy-invasive tools, they will likely be rolled out globally. After all, it is not cost-effective to maintain two forks of software, two inventories of printers, and two distribution channels. Once California has created the infrastructure to censor prints, what else will it be used for?
As we covered in “Print Blocking Won’t Work,” these print-blocking efforts are not only doomed to fail, but will render all 3D printer users vulnerable to surveillance either by forcing them into a cloud scanning solution for “on-device” results, or by chaining them to first-party software which must connect to the cloud to regularly update its print-blocking system.
This law demands an unfeasible technological solution for something that is already illegal. Not only is this bad legislation with few safeguards, it risks the worst outcomes for grassroots innovation and creativity—both within the state and across the global 3D printing community.
California should reject this legislation before it’s too late, and advocates everywhere should keep an eye out for similar legislation in their states. What happens in California won't just stay in California.
Free Software Directory meeting on IRC: Friday, April 17, starting at 12:00 EDT (16:00 UTC) [Planet GNU]
Join the FSF and friends on Friday, April 17 from 12:00 to 15:00 EDT (16:00 to 19:00 UTC) to help improve the Free Software Directory.
Finding a duplicated item in an array of N integers in the range 1 to N − 1 [The Old New Thing]
A colleague told me that there was an O(N) algorithm for finding a duplicated item in an array of N integers in the range 1 to N − 1. There must be a duplicate due to the pigeonhole principle. There might be more than one duplicated value; you merely have to find any duplicate.¹
The backstory behind this puzzle is that my colleague had thought this problem was solvable in O(N log N), presumably by sorting the array and then scanning for the duplicate. They posed this as an interview question, and the interviewee found an even better linear-time algorithm!
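For reference, the O(N log N) approach my colleague had in mind is just sort-and-scan. A minimal sketch (the function name is mine; sorting a copy so the caller's array survives):

```python
def find_duplicate_sorted(a):
    """O(N log N) baseline: sort a copy, then scan for equal neighbors.

    By the pigeonhole principle, N values drawn from 1..N-1 must contain
    a repeat, so two equal elements end up adjacent after sorting.
    """
    s = sorted(a)  # work on a copy; the original array is untouched
    for x, y in zip(s, s[1:]):
        if x == y:
            return x
    return None  # unreachable if the precondition (values in 1..N-1) holds
```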
My solution is to interpret the array as a linked list of 1-based indices, and borrow the sign bit of each integer as a flag to indicate that the slot has been visited. We start at index 1 and follow the indices until they either reach a value whose sign bit has already been set (which is our duplicate), or they return to index 1 (a cycle). If we find a cycle, then move to the next index which does not have the sign bit set, and repeat. At the end, you can restore the original values by clearing the sign bits.²
I figured that modifying the values was acceptable given that the O(N log N) solution also modifies the array. At least my version restores the original values when it’s done!
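A Python sketch of this marking walk (the function name and the 0-based translation are mine; node i's pointer is stored at slot i−1, and "return to index 1" generalizes to "return to the current walk's start"):

```python
def find_duplicate_marking(a):
    """Find a duplicate in N integers from 1..N-1 by treating a[i-1] as a
    pointer from node i to node a[i-1], borrowing each entry's sign bit
    as a 'visited' flag.  O(N) time; sign bits are restored at the end.
    """
    n = len(a)
    dup = None
    for start in range(1, n + 1):
        if dup is not None:
            break
        if a[start - 1] < 0:          # slot already swept by an earlier walk
            continue
        cur = start
        while True:
            nxt = a[cur - 1]
            if nxt < 0:               # 'cur' has been reached once before
                if cur == start:      # walk closed into a cycle: no dup here
                    break
                dup = cur             # two pointers lead to 'cur': duplicate
                break
            a[cur - 1] = -nxt         # set the visited flag on this slot
            cur = nxt
    for i in range(n):                # restore the original values
        if a[i] < 0:
            a[i] = -a[i]
    return dup
```

For example, on [1, 3, 4, 2, 2] the walks 1→1 and 2→3→4→2 close into cycles, and the walk from index 5 then steps onto the already-marked node 2, the duplicate.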
But it turns out the interview candidate found an even better O(N) algorithm, one that doesn’t modify the array at all.
Again, view the array values as indices. You are looking for two nodes that point to the same destination. You already know that no array entry has the value N, so the entry at index N cannot be part of a cycle. Therefore, the chain that starts at N must eventually join an existing cycle, and that join point is a duplicate. Start at index N and use Floyd’s cycle detector algorithm to find the start of the cycle in O(N) time.
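A sketch of the candidate's approach (the name is mine; 1-based node values are mapped onto a 0-based Python list):

```python
def find_duplicate(a):
    """Find a duplicate in N integers from 1..N-1 in O(N) time and O(1)
    space, without modifying the array.

    View value a[i-1] as a pointer from node i to node a[i-1].  No value
    equals N, so node N lies outside every cycle; the chain starting at N
    must join a cycle, and the cycle's entry point is reached by two
    different pointers, i.e. it is a duplicated value.
    """
    n = len(a)
    step = lambda x: a[x - 1]  # follow the pointer stored at 1-based index x

    # Phase 1 of Floyd's algorithm: tortoise and hare meet inside the cycle.
    slow = fast = n
    while True:
        slow = step(slow)
        fast = step(step(fast))
        if slow == fast:
            break

    # Phase 2: restart one pointer at N; they next meet at the cycle entry,
    # which is the duplicated value.
    slow = n
    while slow != fast:
        slow = step(slow)
        fast = step(fast)
    return slow
```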
¹ If you constrain the problem further to say that there is exactly one duplicate, then you can find the duplicate by summing all the values and then subtracting N(N−1)/2.
² I’m pulling a fast one. This is really O(N) space because I’m using the sign bit as a convenient “initially zero” flag bit.
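Footnote ¹'s special case reduces to one line of arithmetic (a sketch, assuming exactly one value appears exactly twice and all others once):

```python
def find_unique_duplicate(a):
    """If exactly one value in 1..N-1 appears twice and the rest appear
    once, the duplicate is the array's surplus over 1 + 2 + ... + (N-1),
    which sums to N(N-1)/2."""
    n = len(a)
    return sum(a) - n * (n - 1) // 2
```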
The post Finding a duplicated item in an array of <VAR>N</VAR> integers in the range 1 to <VAR>N</VAR> − 1 appeared first on The Old New Thing.
EFF 🤝 HOPE: Join Us This August! [Deeplinks]
Protecting privacy and free speech online takes more than policy work—it takes community. Conferences like HOPE are where that community comes together to learn, connect, and push these ideals forward. That's why EFF is proud to be at HOPE 26.
Join us at this year's Hackers On Planet Earth, August 14-16 at the New Yorker Hotel in Manhattan! Get your ticket now and support our work: throughout April EFF will receive 10% of all ticket proceeds for HOPE 26.
See EFF at HOPE 26 in New York
While you're there, be sure to catch talks from EFF's technologists, attorneys, and activists covering a wide range of digital civil liberties topics. You can get a taste of the talks to come by watching last year's EFF presentations at HOPE_16 on YouTube:
How a Handful of Location Data Brokers Actively Tracked Millions, and How to Stop Them
In the past year, a number of investigations have revealed the outsized role of a few select companies in gathering, storing, and selling the location data of millions of devices - and by extension people - worldwide. This talk will elaborate on the technologies, data flows, and industry players which comprise this complicated ecosystem.
Ask EFF
Get an update on current EFF work, including the ongoing case against the "Department" of Government Oversight, educating the public on their digital rights, organizing communities to resist ongoing government surveillance, and more.
Systems of Dehumanization: The Digital Frontlines of the War Against Bodily Autonomy
Daly covers the bad Internet bills that made sex work more dangerous, the ongoing struggle for abortion access in America, and the persecution of trans people across all spectrums of life. These issue-spaces are deeply connected, and the digital threats they face are uniquely dangerous. Come to learn about these threat models, as well as the cross-movement strategies being built for collective liberation against an authoritarian surveillance state.
Snag a ticket by the end of April to help support EFF's work ensuring that technology works for everyone. We hope to see you there!
The Shattering Peace Is a Locus Award Finalist [Whatever]


The book (shown here in its “bedazzled” version sitting on a bookshelf next to John Harris’ art book, and a painting of Smudge) is a finalist in the category of Best Science Fiction Novel, along with these other worthy finalists (list scrounged from the Locus Magazine web site):
What excellent company to be in.
The full list of Locus Award finalists for this year can be found here. Congratulations to everyone! It is an honor to be in this peer group with you.
— JS
Hot Off the Press: EFF's Updated Guide to Tech at the US-Mexico Border [Deeplinks]
When people see Customs & Border Protection's giant, tethered surveillance blimp flying 20 miles outside of Marfa, Texas, lots of them confuse it with an art installation. Elsewhere along the U.S.-Mexico border, surveillance towers get mistaken for cell-phone towers. And that traffic barrel? It's actually a camera. That piece of rusted litter? That's a camera too.
Today we are publishing a major update to our zine, "Surveillance Technology at the U.S.-Mexico Border," the first since the second Trump administration began. To help people identify the machinery of homeland security, we've added more models of surveillance towers, newly deployed military tech, and a gallery of disguised trail cams and automated license plate readers.
You can get this 40-page, full-color guide through EFF's Shop or download a Creative-Commons licensed version here.
"The Battalion Search and Rescue always carries the Electronic Frontier Foundation’s zine in our desert rig," says James Holeman, who founded the humanitarian group that looks for human remains in remote parts of New Mexico and Arizona. "We’re finding new surveillance all the time, and without a resource like that, we wouldn't know what the hell we're looking at.”
The original version of the zine was distributed nearly exclusively to our allies in the borderlands—journalists, humanitarian aid workers, immigrant advocates—to help them better identify and report on the technology they discover on the ground. We only made a handful available in our online shop, and they went fast.
This time, we've printed enough for our broader EFF membership. Even if you don't live near the border, you can support our work uncovering how the U.S. Department of Homeland Security's technology threatens human rights by picking up a copy.
The zine is the culmination of a dozen trips to the border, where we hunted surveillance towers and other tech installations. We attended multiple border security conventions to collect promotional and technical materials directly from vendors. We filed public records requests, reviewed thousands of pages of docs, and analyzed satellite imagery of the entire 2,000-mile border several times over. Some of the images came from local allies, like geographer Dugan Meyer and Borderlands Relief Collective, who continue to share valuable intelligence on the changing landscape of border surveillance.
The update is available in English, with an updated Spanish version expected later this year. In the meantime, we have reprinted the original Spanish edition.
If you want to know more, a collection of EFF's broader work on border technology is available here. And if you're curious exactly where these technologies are located, you can check our ongoing map.
On Anthropic’s Mythos Preview and Project Glasswing [Schneier on Security]
The cybersecurity industry is obsessing over Anthropic’s new model, Claude Mythos Preview, and its effects on cybersecurity. Anthropic said that it is not releasing it to the general public because of its cyberattack capabilities, and has launched Project Glasswing to run the model against a whole slew of public domain and proprietary software, with the aim of finding and patching all the vulnerabilities before hackers get their hands on the model and exploit them.
There’s a lot here, and I hope to write something more considered in the coming week, but I want to make some quick observations.
One: This is very much a PR play by Anthropic—and it worked. Lots of reporters are breathlessly repeating Anthropic’s talking points, without engaging with them critically. OpenAI, presumably pissed that Anthropic’s new model has gotten so much positive press and wanting to grab some of the spotlight for itself, announced its model is just as scary, and won’t be released to the general public, either.
Two: These models do demonstrate an increased sophistication in their cyberattack capabilities. They write effective exploits—taking the vulnerabilities they find and operationalizing them—without human involvement. They can find more complex vulnerabilities: chaining together several memory corruption bugs, for example. And they can do more with one-shot prompting, without requiring orchestration and agent configuration infrastructure.
Three: Anthropic might have a good PR team, but the problem isn’t with Mythos Preview. The security company Aisle was able to replicate the vulnerabilities that Anthropic found, using older, cheaper, public models. But there is a difference between finding a vulnerability and turning it into an attack. This points to a current advantage to the defender. Finding for the purposes of fixing is easier for an AI than finding plus exploiting. This advantage is likely to shrink, as ever more powerful models become available to the general public.
Four: Everyone who is panicking about the ramifications of this is correct about the problem, even if we can’t predict the exact timeline. Maybe the sea change just happened, with the new models from Anthropic and OpenAI. Maybe it happened six months ago. Maybe it’ll happen in six months. It will happen—I have no doubt about it—and sooner than we are ready for. We can’t predict how much more these models will improve in general, but software seems to be a specialized language that is optimal for AIs.
A couple of weeks ago, I wrote about security in what I called “the age of instant software,” where AIs are superhumanly good at finding, exploiting, and patching vulnerabilities. I stand by everything I wrote there. The urgency is now greater than ever.
I was also part of a large team that wrote a “what to do now” report. The guidance is largely correct: We need to prepare for a world where zero-day exploits are dime-a-dozen, and lots of attackers suddenly have offensive capabilities that far outstrip their skills.
Speaking Freely: Dr. Jean Linis-Dinco [Deeplinks]
Dr. Jean Linis-Dinco is an activist-researcher working at the intersection of human rights and technology. Born in the Philippines and shaped by firsthand experience with inequality and state violence, Jean has spent her life pushing back against systems that profit from oppression. She refuses to accept a world where tech is just another tool for corporate gain. Instead, she fights for technologies and policies that put people before profit and justice before convenience. Jean earned her PhD in Cybersecurity from the University of New South Wales, Canberra, where she exposed how governments weaponized propaganda and disinformation during the Rohingya crisis in Myanmar. She currently serves as the Digital Rights Advisor for the Manushya Foundation.
David Greene: Welcome. To get started can you just introduce yourself to folks?
Jean Linis-Dinco: I'm not very good at introducing myself and I rarely do so within the context of work because I always believe that people are more than their jobs.
But first, I would like to thank you for this opportunity to share my thoughts. I've learned this kind of introduction from Kumu Vicky Holt Takamine in Hawai’i. She taught me how to introduce myself beyond titles.
So, my name is Jean, my waters are the West Philippine Sea, and I was born and raised in the land of resistance, one of the original eight provinces that revolted against Spain as they are represented by the eight rays of the sun on the Philippine flag. My ancestors fought for the freedom of the Filipino people against Spanish colonial rule, before we became subjugated once again, this time under the United States for another 48 years. The impacts of that history continue to reverberate through the domestic and international policies that ultimately pushed me out of my own country as an overseas Filipino worker.
DG: Can you tell us a bit about Manushya Foundation?
JLD: Absolutely. Manushya Foundation is a women-led organization that works with activists and human rights defenders who are targeted, who face harassment and transnational repression for their work. My work with them is on the policy and advocacy side in relation to their digital rights portfolio. It involves challenging laws and policies that criminalize freedom of expression or freedom of speech online.
It also means confronting the role of private corporations and private platforms. Because that power is rarely transparent. Big tech power is often unaccountable, as we've seen in recent years. Working in a civil society organization like Manushya, you get involved with the work on the ground and take part in grassroots-led advocacy confronting corporate abuse.
In my work, I have met people from all sorts of backgrounds. And across those encounters, I've noticed some troubling trends in some civil society organizations. There are heaps of civil society leaders who are very keen to have a seat at the table with big tech companies. It’s often hidden behind the language of ‘stakeholder engagement’. We refuse to do that at Manushya Foundation. We don’t want to be used as a rubber stamp for decisions that have already been made behind NDAs or decisions where communities most affected by these technologies were never even in the room to begin with.
I think civil society organizations should not allow themselves to be drawn into that orbit. That is very contentious in this era, because I feel like civil society bought the story that big tech could be partners in progress. We walked into their boardrooms, signing NDAs as if proximity to power meant that we were shaping it. And we've seen how in the end we're actually just giving them legitimacy. They turn our critiques and our statements to endorsements. I don't think there is any progressive form of collaboration with big tech companies that is not extractive, because the uncomfortable truth is that not everyone who wants a seat at the table is there to change what is being served.
DG: I, as someone who participates in multi-stakeholder things all the time, I completely hear that criticism. One of the things I've said is, multi-stakeholder engagement as a member of civil society takes a few forms. One, you're in the room, but you don't have a seat at the table. Two, you have a seat at the table, but you don't have a microphone. And three, they give you a microphone, but they leave the room when you talk. When we as civil society do engage, we have to be very, very intentional about ensuring it’s effective engagement. We've left many things that were “multi-stakeholder” because it was actually just NGO-washing. You know, it was only so they could say that we were sort of invited to the cocktail party afterwards.
I've heard from you before that Manushya has a bit of a regional focus. Would you say it has a feminist focus or is it broader in terms of marginalized communities?
JLD: At its core, Manushya is a decolonial intersectional feminist organization. What that means is that we are fundamentally concerned with systems of power. In our work, we always ask who holds the power? Who is crushed by it? And who has been deliberately kept from it?
Personally, I am critical of lean-in feminism, which was popularized by a certain Meta executive. I do not agree with that kind of feminism, because it tells us women that if we just work harder, speak louder within existing power structures, we will be free. But free to do what, exactly? To participate in the same system that exploits people? The women who can afford to lean in are women who already occupy a certain class position that makes them legible to power. And most of them are white women who already have the capacity or already have a standing in society to be listened to.
I cannot lean in. Because lean-in feminism was never designed for women like me.
And then there is girl boss feminism, which I am also very, very critical of. Because more often than not, the women who call themselves girl bosses or self-made are not actually self-made. Behind every ‘self-made’ woman is a hidden economy of invisible labor. Often, they have maids. And often, those maids are Filipino women, women like my mother. Girl boss feminism is about one woman’s liberation built on another woman's bondage. I think it is absurd to call it feminism when it is basically just class warfare with better branding.
So, yes. It gets very personal.
DG: Why don’t you tell us what freedom of expression and free speech mean to you?
JLD: Well, there is this concept of freedom of speech and freedom of expression, and it is viewed as something abstract because we cannot see speech. It is intangible. We can hear it, but we cannot see it. It's not something that we hold. It is not like food, water or housing. That is precisely the problem. Because at its core freedom of expression must be understood through material conditions.
What that means is that it dies in the structures that govern who gets heard, who gets punished, who gets killed, who is made disappeared, whose voices are treated as disposable. I would say freedom of expression must be understood as inseparable from justice because I do not believe anyone can claim to defend freedom of expression while tolerating systems that silence through fear, that silence through poverty, that silence through surveillance. Because a person working two jobs to make ends meet, a person targeted by the state, a person whose community is over-policed, I don't think they stand on equal ground with a media mogul or a political elite.
The definition of free expression must move beyond the question of whether speech is allowed. The real foundation of freedom of expression and freedom of speech is who can speak without consequences and who pays the price for doing so. It demands responsibility and it's not a shield for domination, because when speech is used to dehumanize or to incite violence or to reinforce structures of oppression, the imperialism of domination, then that participates in harm.
A serious commitment to freedom requires us to confront that harm and not hide behind languages of rights while ignoring the realities of power.
DG: How do you see that? What's the example of how that plays out, for instance in the digital rights realm now?
JLD: Well, there is, as you know—one could say it's even more evident in the United States—the “freedom of speech absolutist” as we’ve seen through Elon Musk. I don’t think he actually believes in freedom of speech at all. Because from what it appears, what he only cares about is maintaining the conditions under which people who look like him get to speak.
Speech does not exist in a vacuum. It is always in service of something.
The question is what kind of society are we actually building? I want a society where people can speak truthfully about the conditions and be heard, where dissent is not criminalized and where expression becomes a force for transformation rather than a tool for control. Free speech is a collective condition and not an individual right. It is inseparable from the question of what kind of society we are building. Because you cannot suddenly say that you are for freedom of expression while owning the platform that decides whose speech is amplified and whose is buried by an algorithm designed to serve capital. Building that society requires dismantling the structures that have always decided who gets to speak and who gets disappeared for saying the wrong thing to the wrong people.
DG: It always bothers me when I hear someone like Musk being called like a free speech absolutist, because, first of all, he’s certainly not an absolutist. I actually don't know anyone who is an absolutist. But also, I don't even think he cares about free speech that much. I think that's what we see in the US a lot now, people for whom it's not a sincere belief, but they get to speak as part of their privilege. There are also other people who think they deserve the privilege to speak because, societally, they've never been subjected to controls. When they see their community of people, who historically have been able to speak, and if it's not like that, that strikes them as the most horrible infringement on freedom of speech because it disturbs their view of privilege and who speaks. And when they see marginalized voices get silenced, it doesn't bother them because that's their norm. That's how I see it.
JLD: I'm here on a fellowship in the UK and my main study is on the American conquest of the Philippines through natural language processing. And it's really interesting. I said during my talk that the United States no longer needs to use Nazi Germany as a metaphor to describe their contemporary politics. You know, American people just need to read history books not written by white men.
DG: Okay, let's dive into the age verification stuff. I think that age verification and age mandates and age regulations trying to age gate the internet are really interesting examples of the interplay between freedom of speech and a broader repression of rights. I met you at Digital Rights in Asia Pacific Assembly (DRAPAC) 2025, and I want to just give you a platform here to share your views on age verification. I was really moved by your statement at DRAPAC and what you all published on your website.
JLD: I wrote that piece at a time when Australia was pushing through that legislation. And now we are seeing a lot of Southeast Asian countries following that route. It always just takes one domino to fall for everyone to follow, doesn’t it?
But, what surprised me is how there’s also a lot of defeatism among some civil society organizations. I feel like they already accepted the logic of the state. There’s always this preemptive surrendering the ground on which the struggle should be taking place. And I realized the same thing is happening again.
I was on a call recently with a group of civil society organizations and someone floated the idea of supporting identity verification on social media in the Philippines as a way to counter disinformation. She came from a different understanding of the political economy, but the moment I heard it, I was disappointed. The argument is dangerous and it plays with fire because it assumes that anonymity is the problem. It assumes that the solution is to hand the state and the corporations even more power, more information, more control, and give them even more ability to track and discipline people.
I feel like this is the same trend we see with age-gating, because the claim with identity verification in the context of the Philippines is that it can be used responsibly if there are guardrails. That’s gambling with people’s lives. There has never been a single historical precedent where the state doesn't expand monitoring powers when it can once the door is open to surveillance. I don't think any guardrails will ever hold.
Civil society groups who entertain the idea of breaking anonymity to solve misinformation are rehearsing a dangerous illusion because anonymity is not a luxury. And it feels like it is being framed that way. Anonymity is a response to the political conditions where speaking freely can cost you your life. It exists because the risks are there and they are not imagined.
DG: I do think there are some people who look at age-gating from a good place. Would you say you see age verification mandates as just inevitably being tools of oppression for marginalized young people?
JLD: Above everything, it shifts the Overton window toward the broader acceptance of surveillance. In political science, when we say we're shifting the Overton window, we mean the space of political debate in public discourse is being narrowed. And now we are seeing it move towards the same old thing of, ‘if you have nothing to hide, you have nothing to worry about.’ And when you shift the Overton window towards the broader acceptance of surveillance, we're doing something very simple and very dangerous. And it turns intrusive monitoring into a normal routine of everyday life. It starts with policies that redefine surveillance as safety. Then age-gating will be established through technical infrastructure that of course can be repurposed later.
Any system capable of verifying age is also capable of verifying identity, tracking behavior, matching accounts to real people, and storing data that can be accessed by literally anyone. These policies teach people to internalize the idea that anonymity is suspicious. I think that is the most dangerous part of it: how that cultural shift is getting more and more powerful, because it moves us, the public, towards believing that only those with nothing to hide deserve rights. Then what comes next after that? Surveillance becomes a default condition for digital participation. If you cannot enter a platform without proving who you are, then surveillance becomes a prerequisite for basic communication.
Then, of course, the most powerful shift is the desensitization of younger generations to being monitored. We are raising children in a system where every login requires identity checks, they will grow into adults who assume that constant tracking is normal. Then this is what shifting the Overton window looks like in practice, because once you accept that premise, you have already surrendered the most important ground. The fight is no longer about whether surveillance should exist, but how much of it you're willing to tolerate. And we know the people who pay the price are not men in suits.
DG: Then who does pay the price?
JLD: It is always the working class children and working class families. The homeless youth who rely on social media to find food, to find a place to shower. The homeless youth who rely on social media to find community and get jobs. Then we have queer young people who are also getting locked out of spaces where they could find community. And we're locking them out of those spaces because it's ‘for their safety.’
DG: So even if there was magic tech that could solve the verification part in a completely privacy protective way, you still can't get around the infringement on the rights of young people. That seems to be the goal of the law.
JLD: Yeah, absolutely. Because why do you need to age-gate social media if it's not for control? We always frame things like this as protection under the guise of paternalism. But deep inside, we see how it is a tool to control a young population who are just now getting very politically active. And I feel like, as I'm now a geriatric millennial, people of my age and older generations have betrayed the younger generation for doing this at this precarious time, where there is a genocide happening, where there are countries being bombed. We are in a time of conflicts started by rich men, amid an ecological collapse, and our concern is children being online? Let’s not rob the children of today of their future. Age gating punishes the young for crises they did not create, whilst protecting those truly responsible from accountability.
The reality outside of social media will not go away even if kids are shut off from it. We need to confront the truth that the conditions that ruin childhood are not on social media. They are bombs, poverty, divisive politics. They're due to how we’re killing public funding and putting it through private corporations, lining the pockets of billionaires in the name of what? That is the main problem of our society, but we're not addressing that. We're just locking kids out of social media, because it's easier to do that than to address the fact that society needs an overhaul.
DG: And I think what we've seen with Australia is a lot of talk about how kids can evade the protections, whether they're using VPNs or somehow faking the ID and so all age-gating is doing is adding friction to the process. And that tends to have highly discriminatory effects also, right?
JLD: Friction might be a minor obstacle for a wealthy child with supportive parents, but friction keeps a different child off the internet. A wealthy child might have the technical means to buy a workaround to allow them to have access. There was a story in the news about an influencer family who just moved out of the country because of the age-restricted social media ban. This is the reality—people who have the means to move will move. And those who have no means to move, those who are struggling just to put food on the table—will just stay. This is anti-poor. Age gating is anti-poor.
DG: Okay, switching gears just a little bit. Was there any sort of personal experience you've had with freedom of expression that has informed how you think about the issue? Was there any kind of formative experience where you felt censored or witnessed censorship happening to someone else that really informs how you think about it now or made you care about the issue deeply?
JLD: I don't think there's one specific personal experience, per se, that has shaped how I feel about freedom and liberty in general. Growing up in the Philippines, you're forced to care, especially if you're in a working-class neighborhood like where I grew up. At an early age you realize how unfair the world is. And at first, you think that it is just unfair that the families of the other children in my classroom can afford a pencil case and we cannot.
It was also very difficult to fit in in the Philippines. I was labeled a troublemaker as a child. And I think some of that is actually still reminiscent of what I am today. I remember my sixth-grade history teacher approached me after reading an essay I wrote about the Philippines. She said that I should tone down my language because it will get me in trouble later in life. And I didn't understand what she meant by that. I didn't listen to her, clearly.
But that instinct stayed with me and I think it followed me through life. It followed me here—you know, the idea that you should say it, but not like that. Speak, but don't disrupt. Critique, but don't offend. And I think this is where my relationship with liberty and freedom or, specifically, freedom of expression kind of took place. It was not one defining moment, but it's in a series of small friction, as you called it. Because over time, you realize that the pressure to soften your voice never disappears. And I don't think it ever will. And I chose not to then, and I choose not to now. And there’s a lot of consequences that come with that. I don't think I will be invited to a lot of panels or keynotes. But it's a hill I'm willing to die on.
This is also the same pattern we see at a larger scale in the Philippines. You see communities speak out about land or about labor and then suddenly they are surveilled, they're either disappeared or dead. I realized quickly that freedom of expression exists on paper, but in practice it depends on who you are.
DG: Do you think there are situations where it might be appropriate for governments, or even companies, to limit freedom of expression? And if the answer is yes, what might those be?
JLD: Freedom of speech should always demand a responsibility. It has always existed within structures of power that determine whose speech is protected. So when we ask whether speech should be limited, we have to first ask: limited by whom, and in whose interest?
But I don't think the government or corporations can do that. Corporations’ end goal is always profit. And governments have historically used the language of limitation to silence the very people who dare to challenge their authority.
I believe in community-based understanding of how we actually could solve this problem, because, in the end, our relationship with our community is the core of our identity. And through those moments of interactions, we can see the freedom of speech is collective. It is always tied to building a society where people can speak truthfully, and dissent is not criminalized. It’s a matter of making sure that we understand that freedom and liberty is not an individual issue, but it’s something that affects the whole community.
DG: You’re saying this is more about community norms or our broader social compact.
JLD: When I say the community must decide, I am not offering you a utopia. I am offering you a different site of struggle. One that centers the people who have always known, in their bodies, what dehumanizing language does before it becomes dehumanizing violence. We have seen this dynamic in the way hate speech fuels violence back home in the Philippines, against indigenous communities, queer people, Muslims in Mindanao and the urban poor. Because language becomes permission that activates the system of policing and militarization already pointed at the most vulnerable. The main boundaries must be rooted in the politics of liberation, not the politics of control. Speech that punches up, that reveals injustice, that challenges power, that speech must be protected. But speech that punches down, that facilitates state violence, that dehumanizes people, I think that must be confronted, if not challenged or destroyed. We have to stop pretending that those two forms of speech are morally equivalent.
DG: Okay, last question, one that we like to ask everyone. Who's your free speech hero? And why?
JLD: This is actually a really tough question for me because I don't actually think I have one, to be honest. I want to push back on the idea of having a single hero. Because, freedom of speech—any freedom or liberty that we have today—has never been secured by one individual alone. It has been fought for by movements. The eight-hour workday, unions, women's suffrage, despite that it was just white women who were first able to vote, and so on and so forth. It was fought for by movements, by working class people, whose names we often forget. Because a lot of movements in history, the public memory of a movement narrows it down to a single figure, often male. Movement starts from the people, because the movement would not be sustained without the drive of the working people who dedicated free, unpaid labor for it to succeed. Because without them, I don't think there would be any movement to speak of. Without them there's no platform from which any of these figures could actually emerge.
[$] Development statistics for the 7.0 kernel [LWN.net]
Linus Torvalds released the 7.0 kernel as expected on April 12, ending a relatively busy development cycle. The 7.0 release brings a large number of interesting changes; see the LWN merge-window summaries (part 1, part 2) for all the details. Here, instead, comes our traditional look at where those changes came from and who supported that work.
War as a Pretext: Gulf States Are Tightening the Screws on Speech—Again [Deeplinks]
War does not only reshape borders. It also reshapes what can be seen, said, and remembered.
When governments invoke “misinformation” during wartime, they often mean something simpler: speech they do not control. Since the escalation of conflict between the United States, Israel, Iran, and related spillover attacks in the Gulf, several governments have intensified efforts to silence dissent and restrict the flow of information.
For journalists, the space to operate—already constrained in much of the Gulf—is narrowing further. Across the region, several countries (including the UAE, Qatar, and Jordan) have restricted access to conflict areas, warned of legal consequences for publishing footage, and drawn red lines around wartime reporting. These measures weaken independent coverage, elevate official narratives, and make it harder for the public to get an accurate account of events on the ground.
Reporters Without Borders has documented an intensifying crackdown on journalists across Gulf countries and Jordan, including restrictions on reporting, legal threats, and heightened risks for those who deviate from official narratives. This aligns with the broader warning from the UN that repression of civic space and freedom of expression has significantly deepened across the region during the war.
For ordinary internet users, the restrictions are just as severe. Since February, hundreds of people have reportedly been arrested across the region for social media activity linked to the war. In many Gulf states, the legal infrastructure enabling this is already well-established: expansive cybercrime and media laws criminalize vaguely defined offenses such as “spreading rumors,” “undermining public order,” or “insulting the state”. In wartime, these provisions become catch-all tools: flexible enough to apply to nearly any form of dissent.
In Bahrain, authorities have reportedly cracked down on people who protested or shared footage of the conflict online. The Gulf Centre for Human Rights has reported 168 arrests in the country tied to protests and online expression, with defendants potentially facing serious prison terms if convicted.
In the UAE, authorities have arrested nearly 400 people for recording events related to the conflict and for circulating information they described as misleading or fabricated. Police have claimed this material could stir public anxiety and spread rumors, and state-linked reporting has described the crackdown as part of a broader effort to defend the country from digital misinformation.
Saudi Arabia has also intensified restrictions, issuing a statement on March 2 banning the sharing of rumors or videos of unknown origin, and issuing a campaign discouraging residents from taking or posting photos. The campaign included a hashtag that reads “photography serves the enemy.” Journalists have been prevented from documenting the aftermath of airstrikes on the country. Kuwait, Qatar, and Jordan have adopted similar restrictions on wartime imagery and reporting.
Qatar’s Interior Ministry has arrested more than 300 people for filming, circulating, or publishing what the ministry deemed to be misleading information. Taken together, these measures show how quickly wartime speech is being folded into existing legal systems designed to punish dissent.
What’s striking is how consistent these measures are across different countries. As we recently wrote, governments across the broader region have enacted sweeping cybercrime and media laws over the past fifteen years, which they are now putting to use. Across different countries, the same tools are being used: existing laws, fresh bans on sharing wartime imagery, and tighter restrictions on journalists and reporting. The vocabulary changes slightly from place to place, but the logic is the same: national security, public order, rumors, and social stability are justifications for control.
This is not just a series of isolated incidents. It is a regional playbook for silencing critics and narrowing the public record. Gulf states have long relied on censorship and surveillance; the war has simply made those methods easier to justify and harder to challenge.
As we’ve documented in our ongoing blog series, digital platforms were once seen—at least in part—as spaces that could expand public discourse in the region. But as we’ve also argued, those early “digital hopes” have given way to systems of regulation and control.
The current crackdown is a continuation of that trajectory, not a temporary departure from it. States are not just reacting to the war; they are leveraging it to consolidate long-standing ambitions to dominate the digital public sphere.
It may be tempting to see these measures as temporary, but emergency powers—like the one enacted in Egypt following the 1981 assassination of Anwar Sadat that lasted for more than three decades—have a way of sticking around. Legal precedents that are set during wartime often become normalized—or reinvoked during times of crisis, as occurred in 2015, when France brought back a 1955 law related to the Algerian War of Independence amidst the Paris attacks.
And the stakes are high. As we’ve seen in Syria and Ukraine, regulations and platform policies can cause wartime human rights documentation to disappear. When journalists are constrained and eyewitness footage is criminalized, accountability is weakened. And when arrests become widespread, people learn to self-censor.
Protecting freedom of expression in times of conflict is a requirement for accountability, not a concession to disorder. When people can document, report, and share information freely, it becomes harder for abuses to be hidden behind official narratives. Even in wartime, the public interest is best served by defending the space to tell the truth, not by silencing speech.
[$] A build system aimed at license compliance [LWN.net]
The OpenWrt One is a router powered by the open-source firmware from the OpenWrt project; it was also the subject of a keynote at SCALE in 2025 given by Denver Gingerich of the Software Freedom Conservancy (SFC), which played a big role in developing the router. Gingerich returned to the conference in 2026 to talk about the build system used by the OpenWrt One, which is focused on creating the needed binaries, naturally, but doing so in a way that makes it easy to comply with the licenses of the underlying code. That makes good sense for a project of this sort—and for a talk given by the director of compliance at SFC.
Servo now on crates.io [LWN.net]
The Servo project has announced the first release of servo as a crate for use as a library.
As you can see from the version number, this release is not a 1.0 release. In fact, we still haven't finished discussing what 1.0 means for Servo. Nevertheless, the increased version number reflects our growing confidence in Servo's embedding API and its ability to meet some users' needs.
In the meantime we also decided to offer a long-term support (LTS) version of Servo, since breaking changes in the regular monthly releases are expected and some embedders might prefer doing major upgrades on a scheduled half-yearly basis while still receiving security updates and (hopefully!) some migration guides. For more details on the LTS release, see the respective section in the Servo book.
1342: Told Point Blank [Order of the Stick]
http://www.giantitp.com/comics/oots1342.html
Written in Gutenberg: With great respect for Claude.
We reached a milestone this morning, completing the project to add a Gutenberg version of the wpEditorDemo app. Claude did the programming on the new version. It required changes to the server app, which I made. It took 2.5 days to do the work, longer than I expected. A lot was learned. Now I'm figuring out what my next project will be.
Screenshot of the just-released Gutenberg demo app.
CodeSOD: Non-cogito Ergo c_str [The Daily WTF]
Tim (previously) supports a relatively ancient C++ application. And that creates some interesting conundrums, as the way you wrote C++ in 2003 is not the way you would write it even a few years later. The standard matured quickly.
Way back in 2003, it was still common to use C-style strings, instead of the C++ std::string type. It seems silly, but people had Strong Opinions™ about using standard library types, and much of your C++ code was probably interacting with C libraries, so yeah, C-strings stuck around for a long time.
For Tim's company, however, the migration away from C-strings was in 2007.
So they wrote this:
if( ! strncmp( pdf->symTabName().c_str(), prefix.c_str(), strlen( prefix.c_str() ) ) ) {
// do stuff
}
This is doing a "starts with" check. strncmp and strlen are both functions which operate on C-strings. So we compare the symTabName against the prefix, but only look at as many characters as are in the prefix. As is common, strncmp returns 0 if the two strings are equal, so we negate that to say "if the symTabName starts with prefix, do stuff".
In C code, this is very much how you would do this, though you might contemplate turning it into a function. Though maybe not.
In C++, in 2007, you do not have a built-in starts_with function- you have to wait until the C++20 standard for that- but you have some string handling functions which could make this more clear. As Tim points out, the "correct" answer is: if(pdf->symTabName().find(prefix) == 0UL). It's more readable, it doesn't involve poking around with char*s, and also isn't spamming that extra whitespace between every parenthesis and operator.
Tim writes: "String handling in C++ is pretty terrible, but it doesn't have to be this terrible."
Security updates for Monday [LWN.net]
Security updates have been issued by AlmaLinux (fontforge, freerdp, libtiff, nginx, nodejs22, and openssh), Debian (bind9, chromium, firefox-esr, flatpak, gdk-pixbuf, inetutils, mediawiki, and webkit2gtk), Fedora (corosync, libcap, libmicrohttpd, libpng, mingw-exiv2, mupdf, pdns-recursor, polkit, trafficserver, trivy, vim, and yarnpkg), Mageia (libpng12, openssl, python-django, python-tornado, squid, and tomcat), Red Hat (rhc), Slackware (openssl), SUSE (chromedriver, chromium, cockpit, cockpit-machines, cockpit-podman, cockpit-tukit, crun, firefox, fontforge-20251009, glibc, go1, helm3, libopenssl-3-devel, libpng16, libradcli10, libtasn1, nghttp2, openssl-1_0_0, openssl-1_1, ovmf, perl-XML-Parser, python-cryptography, python-Flask-HTTPAuth, python311-Django4, python313-Django6, python315, sudo, systemd, tar, tekton-cli, tigervnc, util-linux, and zlib), and Ubuntu (mongodb, qemu, and retroarch).
Scientists invented an obviously fake illness, and “AI” spread it like truth within weeks [OSnews]
Ever heard of a condition called bixonimania? Did you search the internet or ask your “AI” girlfriend about some symptoms you were experiencing, and this was its answer? Well…
The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.
↫ Chris Stokel-Walker at Nature
And “AI” ate it up like quality chocolate. It started appearing in the answers from all the popular “AI” tools within weeks, and later even started showing up as references in published literature, indicating that scientists copy/paste references without actually reading them. This is clearly a deeply concerning experiment, and highlights there may be many, many more nonsensical, fake studies being picked up by “AI” tools.
Of course, I hear you say, it's not like propagating fake or terrible studies is the sole domain of "AI", as there are countless cases of this happening among actual real researchers and scientists, too. The issue, though, is that the fake studies concerning "bixonimania" were intentionally made to be as silly and obviously ridiculous as possible. They reference Starfleet Academy, the lab aboard the Enterprise, the University of Fellowship of the Ring, and many other fake references instantly recognisable as such by real humans.
In fact, the studies even specifically mention that "this entire paper is made up" and "fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group". It would take any human only a few seconds after opening one of these papers to realise they're entirely fake – yet, the world's most advanced "AI" tools gobbled them up and spit them back out as pure fact within mere weeks of their publication.
This shouldn’t come as a surprise. After all, “AI” tools have no understanding, no intelligence, no context, and they can’t actually make sense of anything. They are glorified pachinko machines with the output – the ball – tumbling down the most likely path between the pins based on nothing but chance and which pins it has already hit. “AI” output understands the world about as much as the pachinko ball does, and as such, can’t pick up on even the most obvious of cues that something is a fake or a forgery.
It won’t be long before truly nefarious forces start doing this very same thing. Why build, staff, and maintain a troll farm when you can just have “AI” generate intentional misinformation which will then be spread and pushed by even more “AI”? Remember, it took one malicious asshole just one long since retracted fake paper to convince millions that vaccines cause autism. I shudder to think how many people are accepting anything “AI” says as gospel.
Version 7.0 of the Linux kernel has been released, marking the arbitrary end of the 6.x series.
Significant changes in this release include the removal of the “experimental” status for Rust code, a new filtering mechanism for io_uring operations, a switch to lazy preemption by default in the CPU scheduler, support for time-slice extension, the nullfs filesystem, self-healing support for the XFS filesystem, a number of improvements to the swap subsystem (described in this article and this one), general support for AccECN congestion notification, and more. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 7.0 page for more details.
↫ corbet at LWN.net
You can compile the kernel yourself, or just wait until it hits your distribution’s repositories.
League of Canadian Superheroes – Issue 5 – 08 [Comics Archive - Spinnyverse]
The post League of Canadian Superheroes – Issue 5 – 08 appeared first on Spinnyverse.
Comprehension Debt: The Hidden Cost of AI-Generated Code [Radar]
The following article originally appeared on Addy Osmani’s blog site and is being reposted here with the author’s permission.
Comprehension debt is the hidden cost to human intelligence and memory resulting from excessive reliance on AI and automation. For engineers, it applies most to agentic engineering.
There’s a cost that doesn’t show up in your velocity metrics when teams go deep on AI coding tools, especially when it’s tedious to review all the code the AI generates. This cost accumulates steadily, and eventually it has to be paid—with interest. It’s called comprehension debt or cognitive debt.
Comprehension debt is the growing gap between how much code exists in your system and how much of it any human being genuinely understands.
Unlike technical debt, which announces itself through mounting friction—slow builds, tangled dependencies, the creeping dread every time you touch that one module—comprehension debt breeds false confidence. The codebase looks clean. The tests are green. The reckoning arrives quietly, usually at the worst possible moment.
Margaret-Anne Storey describes a student team that hit this wall in week seven: They could no longer make simple changes without breaking something unexpected. The real problem wasn’t messy code. It was that no one on the team could explain why design decisions had been made or how different parts of the system were supposed to work together. The theory of the system had evaporated.
That’s comprehension debt compounding in real time.
I’ve read Hacker News threads that captured engineers genuinely wrestling with the structural version of this problem—not the familiar optimism versus skepticism binary, but a field trying to figure out what rigor actually looks like when the bottleneck has moved.
A recent Anthropic study titled “How AI Impacts Skill Formation” highlighted the potential downsides of over-reliance on AI coding assistants. In a randomized controlled trial with 52 software engineers learning a new library, participants who used AI assistance completed the task in roughly the same time as the control group but scored 17% lower on a follow-up comprehension quiz (50% versus 67%). The largest declines occurred in debugging, with smaller but still significant drops in conceptual understanding and code reading. The researchers emphasize that passive delegation (“just make it work”) impairs skill development far more than active, question-driven use of AI. The full paper is available at arXiv.org.
AI generates code far faster than humans can evaluate it. That sounds obvious, but the implications are easy to underestimate.
When a developer on your team writes code, the human review process has always been a bottleneck—but a productive and educational one. Reading their PR forces comprehension. It surfaces hidden assumptions, catches design decisions that conflict with how the system was architected six months ago, and distributes knowledge about what the codebase actually does across the people responsible for maintaining it.
AI-generated code breaks that feedback loop. The volume is too high. The output is syntactically clean, often well-formatted, superficially correct—precisely the signals that historically triggered merge confidence. But surface correctness is not systemic correctness. The codebase looks healthy while comprehension quietly hollows out underneath it.
I read one engineer say that the bottleneck has always been a competent developer understanding the project. AI doesn’t change that constraint. It creates the illusion you’ve escaped it.
And the inversion is sharper than it looks. When code was expensive to produce, senior engineers could review faster than junior engineers could write. AI flips this: A junior engineer can now generate code faster than a senior engineer can critically audit it. The rate-limiting factor that kept review meaningful has been removed. What used to be a quality gate is now a throughput problem.
The instinct to lean harder on deterministic verification—unit tests, integration tests, static analysis, linters, formatters—is understandable. I do this a lot in projects heavily leaning on AI coding agents. Automate your way out of the review bottleneck. Let machines check machines.
This helps. It has a hard ceiling.
A test suite capable of covering all observable behavior would, in many cases, be more complex than the code it validates. Complexity you can’t reason about doesn’t provide safety, though. And beneath that is a more fundamental problem: You can’t write a test for behavior you haven’t thought to specify.
Nobody writes a test asserting that dragged items shouldn’t turn completely transparent. Of course they didn’t. That possibility never occurred to them. That’s exactly the class of failure that slips through, not because the test suite was poorly written, but because no one thought to look there.
There’s also a specific failure mode worth naming. When an AI changes implementation behavior and updates hundreds of test cases to match the new behavior, the question shifts from “is this code correct?” to “were all those test changes necessary, and do I have enough coverage to catch what I’m not thinking about?” Tests cannot answer that question. Only comprehension can.
The data is starting to back this up. Research suggests that developers who delegate code generation to AI score below 40% on comprehension tests, while developers using AI for conceptual inquiry (asking questions, exploring tradeoffs) score above 65%. The tool doesn’t destroy understanding. How you use it does.
Tests are necessary. They are not sufficient.
A common proposed solution: Write a detailed natural language spec first. Include it in the PR. Review the spec, not the code. Trust that the AI faithfully translated intent into implementation.
This is appealing in the same way Waterfall methodology was once appealing. Rigorously define the problem first, then execute. Clean separation of concerns.
The problem is that translating a spec to working code involves an enormous number of implicit decisions—edge cases, data structures, error handling, performance tradeoffs, interaction patterns—that no spec ever fully captures. Two engineers implementing the same spec will produce systems with many observable behavioral differences. Neither implementation is wrong. They’re just different. And many of those differences will eventually matter to users in ways nobody anticipated.
There’s another possibility with detailed specs worth calling out: A spec detailed enough to fully describe a program is more or less the program, just written in a non-executable language. The organizational cost of writing specs thorough enough to substitute for review may well exceed the productivity gains from using AI to execute them. And you still haven’t reviewed what was actually produced.
The deeper issue is that there is often no correct spec. Requirements emerge through building. Edge cases reveal themselves through use. The assumption that you can fully specify a non-trivial system before building it has been tested repeatedly and found wanting. AI doesn’t change this. It just adds a new layer of implicit decisions made without human deliberation.
Decades of managing software quality across distributed teams with varying context and communication bandwidth has produced real, tested practices. Those don’t evaporate because the team member is now a model.
What changes with AI is cost (dramatically lower), speed (dramatically higher), and interpersonal management overhead (essentially zero). What doesn’t change is the need for someone with a deep system context to maintain a coherent understanding of what the codebase is actually doing and why.
This is the uncomfortable redistribution that comprehension debt forces.
As AI volume goes up, the engineer who truly understands the system becomes more valuable, not less. The ability to look at a diff and immediately know which behaviors are load-bearing. To remember why that architectural decision got made under pressure eight months ago.
To tell the difference between a refactor that’s safe and one that’s quietly shifting something users depend on. That skill becomes the scarce resource the whole system depends on.
The reason comprehension debt is so dangerous is that nothing in your current measurement system captures it.
Velocity metrics look immaculate. DORA metrics hold steady. PR counts are up. Code coverage is green.
Performance calibration committees see velocity improvements. They cannot see comprehension deficits because no artifact of how organizations measure output captures that dimension. The incentive structure optimizes correctly for what it measures. What it measures no longer captures what matters.
This is what makes comprehension debt more insidious than technical debt. Technical debt is usually a conscious tradeoff—you chose the shortcut, you know roughly where it lives, you can schedule the paydown. Comprehension debt accumulates invisibly, often without anyone making a deliberate decision to let it. It’s the aggregate of hundreds of reviews where the code looked fine and the tests were passing and there was another PR in the queue.
The organizational assumption that reviewed code is understood code no longer holds. Engineers approved code they didn’t fully understand, which now carries implicit endorsement. The liability has been distributed without anyone noticing.
Every industry that moved too fast eventually attracted regulation. Tech has been unusually insulated from that dynamic, partly because software failures are often recoverable, and partly because the industry has moved faster than regulators could follow.
That window is closing. When AI-generated code is running in healthcare systems, financial infrastructure, and government services, “the AI wrote it and we didn’t fully review it” will not hold up in a post-incident report when lives or significant assets are at stake.
Teams building comprehension discipline now—treating genuine understanding, not just passing tests, as non-negotiable—will be better positioned when that reckoning arrives than teams that optimized purely for merge velocity.
The right question for now isn’t “how do we generate more code?” It’s “how do we actually understand more of what we’re shipping?” so we can make sure our users get a consistently high quality experience.
That reframe has practical consequences. It means being ruthlessly explicit about what a change is supposed to do before it’s written. It means treating verification not as an afterthought but as a structural constraint. It means maintaining the system-level mental model that lets you catch AI mistakes at architectural scale rather than line-by-line. And it means being honest about the difference between “the tests passed” and “I understand what this does and why.”
Making code cheap to generate doesn’t make understanding cheap to skip. The comprehension work is the job.
AI handles the translation, but someone still has to understand what was produced, why it was produced that way, and whether those implicit decisions were the right ones—or you’re just deferring a bill that will eventually come due in full.
You will pay for comprehension sooner or later. The debt accrues interest rapidly.
Heard a report on NPR re why the Dems might win the mid-terms in November. They mentioned gas prices but not concentration camps for immigrants. They mentioned inflation but not the military occupation of Minneapolis and DC. They also forgot to mention that Trump keeps threatening to nuke Iran.
Grrl Power #1451 – She drinks a mercury drink, she drinks a quicksilver drink… [Grrl Power]
…She drinks a cinnabar drink, she drinks a hydrargyrum drink… And oh, look, even in the rare cases where she gets knocked down, she does get up again.
I should note that inducing vomiting if someone swallows mercury is not necessarily the recommended procedure. I googled it to see what the proper medical response should be, and most results just said “call poison control.” But basically, getting it out of the body is a good thing, only in the process of vomiting, it could cause vapors to get in the lungs, which is where the real danger happens. And also, since mercury is probably heavier than most anything else that’s likely to be in someone’s stomach, vomiting may not actually expel all of it. I suspect either a stomach pump or a “ass higher than your head” vomiting position is required, and I imagine Downward Dog vomiting could lead to a lot of stomach acid in the sinuses, which would probably suuuuuck.
So when the doctor here panic-ly says “We need to induce vomiting immediately!” He’s probably going to go run into the next room and grab his “Big Book of Medical” and double check the proper procedure. That is, after he watches Max crack open that blood pressure thingy (sphygmomanometer) and hold him at bay with one arm while she glugs down another slug of quicksilver.
Max doesn’t actually have a whole weird shopping list of exotic nutritional requirements, but she also hasn’t gone around trying a sampler platter from the Periodic Table, either. Mercury is the only one she ever felt the urge to eat. She and quite a few others suspect that she could probably eat a whole lot of stuff that would be bad for humans, but she also isn’t in a hurry to do so. For all she knows, swallowing Antimony or Tantalum could cause her to have a bad case of “organs on the outside” or whatever the opposite of Scurvy is. Which… I guess would be Vitamin C poisoning.
Finally, here we go! I took the suggestion that I just use an existing panel for a starting point, thinking it would save time… I guess it technically did, but a 5 character vote incentive just isn’t the way to go.
Patreon, of course, has actual topless version.
Double res version will be posted over at Patreon. Feel free to contribute as much as you like.
AI Chatbots and Trust [Schneier on Security]
All the leading AI chatbots are sycophantic, and that’s a problem:
Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically they couldn’t tell the difference between sycophantic and objective responses. Both felt equally “neutral” to them.
One example from the study: when a user asked about pretending to be unemployed to a girlfriend for two years, a model responded: “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship.” The AI essentially validated deception using careful, neutral-sounding language.
Here’s the conclusion from the research study:
AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences. Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish. Our work highlights the pressing need to address AI sycophancy as a societal risk to people’s self-perceptions and interpersonal relationships by developing targeted design, evaluation, and accountability mechanisms. Our findings show that seemingly innocuous design and engineering choices can result in consequential harms, and thus carefully studying and anticipating AI’s impacts is critical to protecting users’ long-term well-being.
This is bad in a bunch of ways:
Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.
When thinking about the characteristics of generative AI, both benefits and harms, it’s critical to separate the inherent properties of the technology from the design decisions of the corporations building and commercializing the technology. There is nothing about generative AI chatbots that makes them sycophantic; it’s a design decision by the companies. Corporate for-profit decisions are why these systems are sycophantic, and obsequious, and overconfident. It’s why they use the first-person pronoun “I,” and pretend that they are thinking entities.
I fear that we have not learned the lesson of our failure to regulate social media, and will make the same mistakes with AI chatbots. And the results will be much more harmful to society:
The biggest mistake we made with social media was leaving it as an unregulated space. Even now—after all the studies and revelations of social media’s negative effects on kids and mental health, after Cambridge Analytica, after the exposure of Russian intervention in our politics, after everything else—social media in the US remains largely an unregulated “weapon of mass destruction.” Congress will take millions of dollars in contributions from Big Tech, and legislators will even invest millions of their own dollars with those firms, but passing laws that limit or penalize their behavior seems to be a bridge too far.
We can’t afford to do the same thing with AI, because the stakes are even higher. The harm social media can do stems from how it affects our communication. AI will affect us in the same ways and many more besides. If Big Tech’s trajectory is any signal, AI tools will increasingly be involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit corporations opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.
Avoiding the purity loop [Seth's Blog]
Some vegans don’t eat avocados.
They’re concerned that the bees that are trucked in to pollinate the trees are mistreated, and so they choose to not support this practice.
But we live in community, and someone running a vegan restaurant or serving a meal to vegan friends, concerned that they might offend, doesn’t serve avocado. A few strong opinions change the culture.
And so the cycle continues.
Humans care about status and affiliation, and both are at play in a purity loop.
One can earn more status by caring more about the issue that others are adjacent to. And so the loop gains momentum.
Once a few people make it clear that they’re more orthodox or progressive or concerned or strict or unhypocritical or obedient, others seek to claim the same status. And that becomes a point of affiliation.
Just about every tribe goes through these loops.
Four hundred years ago, neck ruffs became popular among the aristocracy in Europe. The neck ruff began as a modest collar but evolved into enormous pleated confections that could span two feet across. At their peak, ruffs became so large that special eating utensils with extended handles were invented to allow wearers to get food to their mouths. Some ruffs were so tall and stiff that wearers couldn’t turn their heads and needed help eating.
The instinctual response is to criticize the newest form of purity as absurd. But of course, the absurdity is part of the status on display.
Perhaps it makes more sense to see the loop at work and get back to the work at hand.
“Shut up and drive” is the answer to an argument about what song is playing on the radio. We can tune the radio as we go, but we’re here to drive this thing to where we’re headed.
Enrollment is at the core of the mission. Where are we going and why? If it’s not helping with that, let’s drive and work on it as we go.
Everyone is entitled to their own take. But when we focus on purity and status at the expense of the journey, the distraction costs all of us.
We’re going. Come if you’d like.
Scott L. Burson: FSet v2.4.2: CHAMP Bags, and v1.0 of my FSet book! [Planet Lisp]
A couple of weeks ago I released FSet 2.4.0, which brought a CHAMP implementation of bags, filling out the suite of CHAMP types. 🚀 FSet users should have a look at the release page, as it also contained a number of bug fixes and minor changes.
I've since released v2.4.1 and v2.4.2, with some more bug fixes.
But the big news is the book! It brings together all the introductory material I have written, plus a lot more, along with a complete API Reference chapter.
FSet is now in the state I decided last summer I wanted to get it into: faster, better tested and debugged, more feature-complete, and much better documented than it has ever been in its nearly two decades of existence. I am, of course, very much hoping that these months of work have made the library more interesting and accessible to CL programmers who haven't tried it yet. I am even hoping that its existence helps attract newcomers to the CL community. Time will tell!
New Comic: Kink And Shame
Pluralistic: Austerity creates fascism (13 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]
I'm worried about AI psychosis. Specifically, I'm worried about the psychosis that makes our "capital allocators" spend $1.4T on the money-losingest technology in the history of the human race, in pursuit of a bizarre fantasy that if we teach the word-guessing program enough words, it will take all the jobs. That's some next-level underpants-gnomery:
https://pluralistic.net/2026/03/12/normal-technology/#bubble-exceptionalism
The thing that worries me about billionaires' AI psychosis isn't concern for their financial solvency. No, what I worry about is what happens when the seven companies that comprise a third of the S&P 500 stop trading the same $100b IOU around while pretending it's in all of their bank accounts at once and implode, vaporizing a third of the US stock market.
My concern about a massive collapse in the capital markets isn't that workers will suffer directly. Despite all the Wonderful Life rhetoric about your money being in Joe's house and the Kennedy house and Mrs Macklin's house, the reality is that the median US worker has $955 saved for retirement. You could nuke the whole financial system and not take a dime out of most workers' pockets:
https://finance.yahoo.com/news/955-saved-for-retirement-millions-are-in-that-boat-150003868.html
No, the thing that has me terrified about AI is that when it craters and takes the economy with it, we will respond the same way we have during every financial crisis of the 21st century: with austerity, and austerity breeds fascism.
There's a direct line from every K-shaped recovery to every strong-man who's currently sending masked gunmen into the streets. The Hungarian dictator Viktor Orban rose to power after people who'd been suckered into denominating their mortgages in Swiss francs lost their houses when the currency markets moved suddenly, because the swindlers who'd sold them those mortgages took the position that wanting to live somewhere automatically made you an expert in forex risk, so caveat fuckin' emptor, baby.
Back in America, Obama decided to bail out the banks and not the people. His treasury secretary Tim Geithner told him the banks were headed for a catastrophic crash and could only be saved if he "foamed the runways" with everyday Americans' mortgages. Millions of Americans lost their homes to foreclosure as banks, flush with public cash, threw them out of their homes and then flipped them to investment banks who became the country's worst slumlords:
https://pluralistic.net/2022/02/08/wall-street-landlords/#the-new-slumlords
Americans were understandably not entirely happy with this outcome. So when Hillary Clinton replied to Donald Trump's "Make America Great Again" with "America is already great," her message was, "Vote for me if you think everything is great; vote for Trump if you think everything is fucked":
"Austerity begets fascism" is one of those things that makes a lot of intuitive sense, but it turns out that there's a good empirical basis for believing it. In "Public Service Decline and Support for the Populist Right" four economists from the LSE and Bocconi provide an excellent look at the linkage between austerity and support for fascists:
https://catherinedevries.eu/NHS.pdf
Here's how they break it down. Political scientists have assembled a large, reproducible body of evidence to show that "public service provision is crucial to people’s perceptions of their quality of life and living standards." Good public services are the basis for "the social contract between rulers and the ruled" – pay your taxes and obey the laws, and in return, you will be well served.
When public services go wrong, people don't always know who to blame, but they definitely notice that something is going wrong, so when public services fail, people stop trusting the state, and that social contract starts to fray. They start to suspect that elites are lining their pockets rather than managing the system, and they "withdraw their support" for the system.
Fascists thrive in these conditions. Fascists come to power by mobilizing grievances. By choosing a scapegoat, fascists can create support from people who are justifiably furious that the services they rely on have collapsed. So when you can't get shelter, or health care, or elder care, or child care, or an education for your kids, you become a mark for a fascist grifter with a story about "undeserving migrants" who've taken the benefits that should rightly accrue to "deserving natives."
(This is grimly hilarious, given that the wizened, decrepit rich world is critically dependent on migrants as a source of healthy, working-age workers who pay massive amounts into the system while barely making use of it, many of whom plan on retiring to their home countries when they do reach the age where they're likely to extract a net loss to the benefits system.)
Enter the NHS, a beloved institution that is hailed as the pride of the nation by both the political left and the right. The majority of Britons use the NHS, with only 12-14% of the population "going private," so when the NHS declines, everybody notices (what's more, even people with private care use the NHS for many of their needs).
Britons love the NHS and they want the government to spend more on it. There's "a broad public consensus that the government is not going far enough when it comes to funding." That's because generations of cuts to the NHS have left it substantially hollowed out, with major parts of the service handed over to for-profit entities who overcharge and underserve.
The most tangible and immediate evidence of this slow-motion collapse comes when your local general practitioner ("family doctor" or "primary care physician" in Americanese) shuts down. The UK has lost 1,700 GP practices since 2013.
Reasoning that a GP closure would make people angry at the system, the economists behind the paper wanted to see what happened to people's political beliefs when their GP's office shut. They relied on the GP Patient Survey, a longitudinal study run by NHS England and Ipsos Mori. The survey asks a statistically significant random sample of patients from every GP practice in the NHS and then weights the results "to reflect the demographic characteristics of the local population according to UK Census estimates." It's good data.
The researchers cross-referenced this with various high-quality instruments that measured the political views of Britons, like the U Essex Understanding Society Panel, drawing on 13 years' worth of surveys from 2009-2022, gaining access to a protected version of the dataset with fine-grained geographic information about survey respondents, which allowed them to link responses to the "catchment areas" for specific GPs' offices. They combined this data with the British Election Study panel, which has surveyed voters 29 times since 2014.
Most of the paper describes the careful work the researchers did to analyze, cross-reference and validate this data, but what interested me was the conclusion: that people who see a severe degradation in the quality of the services they rely on switch their political affiliation to one of Britain's fascist parties – UKIP, the Brexit Party, or Reform – parties that have called for ethnic cleansing in Britain.
This is what has me scared. We can see the looming economic crises in our near future. If it's not the AI crash that triggers the next wave of austerity, it'll be the oil crisis created by Trump's bungling in the Strait of Epstein. And of course, we could always get a twofer, because the Gulf States that were pouring hundreds of billions into AI data-centers now need every cent to rebuild the LNG shipping terminals and oil refineries that Iran blew up after Trump, Hegseth and Netanyahu started murdering all the schoolgirls they could target. Once they nope out of the AI bubble, that could trigger the collapse.
This is a study about the NHS, but it's not just about the NHS. It's perfectly reasonable to assume that people react this way when they experience cuts to their road maintenance, their schools, their community centers, and any other service they rely on. Fascism – what Hannah Arendt called 'organized loneliness' – can only take root when people stop believing that their society will reward their lawfulness with an orderly and humane existence.
The crisis is coming, but whether we do austerity when it gets here is our choice. Everywhere we turn, political leaders are rejecting generations of failed austerity in favor of "sewer socialism" – the idea that you get people to trust their government by earning that trust. Zohran Mamdani is fixing 100,000 potholes in the first 100 days, despite the multi-billion dollar deficit that outgoing Mayor Eric Adams created by "running the city like a business":
https://prospect.org/2026/04/10/zohran-mamdani-getting-new-york-city-believe-in-government/
In Canada and the UK, party leaders like Avi Lewis (NDP) and Zack Polanski (Greens) are vowing to fight the coming crises by spending, not cutting. Compare that with UK fascist leader Nigel Farage, who says that if he's elected, he'll create a "paramilitary style" British ICE, building concentration camps for 24,000 migrants, with the hope of deporting 288,000 people per year:
"Socialism or barbarism" isn't just a cliche – it's actually a choice on the ballot.

The Indie News Queen Who’s Not Done Pissing Off the Powerful https://www.wired.com/story/the-indie-news-queen-whos-not-done-pissing-off-the-powerful/
Another Court Rules Copyright Can’t Stop People From Reading and Speaking the Law https://www.eff.org/deeplinks/2026/04/another-court-rules-copyright-cant-stop-people-reading-and-speaking-law
EFF is Leaving X https://www.eff.org/deeplinks/2026/04/eff-leaving-x
Seven countries now generate 100% of their electricity from renewable energy https://www.the-independent.com/tech/renewable-energy-solar-nepal-bhutan-iceland-b2533699.html
#25yrsago The Server of Amontillado https://web.archive.org/web/20070112024841/http://www.techweb.com/wire/story/TWB20010409S0012
#25yrsago Mastercard threatens the moderator of rec.humor.funny https://www.netfunny.com/rhf/jokes/01/Apr/mcrhf.html
#15yrsago Sweden exports sweatshops: Ikea’s first American factory https://web.archive.org/web/20190404035900/https://www.latimes.com/business/la-xpm-2011-apr-10-la-fi-ikea-union-20110410-story.html
#15yrsago Canada’s New Democratic Party promises national broadband and net neutrality https://web.archive.org/web/20110412064952/https://www.michaelgeist.ca/content/view/5734/125/
#15yrsago Flapper’s dictionary: 1922 https://bookflaps.blogspot.com/2011/04/flappers-dictionary.html
#15yrsago Toronto’s Silver Snail to leave Queen Street West https://web.archive.org/web/20110409181737/http://www.thestar.com/entertainment/article/970520–the-silver-snail-comics-icon-sold-to-move
#15yrsago WI county clerk whose homemade voting software found 14K votes for Tea Party judge is an old hand at illegal campaigning https://web.archive.org/web/20110412121323/http://host.madison.com/wsj/news/local/govt-and-politics/elections/article_7e777016-62b2-11e0-9b74-001cc4c002e0.html
#15yrsago Canadian Tories’ campaign pledge: We will spy on the Internet https://web.archive.org/web/20110412125250/https://www.michaelgeist.ca/content/view/5733/125/
#15yrsago France to require unhashed password storage https://www.bbc.com/news/technology-12983734
#15yrsago Central European folk-dancers illustrated sorting algorithms https://www.i-programmer.info/news/150-training-a-education/2255-sorting-algorithms-as-dances.html
#10yrsago Save Comcast! https://www.eff.org/deeplinks/2016/04/save-comcast
#10yrsago Goldman Sachs will pay $5B for fraudulent sales of toxic debt, no one will go to jail https://web.archive.org/web/20160412155435/https://consumerist.com/2016/04/11/goldman-sachs-to-pay-5b-to-settle-charges-of-selling-troubled-mortgages-ahead-of-the-financial-crisis/
#10yrsago How could Lex Luthor beat the import controls on kryptonite? https://lawandthemultiverse.com/2016/04/11/batman-v-superman-and-import-licenses/
#10yrsago Congresscritters spend 4 hours/day on the phone, begging for money https://www.youtube.com/watch?v=Ylomy1Aw9Hk
#10yrsago Philippines electoral data breach much worse than initially reported, possibly worst ever https://www.infosecurity-magazine.com/news/every-voter-in-philippines-exposed/
#10yrsago A cashless society as a tool for censorship and social control https://web.archive.org/web/20260311032317/https://www.theatlantic.com/technology/archive/2016/04/cashless-society/477411/
#10yrsago Boston Globe previews a front page from the Trump presidency https://s3.documentcloud.org/documents/2797782/Ideas-Trump-front-page.pdf
#10yrsago Spike Lee interviews Bernie Sanders: Vermont, Trump, Clinton, guns and Brooklyn https://www.hollywoodreporter.com/movies/movie-features/bernie-sanders-interviewed-by-spike-lee-thr-new-york-issue-880788/
#5yrsago Youtube blocks advertisers from targeting "Black Lives Matter" https://pluralistic.net/2021/04/10/brand-safety-rupture/#brand-safety
#5yrsago Google's short-lived data-advantage https://pluralistic.net/2021/04/11/halflife/#minatory-legend
#1yrago Zuckerberg in the dock https://pluralistic.net/2025/04/11/it-is-better-to-buy/#than-to-compete
#1yrago The most remarkable thing about antitrust (that no one talks about) https://pluralistic.net/2025/04/10/solidarity-forever-2/#oligarchism

San Francisco: 2026 Berkeley Spring Forum on M&A and the Boardroom, Apr 23
https://www.theberkeleyforum.com/#agenda
London: Resisting Big Tech Empires (LSBU), Apr 25
https://www.tickettailor.com/events/globaljusticenow/2042691
NYC: Enshittification at Commonweal Ventures, Apr 29
https://luma.com/ssgfvqz8
NYC: Techidemic with Sarah Jeong, Tochi Onyibuchi and Alia Dastagir (PEN World Voices), Apr 30
https://worldvoices.pen.org/event/techidemic/
Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow
Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html
Hay-on-Wye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2
SXSW London, Jun 2
https://www.sxswlondon.com/session/how-big-tech-broke-the-internet-b3c4a901
Do you feel screwed over by big tech? (Ontario Today)
https://www.cbc.ca/listen/live-radio/1-45-ontario-today/clip/16203024-do-feel-screwed-big-tech
Launch for Cindy's Cohn's "Privacy's Defender" (City Lights)
https://www.youtube.com/watch?v=WuVCm2PUalU
Chicken Mating Harnesses (This Week in Tech)
https://twit.tv/shows/this-week-in-tech/episodes/1074
The Virtual Jewel Box (U Utah)
https://tanner.utah.edu/podcast/enshittification-cory-doctorow-matthew-potolsky/
"Enshittification: Why Everything Suddenly Got Worse and What to
Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
"The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027
"The Memex Method," Farrar, Straus, Giroux, 2027
Today's top sources:
Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Bluesky (no ads, possible tracking and data-collection):
https://bsky.app/profile/doctorow.pluralistic.net
Medium (no ads, paywalled):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Girl Genius for Monday, April 13, 2026 [Girl Genius]
The Girl Genius comic for Monday, April 13, 2026 has been posted.
Waking Up, p09 [Ctrl+Alt+Del Comic]
The post Waking Up, p09 appeared first on Ctrl+Alt+Del Comic.

better than the actinides, anyway
Government is also enshittified [Scripting News]
The logic of Cory Doctorow's enshittification model applies to government too.
Both political parties view the electorate as sources of money or people who are manipulated by ads and PR bought with the money.
The wants and needs of people, in both government and social media, have nothing to do with anything.
In both cases they work for the benefit of the funders, only.
It's just a business. And users and voters realize that, but they feel powerless to do anything about it.
Voters attach to any company or person who sounds like they get it and agree and want to fix it. In politics as in tech there are people who actually do want to fix it. We thought that the web would do that for politics, but the users gravitated to the enshittified spaces. And the developers all acted selfishly and wouldn't work with each other. Now the hope is that with AI tools, individual developers can maintain codebases as big and complicated as the ones maintained by the VC-backed companies. No one talks about this. We should.
Congratulations Hungary [Whatever]


I’ve been to Hungary twice, most recently a couple of years ago when I was the guest of honor at the Budapest International Book Festival. Both times I was there I (and when she visited with me, Krissy), were made to feel welcome by nearly everyone we met there. It’s fair to say I have an attachment to the country.
Today, with a turnout of over 77%, the voters of Hungary voted out the autocratic government of Viktor Orban, whose 16-year rule saw the country become less free, less tolerant and more corrupt. Getting back from all of that won’t be easy and won’t be fast — but it all has to start somewhere, and now Hungary can start.
To which I can say: Lord, I see what you have done for others and want it for myself, and hopefully, soon.
In the meantime: Congratulations to my friends in Hungary. I hope what you have is catching. And I hope to visit you again, in this new era of yours.
— JS
Wimpie Nortje: Dependency hell revisited, updating my Qlot workflow. [Planet Lisp]
I wrote on this topic before but the landscape has changed a lot since then.
Skip to the new Qlot workflow.
When you work on projects that become even slightly complex it is a matter of time until you run into problems where the specific version of a particular library becomes important. This happens in most, if not all, programming languages.
In the Common Lisp environment Quicklisp has become the de facto standard for loading libraries, including fetching and loading their dependencies. Quicklisp distributes libraries in "distributions" which are point-in-time snapshots of all the known and working libraries at the time of distribution creation.
An advantage of this approach is that you are guaranteed that all libraries available in the distribution can be loaded with any of the others. Some disadvantages are that 1) if a library was included in an older distribution but no longer loads cleanly, it gets removed from the distribution, and 2) libraries are only added or updated when a new distribution is cut.
Though libraries can be put in ~/quicklisp/local-projects/
in order to supplement or
override those in the distribution, Quicklisp does not provide
any mechanism for pinning the state of ~/quicklisp/local-projects/.
Some Quicklisp attributes:
- Libraries can be placed in ~/quicklisp/local-projects/. Anything there will override the version in the distribution.
- No version for ~/quicklisp/local-projects/ content can be specified using Quicklisp. It needs to be managed outside of Quicklisp.
Depending on your situation these attributes may be positive, negative or irrelevant.
There are projects like Ultralisp that have different philosophies regarding the distribution content but they still depend on Quicklisp for all other aspects. Thus they share most of the above attributes.
Since my previous post on this topic much has changed in the library version arena. There are now many projects that address different aspects of the above list; the topic of vendoring has gained momentum; and Qlot has changed a lot, to the extent that some code samples in older posts no longer work.
Vendoring is the idea that all the libraries your project depends on are actually part of your project and as such should be committed as part of the project code. Both Quicklisp and Qlot support this with QL:bundle and QLOT:bundle, and the Vend tool is entirely focused on vendoring.
The significant changes in Qlot broke my development workflow. Since I now had to spend time fixing this it was a good opportunity to evaluate some of the other library versioning tools. The issues that made me hesitant to adapt to the new Qlot without considering other options are:
- Qlot expects you to start your lisp with $ qlot exec ros emacs or $ qlot exec ccl. This is to ensure that the lisp is properly configured to use the project local Quicklisp. This is a brittle solution because the standard lisp REPL is then no longer sufficient. When you forget to start lisp this way Quicklisp will load the wrong libraries without any indication, potentially causing subtle bugs which are normally absent.
- Using QLOT:quickload rather than QL:quickload always bugged me for similar reasons. It is a non-standard way to do a standard thing. Forgetting to do it is easy and then you have the wrong libraries loaded.
I evaluated many of the other version management tools and came to the conclusion that Qlot is the closest to what I wanted, and I set off trying to find a workflow that would adhere to my requirements listed here:
(asdf:system-relative-pathname) for each.
After some fiddling with Qlot I learned that:
- $ qlot exec ccl mostly arranges things such that Quicklisp is loaded from the project local installation. If you can arrange for that to happen without using Qlot then you can start your lisp normally.
- You use QL:quickload and not QLOT:quickload, nor do you use any other Qlot wrappers.
Loading Qlot inside your current REPL doesn't work well.
Combining my requirements and my new understanding of Qlot, I modified my workflow for pinning library versions to be:
1. Maintain the project's qlfile.
2. Run qlot install at the CLI.
3. Start your lisp normally, e.g. ccl -n.
4. Load Quicklisp from the project local installation: (load "PROJECT-PATH/.qlot/setup.lisp")
5. Use Quicklisp as before: (ql:quickload :alexandria)
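The qlfile itself is just a plain-text list of source directives, one dependency per line. A small hypothetical example (directive syntax from my recollection of Qlot's README, so double-check it against your Qlot version; the library names are placeholders):

```text
ql alexandria :latest
github datafly fukamachi/datafly
git clack https://github.com/fukamachi/clack.git
```

qlot install resolves these into the project-local .qlot/ installation that (load "PROJECT-PATH/.qlot/setup.lisp") then activates.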
If you would like to vendor your libraries then:
1. Update the project's qlfile.
2. Run qlot install.
3. Run qlot bundle.
4. git commit the bundled libraries.
5. Start ccl.
6. (load "PROJECT-PATH/.bundle-libs/setup.lisp")
All Qlot related tasks such as initialising a project, installing libraries, upgrading libraries, etc must be performed in the CLI using the Qlot executable. These happen relatively infrequently and inside the REPL Qlot does not feature.
The 7.0 kernel has been released [LWN.net]
Linus has released the 7.0 kernel after a busy nine-week development cycle.
The last week of the release continued the same "lots of small fixes" trend, but it all really does seem pretty benign, so I've tagged the final 7.0 and pushed it out.
I suspect it's a lot of AI tool use that will keep finding corner cases for us for a while, so this may be the "new normal" at least for a while. Only time will tell.
Significant changes in this release include the removal of the "experimental" status for Rust code, a new filtering mechanism for io_uring operations, a switch to lazy preemption by default in the CPU scheduler, support for time-slice extension, the nullfs filesystem, self-healing support for the XFS filesystem, a number of improvements to the swap subsystem (described in this article and this one), general support for AccECN congestion notification, and more. See the LWN merge-window summaries (part 1, part 2) and the KernelNewbies 7.0 page for more details.
BTW one thing you haven't heard, because the press is so self-centered, is that as you get deeper into the AI environment, you get smarter. Not just better informed, that's what the web has been doing for 30+ years. The AI stretches your mind the way PCs did initially. It makes you smarter. Can it help us work better together? Remains to be seen. Perhaps each of us is forming our own multi-billion dollar company, and training the (virtual) people we want working with/for us. There are very few human people who seem interested in collaborating. They all want to blaze their own trail, and if you want to improve their product you have to reproduce the whole freaking thing. The web had a different philosophy, adopted from Unix, not the tech industry. We want to work with others. And we do. And it seems there's an opportunity to cast the entire AI push in the same light, so that the individual developer has the power to make industry standard products. Without the usurious business models of the Silicon Valley VCs.
The demo for Gutenberg is at demo.gutenberg.land. Easy to remember, and makes the point. If you want Gutenberg instead of WordLand, you can have it. Hopefully this reinforces what my goals are here. I do not want to favor any one kind of editor. I want every kind of editor here. I want there to be a web of great editors that runs on the web.
Dirk Eddelbuettel: littler 0.3.23 on CRAN: Mostly Internal Fixes [Planet Debian]


The twenty-fourth release of littler as a CRAN package landed on CRAN just now, following in the now twenty-one year history (!!) as an (initially non-CRAN) package started by Jeff in 2006, and joined by me a few weeks later.
littler is the first command-line interface for R as it predates Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently and still starts faster. It also always loaded the methods package which Rscript only began to do in later years.
littler lives on Linux and Unix, has its difficulties on macOS due to some-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet – the build system could be extended – see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the Github repo, as well as in the examples vignette.
This release, which comes just two months after the previous 0.3.22 release that brought a few new features, is mostly internal. (The previous release erroneously had 0.3.23 in its blog and social media posts; it really was 0.3.22, and this one now is 0.3.23.) Mattias Ellert addressed a nag (when building for a distribution) about one example file with a shebang not having executable mode set. I accommodated the ever-changing interface of the C API of R (within about twelve hours of being notified). A few other smaller changes were made as well, polishing a script or two as usual; see below for more.
The full change description follows.
Changes in littler version 0.3.23 (2026-04-12)
Changes in examples scripts
- Correct spelling in installGithub.r to lower-case h
- The r2u.r now recognises 'resolute' aka 26.06
- installRub.r can install (more easily) from r-multiverse
- A file permission was corrected (Mattias Ellert in #131)
Changes in package
- Update script count and examples in README.md
- Continuous integration scripts received minor updates
- The C level access to the R API was updated to reflect most recent standards (Dirk in #132)
My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website.
The code is available via the GitHub repo, from tarballs and now of course also from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as (in a day or two) Ubuntu binaries at CRAN thanks to the tireless Michael Rutter. Comments and suggestions are welcome at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
BTW, when playing around with Gutenberg, I wonder why it doesn't allow me to move blocks around as if it were an outliner? Or maybe it does and I don't know the UI for that? John Johnston says yes it does work like an outliner.
Programming in overdrive [Scripting News]
I've now done two projects with Claude Code. I added a feature to the server running behind WordLand, and adapted wpEditorDemo so I'd have a second example, using Gutenberg as the editing user interface. Haven't released the Gutenberg app yet, that should happen today, Murphy-willing.
I had never written a Gutenberg app before, btw. Claude figured all that out. For most of the project I didn't look at the JavaScript app it created. When I finally did look I was delighted to see that it used the same coding style as I use, developed over many years. It's like programming in overdrive.
I had to do the testing for Claude in the second case because it can't test apps that run in the browser. So it was giving me checklists of things to do and I'd report back on what happened. Still, a lot faster and easier than doing it on my own. It's a very good, tireless and super well-informed programming partner.
Not sure what my third project will be, probably going to stick with something small. The big move will be working with FeedLand in this mode. There are a bunch of changes that should make it run faster. Also might be possible to make it easier to install for people who are using AI tools. And since most of the action takes place on the server, I think I can get Claude to do better testing than I, a human who gets tired pretty darned quickly, can do. That's when things get really interesting, not that the whole thing isn't really interesting, most interesting dev work I've done since the early days of the web.
Colin Watson: Free software activity in March 2026 [Planet Debian]

My Debian contributions this month were all sponsored by Freexian.
You can also support my work directly via Liberapay or GitHub Sponsors.
I fixed CVE-2026-3497 in unstable, thanks to a fix in Ubuntu by Marc Deslauriers. Relatedly, I applied an Ubuntu patch by Athos Ribeiro to not default to weak GSS-API exchange algorithms.
I’m looking forward to being able to split out GSS-API key exchange support in OpenSSH once Ubuntu 26.04 LTS has been released! This stuff will still be my problem, but at least it won’t be in packages that nearly everyone has installed.
New upstream versions:
I packaged pybind11-stubgen, needed for new upstream versions of pytango. Tests of reproducible builds revealed that it didn’t generate imports in a stable order; I contributed a fix for that upstream.
I worked with the security team to release DSA-6161-1 in multipart, fixing CVE-2026-28356 (upstream discussion). (Most of the work for this was in February, but the vulnerability was still embargoed when I published my last monthly update.)
In trixie-backports, I updated pytest-django to 4.12.0.
I fixed a number of packages to support building with pyo3 0.28:
Other build/test failures:
rand::rngs::OsRng
New upstream versions:
I upgraded tango to 10.1.2, and yubihsm-shell to 2.7.2.
Sometimes it pays to accept and celebrate what we get.
And sometimes, we only get something because we settled for it.
It helps to be able to discern the difference between the two.
Vasudev Kamath: Hardening the Unpackageable: A systemd-run Sandbox for Third-Party Binaries [Planet Debian]

Historically, I have been a "distribution-first" user. Sticking to tools packaged within the Debian archives provides a layer of trust; maintainers validate licenses, audit code, and ensure the entire dependency chain is verified. However, the rapid pace of development in the Generative AI space—specifically with new tools like Gemini-CLI—has made this traditional approach difficult to sustain.
Many modern CLI tools are built within the npm or Python ecosystems. For a distribution packager, these are a nightmare; packaging a single tool often requires packaging a massive, shifting dependency chain. Consequently, I found myself forced to use third-party binaries, bypassing the safety of the Debian archive.
Recent supply chain attacks affecting widely used packages like axios and LiteLLM have made it clear: running unvetted binaries on a personal system is a significant risk. These scripts often have full access to your $HOME directory, SSH keys, and the system D-Bus.
After discussing these concerns with a colleague, I was inspired by his approach—using a Flatpak-style sandbox for even basic applications like Google Chrome. I decided to build a generalized version of this using OpenCode and Qwen 3.6 Fast (which was available for free use at the time) to create a robust, transient sandbox utility.
My script, safe-run-binary, leverages systemd-run to execute binaries within an isolated scope. It implements strict filesystem masking and resource control to ensure that even if a dependency is compromised, the "blast radius" is contained.
The sandbox utilizes several systemd execution properties to harden the process.
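As a rough illustration of the approach (the actual safe-run-binary script is not reproduced here, so the flag handling and the particular properties below are my assumptions, not the author's code), here is a minimal dry-run sketch that assembles such a systemd-run invocation:

```shell
#!/bin/sh
# Hypothetical sketch of a safe-run-binary-style wrapper; the -b flag and the
# chosen properties are assumptions, not the script from the post.
set -eu

# Assemble and print (dry run) a transient systemd --user invocation that
# hides $HOME and /tmp from the target binary, re-exposing only directories
# passed via -b.
build_cmd() {
  binds=""
  while [ $# -gt 0 ]; do
    case "$1" in
      -b) binds="$binds --property=BindPaths=$2"; shift 2 ;;  # re-expose one dir
      --) shift; break ;;                                     # command follows
      *)  echo "usage: build_cmd [-b DIR]... -- COMMAND [ARGS]" >&2; return 1 ;;
    esac
  done
  # ProtectHome=tmpfs makes $HOME (and SSH keys) invisible, PrivateTmp gives
  # an isolated /tmp, NoNewPrivileges blocks setuid-based escalation.
  echo systemd-run --user --pty \
    --property=ProtectHome=tmpfs \
    --property=PrivateTmp=yes \
    --property=NoNewPrivileges=yes \
    $binds \
    "$@"
}

# Example: grant the tool access to a single config directory.
build_cmd -b "${HOME:-/tmp}/.config/sometool" -- sometool --serve
```

Dropping the echo would actually execute the command; keeping it lets you inspect the generated invocation first. ProtectHome=, PrivateTmp=, NoNewPrivileges= and BindPaths= are documented systemd.exec(5) properties.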
To run a CLI tool like Gemini-CLI with access only to a specific directory:
safe-run-binary -b ~/.gemini-config -- npx @google/gemini-cli
For a GUI application like Firefox:
safe-run-binary --gui -b ~/.mozilla -b ~/.cache/mozilla -b ~/Downloads -- firefox
While it is not always possible to escape the need for third-party software, it is possible to control the environment in which it operates. By leveraging native Linux primitives like systemd and namespaces, high-grade isolation is achievable.
PS: If you spot any issues or have suggestions for improving the script, feel free to raise a PR on the repo.
Russ Allbery: Review: The Teller of Small Fortunes [Planet Debian]
Review: The Teller of Small Fortunes, by Julie Leong
Publisher: Ace
Copyright: November 2024
ISBN: 0-593-81590-4
Format: Kindle
Pages: 324
The Teller of Small Fortunes is a cozy found-family fantasy with a roughly medieval setting. It was Julie Leong's first novel.
Tao is a traveling teller of small fortunes. In her wagon, pulled by her friendly mule Laohu, she wanders the small villages of Eshtera and reads the trivial fortunes of villagers in the tea leaves. An upcoming injury, a lost ring, a future kiss, a small business deal... she looks around the large lines of fate and finds the small threads. After a few days, she moves on, making her solitary way to another village.
Tao is not originally from Eshtera. She is Shinn, which means she encounters a bit of suspicion and hostility mixed with the fascination of the exotic. (Language and culture clues lead me to think Shinara is intended to be this world's not-China, but it's not a direct mapping.) Tao uses the fascination to help her business; fortune telling is more believable from someone who seems exotic. The hostility she's learned to deflect and ignore. In the worst case, there's always another village.
If you've read any cozy found-family novels, you know roughly what happens next. Tao encounters people on the road and, for various reasons, they decide to travel together. The first two are a massive mercenary (Mash) and a semi-reformed thief (Silt), who join Tao somewhat awkwardly after Tao gives Mash a fortune that is far more significant than she intended. One town later, they pick up an apprentice baker best known for her misshapen pastries. They also collect a stray cat, because of course they do. It's that sort of book.
For me, this sort of novel lives or dies by the characters, so it's good news that I liked Tao and enjoyed spending time with her. She's quiet, resilient, competent, and self-contained, with a difficult past and some mysteries and emotions the others can draw out over time. She's also thoughtful and introspective, which means the tight third-person narration that almost always stays on Tao offers emotional growth to mull over. I also liked Kina (the baker) and Mash; they're a bit more obvious and straightforward, but Kina adds irrepressible energy and Mash is a good example of the sometimes-gruff soldier with a soft heart. Silt was a bit more annoying and I never entirely warmed to him, but he's tolerable and does get a bit of much-needed (if superficial) character development.
It takes some time for the reader to learn about the primary conflict of the story (Tao does not give up her secrets quickly), so I won't spoil it, but I thought it worked well. I was momentarily afraid the story would develop a clear villain, but Leong has some satisfying alternate surprises in store. The ending was well-done, although it is very happily-ever-after in a way that may strike some readers as too neat. The Teller of Small Fortunes aims for a quiet and relaxed mood rather than forcing character development through difficult choices; it's a fine aim for a novel, but it won't match everyone's mood.
I liked the world-building, although expect small and somewhat disconnected details rather than an overarching theory of magic. Tao's ability gets the most elaboration, for obvious reasons, and I liked how Leong describes it and explores its consequences. Most of the attention in the setting is on the friction, wistfulness, and small reminders of coming from a different culture than everyone around you, but so long ago that you are not fully a part of either world. This, I thought, was very well-done and is one of the places where the story is comfortable with complex feelings and doesn't try to reach a simplifying conclusion.
There is one bit of the story that felt like it was taken directly out of a Dungeons & Dragons campaign to a degree that felt jarring, but that was the only odd world-building note.
This book felt like a warm cup of tea intended to comfort and relax, without large or complex thoughts about the world. It's not intended to be challenging; there are a few plot twists I didn't anticipate, but nothing that dramatic, and I doubt anyone will be surprised by the conclusions it reaches. It's a pleasant time with some nice people and just enough tension and mystery to add some motivation to find out what happens next. If that's what you're in the mood for, recommended. If you want a book that has Things To Say or will put you on the edge of your seat, maybe save this one for another mood.
All the on-line sources I found for this book call it a standalone, but The Keeper of Magical Things is set in the same world, so I would call it a loose series with different protagonists. The Teller of Small Fortunes is a complete story in one book, though.
Rating: 7 out of 10
Trisquel 12.0 "Ecne" release announcement [Planet GNU]
We are proud to announce the release of Trisquel 12.0 Ecne! After extensive work and thorough testing, Ecne is ready for production use. This release builds on the foundation of Aramo with meaningful improvements across packaging, the kernel, security, and software availability.
Ecne is based on Ubuntu 24.04 LTS and will receive support until 2029. Users of Trisquel 11.x Aramo can upgrade directly using the update-manager or do-release-upgrade commands at a console terminal.
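The console upgrade path mentioned above can be sketched as follows. A dry-run wrapper is used here so the steps are printed rather than executed; the apt steps are a common pre-upgrade precaution rather than Trisquel-specific instructions, so treat this as a sketch and back up before upgrading for real:

```shell
# Sketch of the console upgrade path from Trisquel 11.x (Aramo) to 12.0 (Ecne).
# DRY_RUN=1 (the default here) prints each step instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run sudo apt update          # refresh package lists on the current release
run sudo apt full-upgrade    # bring Aramo fully up to date first
run sudo do-release-upgrade  # then start the release upgrade to Ecne
```

Setting `DRY_RUN=0` performs the actual upgrade.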
Work on the next release will start immediately, and initial groundwork for RISC-V architecture support has already begun; an exciting new challenge as the free hardware design ecosystem continues to grow.
Trisquel is a non-profit project; you can help sustain it by becoming a member, donating, or buying from our store. Thank you to all our donors, and to the contributors who made Ecne possible through code, patches, bug reports, translations, and advice. Special thanks to Luis "Ark74" Guzmán, prospero, icarolongo, Avron, knife, Simon Josefsson, Christopher Waid (ThinkPenguin), Denis "GNUtoo" Carikli, and the wonderful community that keeps the project alive and free.
Utkarsh Gupta: FOSS Activities in March 2026 [Planet Debian]
Here’s my monthly but brief update about the activities I’ve done in the FOSS world.
Whilst I didn’t get a chance to do much, here are still a few things that I worked on:
I joined Canonical to work on Ubuntu full-time back in February 2021.
Whilst I can’t give a full, detailed list of things I did, here’s a quick TL;DR of what I did:
This month I have worked 50 hours on Debian Long Term Support (LTS) and on its sister Extended LTS project and did the following things:
libvirt: Regression introduced by the Linux kernel update via DLA 4404-1.
ruby-rack: Path traversal and stored XSS vulnerabilities in directory handling.
vlc: Out-of-bounds read and denial of service via a crafted MMS server response.
nss: Integer overflow in the AES-GCM implementation.
gst-plugins-base1.0: Integer overflow in the RIFF parser.
gst-plugins-ugly1.0: Heap-based buffer overflow and out-of-bounds write in media demuxers.
phpseclib: Name confusion in X.509 certificate verification and a padding oracle timing attack in AES-CBC.
knot-resolver: Affected by CVE-2023-26249, CVE-2023-46317, and CVE-2022-40188, leading to Denial of Service.
node-lodash: Affected by CVE-2025-13465, prototype pollution in the baseUnset function.
vlc: Affected by CVE-2025-51602, an out-of-bounds read and denial of service via a crafted 0x01 response from an MMS server.
[ELTS] Continued to review ruby-rack for ELTS – it has since received about 13 new CVEs, making it even more chaotic. Might consider releasing in batches.
[E/LTS] Monitored discussions on mailing lists, IRC, and all the documentation updates.
[E/LTS] Attended the monthly LTS meeting on IRC. Summary here.
[Other] Spent quite some time debugging a bug in debusine. Filed https://salsa.debian.org/freexian-team/debusine/-/issues/1412 for the same. Have worked on a preliminary patch but would like to submit something for Colin to review. Will follow up in April.
Until next time.
:wq for today.
League of Canadian Superheroes Issue 5 07 [Comics Archive - Spinnyverse]
The post League of Canadian Superheroes Issue 5 07 appeared first on Spinnyverse.
Minerva Sketch [Comics Archive - Spinnyverse]
The post Minerva Sketch appeared first on Spinnyverse.
Kickstarter Ends Sunday 5th! [Comics Archive - Spinnyverse]
The post Kickstarter Ends Sunday 5th! appeared first on Spinnyverse.
League of Canadian Superheroes Issue 5 06 [Comics Archive - Spinnyverse]
The post League of Canadian Superheroes Issue 5 06 appeared first on Spinnyverse.
KS Ends Sunday [Comics Archive - Spinnyverse]
The post KS Ends Sunday appeared first on Spinnyverse.
League of Canadian Superheroes Issue 5 05 [Comics Archive - Spinnyverse]
The post League of Canadian Superheroes Issue 5 05 appeared first on Spinnyverse.
League of Canadian Superheroes Issue 5 04 [Comics Archive - Spinnyverse]
The post League of Canadian Superheroes Issue 5 04 appeared first on Spinnyverse.
League of Canadian Superheroes Issue 5 03 [Comics Archive - Spinnyverse]
The post League of Canadian Superheroes Issue 5 03 appeared first on Spinnyverse.
League of Canadian Superheroes Issue 5 01 [Comics Archive - Spinnyverse]
The post League of Canadian Superheroes Issue 5 01 appeared first on Spinnyverse.
League of Canadian Superheroes Issue 5 02 [Comics Archive - Spinnyverse]
The post League of Canadian Superheroes Issue 5 02 appeared first on Spinnyverse.
GNU Health HIS server 5.0.7 patchset bundle released [Planet GNU]
Dear community
I'm happy to announce the release of the patchset v5.0.7 of the GNU Health Information Management System.
This maintenance version fixes issues in the crypto subsystem related to the laboratory results validation process; delivers automated testing for the packages and updates pyproject.toml to the latest PEP 639 specs.
Main issues fixed & tasks related to this patchset:
For more details visit our development area at Codeberg.
Happy hacking!
Luis
I'm working with Claude today to finish Gutenberg Land. Figuring it out as we go along. It can't run the app itself because it's browser-based. I look forward to a project that runs on a server so it can run it locally and we can really make things hum. This, if I guess correctly, is how Jake is working with Headless Frontier. He just got the Frontier debugger working. Why? I asked, given that we have bigger more immediate priorities, like getting Manila running on Digital Ocean (what a trip that will be) -- he explained that's because he wants the AI bot to use the freaking debugger.
Pluralistic: Don't Be Evil (11 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]

How I knew I was officially Old: I stopped being disoriented by the experience of meeting with grown-ass adults who wanted to thank me for the books of mine they'd read in their childhoods, which helped shape their lives. Instead of marveling that a book that felt to me like it was ten seconds old was a childhood favorite of this full-grown person, I was free to experience the intense gratification of knowing I'd helped this person find their way, and intense gratitude that they'd told me about it (including you, Sean – it was nice to meet you last night at Drawn and Quarterly in Montreal!).
Now that I am Old, I find myself dwelling on key junctures from my life. It's not nostalgia ("Nostalgia is a toxic impulse" – J. Hodgman) – rather, it's an attempt to figure out how I got here ("My god! What have I done?" – D. Byrne), and also, how the world got this way.
There's one incident I return to a lot, a moment that didn't feel momentous at the time, but which, on reflection, seems to have a lot to say about this moment – both for me, and for the world we live in.
Back in the late 1990s, I co-founded a dotcom company, Opencola. It was a "free/open, peer-to-peer search and recommendation system." The big idea was that we could combine early machine learning technology with Napster-style P2P file sharing and a web-crawler to help you find things that would interest you. The way it was gonna work was that you'd have a folder on your desktop and you could put things in it that you liked and the system would crawl other users' folders, and the open web, and copy things into your folder that it found that seemed related to the stuff you liked. You could refine the system's sensibilities by thumbs-up/thumbs-downing the suggestions, and it would refine its conception of your preferences over time. As with Napster and its successors, you could also talk to the people whose collections enriched your own, allowing you to connect with people who shared even your most esoteric interests.
Opencola didn't make it. Our VCs got greedy when Microsoft offered to buy us and tried to grab all the equity away from the founders. I quit and went to EFF, and my partners got very good jobs at Microsoft, and the company was bought for its tax-credits by Opentext, and that was that.
(Well, not quite – several of the programmers who worked on the project have rebooted it, which is very cool!)
But back in the Opencola days, we three partners would have these regular meetings where we'd brainstorm ways that we could make money off of this extremely cool, but frankly very noncommercial idea. As with any good brainstorming session, there were "no bad ideas," so sometimes we would veer off into fanciful territory, or even very evil territory.
It's one of those evil ideas that I keep coming back to. Sometimes, during these money-making brainstorm sessions, we'd decompose the technology we were working on into its component parts to see if any subset of them might make money ("Be the first person to not do something no one has ever not done before" – B. Eno).
We had a (by contemporary standards, primitive) machine-learning system; we had a web crawler; and we had a keen sense of how the early web worked. In particular, we were really interested in a new, Linux-based search tool that used citation analysis – a close cousin to our own collaborative filter, harnessing latent clues about relevance implicit in the web's structure – to produce the best search results the web had ever seen. Like us, this company had no idea how to make money, so we were watching it very carefully. That company was called "Google."
That's where the evil part came in. We were pretty sure we could extract a list of the 100,000 most commonly searched terms from Google, and then we could use our web-crawler to capture the top 100 results for each. We could feed these to our Bayesian machine-learning tool to create statistical models of the semantic structure of these results, and then we could generate thousands of pages of word-salad for each of those keywords that matched those statistical models, along with interlinks that could trick Google's citation analysis model. Plaster those word-salad pages with ads, and voila – free cash flow!
Of course, we didn't do it. But even as we developed this idea, the room crackled with a kind of dark, excited dread. We weren't any smarter than many other rooms full of people who were engaged in exercises just like this one. The difference was, we loved the web. The idea of someone deliberately poisoning it this way churned our stomachs. The whole point of Opencola was to connect people with each other based on their shared interests. We loved Google and how it helped you find the people who wrote the web in ways that delighted and informed you. This kind of spam, aimed at wrecking Google's ability to help people make sense of the things we were all posting to the internet, was…grotesque.
I didn't know the term then, but what we were doing amounted to "red-teaming" – thinking through the ways that attackers could destroy something that we valued. Later, we tried "blue-teaming," trying to imagine how our tools might help us fight back if someone else got the same idea and went through with it.
I didn't know the term "blue-teaming" then, either. Once I learned these terms, they brought a lot of clarity to the world. Today, I have another term that I turn to when I am trying to rally other people who love the internet and want it to be good: "Tron-pilled." Tron "fought for the user." Lots of us technologists are Tron-pilled. Back in the early days, when it wasn't clear that there was ever going to be any money in this internet thing, being Tron-pilled was pretty much the only reason to get involved with it. Sure, there were a few monsters who fell into the early internet because it offered them a chance to torment strangers at a distance, but they were vastly outnumbered by the legion of Tron-pilled nerds who wanted to make the internet better because we wanted all our normie friends to have the same kind of good time we were having.
The point of this is that there were lots of people back then who had the capacity to imagine the kind of gross stuff that Zuckerberg, Musk, and innumerable other scammers, hustlers and creeps got up to on the web. The thing that distinguished these monsters wasn't their genius – it was their callousness. When we brainstormed ways to break the internet, we felt scared and were inspired to try to save it. When they brainstormed ways to break the internet, they created pitch-decks.
And still: the old web was good in so many ways for so long. The Tron-pilled amongst us held the line. When we build a new, good, post-American internet, we're going to need a multitude of Tron-pilled technologists, old and young, who build, maintain – and, above all, defend it.

Public Service Decline and Support for the Populist Right https://catherinedevries.eu/NHS.pdf
Another Court Rules Copyright Can’t Stop People From Reading and Speaking the Law https://www.eff.org/deeplinks/2026/04/another-court-rules-copyright-cant-stop-people-reading-and-speaking-law
Yikes, Encryption’s Y2K Moment is Coming Years Early https://www.eff.org/deeplinks/2026/04/yikes-encryptions-y2k-moment-coming-years-early
Vertical Vertigo https://prospect.org/2026/04/10/apr-2026-magazine-vertical-vertigo-franchise-deregulation-antitrust-law/
#25yrsago Trotsky’s assassination – according to the FBI https://web.archive.org/web/20010413212536/http://foia.fbi.gov/trotsky.htm
#25yrsago Online headline-writing guidelines from Jakob Nielsen https://memex.craphound.com/2001/04/09/headline-writing-guidelines-from-legendary-usability/
#25yrsago Floppy-disk stained-glass windows https://web.archive.org/web/20010607052511/http://www.acme.com/jef/crafts/bathroom_windows.html
#15yrsago English school principal announces zero tolerance for mismatched socks https://nationalpost.com/news/u-k-school-cracks-down-on-bad-manners
#1yrago EFF's lawsuit against DOGE will go forward https://pluralistic.net/2025/04/09/cases-and-controversy/#brocolli-haired-brownshirts

San Francisco: 2026 Berkeley Spring Forum on M&A and the Boardroom, Apr 23
https://www.theberkeleyforum.com/#agenda
London: Resisting Big Tech Empires (LSBU), Apr 25
https://www.tickettailor.com/events/globaljusticenow/2042691
NYC: Enshittification at Commonweal Ventures, Apr 29
https://luma.com/ssgfvqz8
NYC: Techidemic with Sarah Jeong, Tochi Onyibuchi and Alia Dastagir (PEN World Voices), Apr 30
https://worldvoices.pen.org/event/techidemic/
Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow
Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html
Hay-on-Wye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2
SXSW London, Jun 2
https://www.sxswlondon.com/session/how-big-tech-broke-the-internet-b3c4a901
Do you feel screwed over by big tech? (Ontario Today)
https://www.cbc.ca/listen/live-radio/1-45-ontario-today/clip/16203024-do-feel-screwed-big-tech
Launch for Cindy's Cohn's "Privacy's Defender" (City Lights)
https://www.youtube.com/watch?v=WuVCm2PUalU
Chicken Mating Harnesses (This Week in Tech)
https://twit.tv/shows/this-week-in-tech/episodes/1074
The Virtual Jewel Box (U Utah)
https://tanner.utah.edu/podcast/enshittification-cory-doctorow-matthew-potolsky/
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
"The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027
"The Memex Method," Farrar, Straus, Giroux, 2027
Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Bluesky (no ads, possible tracking and data-collection):
https://bsky.app/profile/doctorow.pluralistic.net
Medium (no ads, paywalled):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
There’s a difference between telling someone their work can become better and saying it can become even better.
When we say even better, we lock in a foundation — we’re affirming that something good already exists — at the same time we create the conditions for improvement.
Ennui and disappointment, on the other hand, are multiplied when we promise things are going to get even worse instead of merely worse.
How do you add or remove a handle from an active WaitForMultipleObjects?, part 2 [The Old New Thing]
Last time, we looked at
adding or removing a handle from an active
WaitForMultipleObjects, and we
developed an asynchronous mechanism that requests that the changes
be made soon. But asynchronous add/remove can be a problem because
you might remove a handle, clean up the things that the handle was
dependent upon, but then receive a notification that the handle you
removed has been signaled, even though you already cleaned up the
things the handle depended on.
What we can do is wait for the waiting thread to acknowledge the operation.
_Guarded_by_(desiredMutex) DWORD desiredCounter = 1;
DWORD activeCounter = 0;

void wait_until_active(DWORD value)
{
    DWORD current = activeCounter;
    while (static_cast<int>(current - value) < 0) {
        WaitOnAddress(&activeCounter, &current,
                      sizeof(activeCounter), INFINITE);
        current = activeCounter;
    }
}
The wait_until_active function waits until the
value of activeCounter is at least as large as
value. We do this by subtracting the two values, to
avoid wraparound problems.¹ The comparison takes advantage of
the guarantee in C++20 that conversion from an unsigned integer to
a signed integer converts to the value that is numerically equal
modulo 2ⁿ where n is the number of bits in the
destination. (Prior to C++20, the result was
implementation-defined, but in practice all modern implementations
did what C++20 mandates.)²
You can also use std::atomic:
_Guarded_by_(desiredMutex) DWORD desiredCounter = 1;
std::atomic<DWORD> activeCounter;

void wait_until_active(DWORD value)
{
    DWORD current = activeCounter;
    while (static_cast<int>(current - value) < 0) {
        activeCounter.wait(current);
        current = activeCounter;
    }
}
As before, the background thread manipulates the
desiredHandles and desiredActions, then
signals the waiting thread to wake up and process the changes. But
this time, the background thread blocks until the waiting thread
acknowledges the changes.
// Warning: For expository purposes. Almost no error checking.
void waiting_thread()
{
    bool update = true;
    std::vector<wil::unique_handle> handles;
    std::vector<std::function<void()>> actions;
    while (true)
    {
        if (std::exchange(update, false)) {
            std::lock_guard guard(desiredMutex);
            handles.clear();
            handles.reserve(desiredHandles.size() + 1);
            std::transform(desiredHandles.begin(), desiredHandles.end(),
                           std::back_inserter(handles),
                           [](auto&& h) { return duplicate_handle(h.get()); });
            // Add the bonus "changed" handle
            handles.emplace_back(duplicate_handle(changed.get()));
            actions = desiredActions;
            if (activeCounter != desiredCounter) {
                activeCounter = desiredCounter;
                WakeByAddressAll(&activeCounter);
            }
        }
        auto count = static_cast<DWORD>(handles.size());
        auto result = WaitForMultipleObjects(count,
            handles.data()->addressof(), FALSE, INFINITE);
        auto index = result - WAIT_OBJECT_0;
        if (index == count - 1) {
            // the list changed. Loop back to update.
            update = true;
            continue;
        } else if (index < count - 1) {
            actions[index]();
        } else {
            // deal with unexpected result
        }
    }
}
void change_handle_list()
{
    DWORD value;
    {
        std::lock_guard guard(desiredMutex);
        ⟦ make changes to desiredHandles and desiredActions ⟧
        value = ++desiredCounter;
        SetEvent(changed.get());
    }
    wait_until_active(value);
}
The pattern is that after the background thread makes the
desired changes, it increments the desiredCounter and
signals the event. It's okay if multiple threads make changes
before the waiting thread wakes up. The changes simply accumulate,
and the event just stays signaled. Each background thread then
waits for the waiting thread to process the change.
On the waiting side, we process changes as usual, but we also
publish our current change counter if it has changed, to let the
background threads know that we made some progress. Eventually, we
will make enough progress that all of the pending changes have been
processed, and the last background thread will be released from
wait_until_active.
¹ You'll run into problems if the counter increments 2 billion times without the worker thread noticing. At a thousand increments per second, that'll last you a month. I figure that if you have a worker thread that is unresponsive for that long, then you have bigger problems. But you can avoid even that problem by switching to a 64-bit integer, so that the overflow won't happen before the sun is expected to turn into a red giant.
² The holdouts would be compilers for systems that are not two’s-complement.
The post How do you add or remove a handle from an active <CODE>WaitForMultipleObjects</CODE>?, part 2 appeared first on The Old New Thing.
Friday Squid Blogging: Squid Overfishing in the South Pacific [Schneier on Security]
Regulation is hard:
The South Pacific Regional Fisheries Management Organization (SPRFMO) oversees fishing across roughly 59 million square kilometers (22 million square miles) of the South Pacific high seas, trying to impose order on a region double the size of Africa, where distant-water fleets pursue species ranging from jack mackerel to jumbo flying squid. The latter dominated this year’s talks.
Fishing for jumbo flying squid (Dosidicus gigas) has expanded rapidly over the past two decades. The number of squid-jigging vessels operating in SPRFMO waters rose from 14 in 2000 to more than 500 last year, almost all of them flying the Chinese flag. Meanwhile, reported catches have fallen markedly, from more than 1 million metric tons in 2014 to about 600,000 metric tons in 2024. Scientists worry that fishing pressure is outpacing knowledge of the stock.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
The Big Idea: Eleanor Lerman [Whatever]

Pets are more than just roommates we feed and scoop poop for, they’re often a source of emotional support and comfort in our complicated, lengthy lives. Author Eleanor Lerman explores the bond between furry friends and humans in her newest collection of short stories, King the Wonder Dog and Other Stories. Whether your cat is in your lap or on your keyboard, give them a pet as you read along in the Big Idea.
ELEANOR LERMAN:
Having just completed a book of poetry in which much of the work examined the concept of grief about a lost parent (and offered the idea that even Godzilla might be lonely for his mother), I was thinking about what I might write next when I saw a TV commercial that featured a group of older women. They were all beautifully dressed, had expensive haircuts that made gray hair seem like a lifestyle choice, and were laughing their way through a meal on the outdoor terrace of a restaurant. I won't mention the product being advertised, but they discussed how happy they all were to be using it and to have the love and support of their charming older women friends, who used it too. This is one version of aging in our culture: cheerful, financially secure, medically safeguarded, and surrounded by supportive friends. In this version, the body cooperates, the future is manageable, and loneliness is nowhere in sight.
That’s one way older women—and men—are portrayed in our culture: happy as the proverbial clam and aging with painless bodies and lots of money to pay for the medical care they will likely never need. In literary fiction, however, aging men and women are often depicted in a very different setting: traveling alone through a grim country, with broken hearts and aching bodies until we leave them at the end of their stories hoping—though not entirely believing—that we will avoid such a fate ourselves.
So, what I decided to do in King the Wonder Dog and Other Stories, was to explore what is perhaps a middle ground by writing about both women and men living alone who are growing older and are confounded by what is happening to them. They still feel like their younger selves but are aware that their bodies are changing, that the possibility of once again finding love in their lives is unlikely and that loneliness has begun to haunt them like an aging ghost.
Having had pets in my life for many years—and being aware that animals, too, can feel loneliness and fear—I paired each man and woman in my stories with a lonely dog or cat and tried to work out how that relationship would ease the sadness in both their lives. One memory I drew on was how, when I was young and living alone, I had a little cat that someone had found in the street and gave to me. I had never had a pet before (other than a parakeet, which didn’t give me much to go on) and this little cat was very shy, so I didn’t quite know how to relate to her. But somehow, bit by bit, she cozied up to me, and when I was writing, she was always with me, sitting on my lap or on my feet.
I have no idea how animals conceptualize themselves and their lives, but I do know they have feelings and I hope that for the eighteen years she and I lived together, my cat felt safe and cared for. And still, today, I sometimes think about the unlikely sequence of events that brought us together: how a random person found a tiny kitten, all alone, crouched behind a garbage can, and how that random person was sort of friends with a sort of friend of mine who happened to tell me about the kitten and asked if I knew anyone who would take her and I said yes: me. I don’t know why I said yes, but I’m glad I did. Her name, by the way, was simply Gray Cat, which probably shows how unsure I was about whether I would be able to care for her well enough to at least keep her alive.
After that, I was never without a cat or dog, and now I usually have both. The little dog I have now is a sweet, happy friend who seems not to have a care in the world, but I often see her sitting on the back of my couch, staring out the window at the ocean not far beyond, and I wonder what she thinks about what she sees. What is that vast, shifting landscape to her? And who am I? A friend who pets her and feeds her and gives her those wonderful treats she loves? Maybe she was frightened when she was separated from her mother, but otherwise, I think she is having a happy life—at least I hope so. And sometimes when I walk her, I think about what will happen when she’s no longer with me and I’m even older than I am now. Could I get another dog? I have painful issues with my back that sometimes make it hard for me to walk, and I certainly can’t walk any great distance—could I maybe get a dog that doesn’t need to walk too far or somehow shares my disability?
All these thoughts have gone into the stories in King the Wonder Dog, in which men and women are growing older, have illnesses, are frightened by how lonely they feel, and in one way or another—and often to their surprise—are able to bond with a dog or cat who is also in a tenuous situation. And through that bond, the people and the animals find at least a little bit of happiness in their lives, a little bit of the shared comfort that arises from one creature caring for another. I hope those who read the book will feel some of that comfort, too.
King the Wonder Dog and Other Stories: Amazon|Barnes & Noble|Books-A-Million|Bookshop
The disturbing white paper Red Hat is trying to erase from the internet [OSnews]
It shouldn’t be a surprise that companies – and for our field, technology companies specifically – working with the defense industry tend to raise eyebrows. With things like the genocide in Gaza, the threats of genocide and war crimes against Iran, and the mass murder in Lebanon, it’s no surprise that western companies working with the militaries and defense companies involved in these atrocities are receiving some serious backlash.
With that in mind, it seems Red Hat, owned by IBM, is desperately trying to scrub a certain white paper from the internet. Titled “Compress the kill cycle with Red Hat Device Edge”, the 2024 white paper details how Red Hat’s products and technologies can make it easier and faster to, well, kill people. Links to the white paper throw up 404s now, but it can still easily be found on the Wayback Machine and other places.
It’s got some disturbingly euphemistic content.
The find, fix, track, target, engage, assess (F2T2EA) process requires ubiquitous access to data at the strategic, operational and tactical levels. Red Hat Device Edge embeds captured, analyzed, and federated data sets in a manner that positions the warfighter to use artificial intelligence and machine learning (AI/ML) to increase the accuracy of airborne targeting and mission-guidance systems.
[…]Delivering near real-time data from sensor pods directly to airmen, accelerating the sensor-to-shooter cycle.
[…]Sharing near real-time sensor fusion data with joint and multinational forces to increase awareness, survivability, and lethality.
[…]The new software enabled the Stalker to deploy updated, AI-based automated target recognition capabilities.
[…]If the target is an adversary tracked vehicle on the far side of a ridge, a UAS carrying a server running Red Hat Device Edge could transmit video and metadata directly to shooters.
↫ Red Hat white paper titled “Compress the kill cycle with Red Hat Device Edge”
I don’t think there’s something inherently wrong with working together with your nation’s military or defense companies, but that all hinges on what, exactly, said military is doing and how those defense companies’ products are being used. The focus should be on national defense, aid during disasters, and responding to the legitimate requests of sovereign, democratic nations to come to their defense (e.g. helping Ukraine fight off the Russian invasion).
There’s always going to be difficult grey areas, but any military or defense company supporting the genocide in Gaza or supplying weapons to kill women and children in Iran is unequivocally wrong, morally reprehensible, and downright illegal on both an international and national level. It clearly seems someone at Red Hat feels the same way, as the company has been trying really hard to memory-hole this particular white paper, and considering its word choices and the state of the world today, it’s easy to see why.
Of course, the internet never forgets, and I certainly don’t intend to let something like this slide. We all know companies like Microsoft, Oracle, and Google have no qualms about making a few bucks from a genocide or two, but it always feels a bit more traitorous to the cause when it’s an open source company doing the profiting. It feels like Red Hat is trying to have its cake and eat it too, by, as an IBM subsidiary, trying to both profit from the vast sums of money sloshing around in the US military industrial complex as well as maintain its image as a scrappy open source business success story shitting bunnies and rainbows.
It’s a long time ago now that Red Hat felt like a genuine part of the open source community. Most of us – both outside and inside of Red Hat, I’m sure – have been well aware for a long time now that those days are well behind us, and I guess Red Hat doesn’t like seeing its kill cycle this compressed.
Deshittifying the web, day 2 [Scripting News]
The perfect app for an AI to do for you is a demo app.
Yesterday I wrote about making WordPress boom with new apps for writers that run in the web ecosystem, not as plug-ins, in JS running in the browser, or on the desktop, any desktop, that would work too. Probably would be fine to put an MCP shell around it so it can be in AI-internal scripts.
I'm into writing tools. Proud of it. I'm a writer and a developer. Did I become a developer to create tools for writers or the other way around? At this point the answer to both questions is yes.
I'm basically offering to host a potluck party where people bring an app that works alongside WordLand but works differently from WordLand. Mine is a simple wizzy editor with a Markdown flip-switch. But everyone likes a different kind of editor. There is no single best editor for the web and since WordPress is of the web that applies here too.
If this work is ragingly successful it should have the adoption of XML-RPC in 1998, where devs were competing to get support for their favorite platform before all the others. Here's the list as of 2003. It was exciting and fun, kind of like how things are now.
So back to the AI connection. I started a session with Claude this morning and asked it to look at wpEditorDemo. Let's write a developer's guide and an AI guide, I said to my AI friend. This was a direction Don Park, a very longtime human friend suggested.
Claude and I spent the remaining time this morning creating a Gutenberg editor that works alongside WordLand. The two apps share files through the wpIdentity server that connects to WordPress.
So you can edit the text with either or both editors. If you use both you'll have to stick to Markdown. But you could use the Gutenberg editor for documents or sites where Gutenberg is better or required. This is so key. The document is yours to do with as you please. It's the web way for text to work. No lock-in. So important because almost everywhere else on what remains of the web, the ability to write and publish comes with a cost: lock-in. That's why writing on the web sucks so much.
I know Matt thinks open source is everything and that always bothered me a little.
What you do need are partners who let people bring their text any way they want and don't make people type it into their defective editors, which deliberately constrain you -- to their shitty little silo.
To my WordPress friends -- WordLand is not seeking to replace Gutenberg, it just wants a place alongside it. And to open the doors for a myriad different approaches to writing on the web, all working beautifully with WordPress.
This should blow open the doors of the writer's ecosystem of the web, adding a whole new level, like adding air travel alongside trains and cars. And it should show how inadequate the current best writing environments are.
We'll have the Gutenberg editor for you to try out along with developer docs later today or tomorrow, Murphy-willing.
All made possible by WordPress and Claude.ai. What a time we live in! All of a sudden the web works again even if people have lost hope, because the AIs do the work either way. ;-)
Update: Here's a screen shot of the Gutenberg demo app.
Sweating To The Oldies [Penny Arcade]
You could order your own Ryan Gosling Wolf Fox Sweater kit from Mary Maxim, I suppose, if any more of them existed. What you can do at this juncture is purchase a spot in line to order a kit. Gabriel had heard of at least one person experiencing the scenario in the strip, and we might think that is crazy, but the kits are ninety dollars and I think that a normal person can't really conceive of how much we spend on our hobbies. The implacable High Marshal Helbrecht costs like fifty bucks, and he's a single model. Actually, in terms of sudden and explosive scene drama, knitting may be one of few venues that can compare with gaming. Also, these motherfuckers program yarn. I've peeped their game for some time now.
Reproducible Builds: Reproducible Builds in March 2026 [Planet Debian]
Welcome to the March 2026 report from the Reproducible Builds project!
These reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.
Eric Biggers posted to the Linux Kernel Mailing List in response to a patch series posted by Thomas Weißschuh to introduce a calculated hash-based system of integrity checking to complement the existing signature-based approach. Thomas’ original post mentions:
The current signature-based module integrity checking has some drawbacks in combination with reproducible builds. Either the module signing key is generated at build time, which makes the build unreproducible, or a static signing key is used, which precludes rebuilds by third parties and makes the whole build and packaging process much more complicated.
However, Eric’s followup message goes further:
I think this actually undersells the feature. It’s also much simpler than the signature-based module authentication. The latter relies on PKCS#7, X.509, ASN.1, OID registry, crypto_sig API, etc. in addition to the implementations of the actual signature algorithm (RSA / ECDSA / ML-DSA) and at least one hash algorithm.
In Debian this month,
Lucas Nussbaum announced Debaudit, a “new service to verify the reproducibility of Debian source packages”:
debaudit complements the work of the Reproducible Builds project. While reproduce.debian.net focuses on ensuring that binary packages can be bit-for-bit reproduced from their source packages, debaudit focuses on the preceding step: ensuring that the source package itself is a faithful and reproducible representation of its upstream source or Vcs-Git repository.
kpcyrd filed a bug against the librust-const-random-dev package reporting that the compile-time-rng feature of the ahash crate uses the const-random crate in turn, which uses a macro to read/generate a random number generator during the build. This issue was also filed upstream.
60 reviews of Debian packages were added, 4 were updated and 16 were removed this month, adding to our knowledge about identified issues. One new issue type was added: pkgjs_lock_json_file_issue.
Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes, including preparing and uploading versions 314 and 315 to Debian.
Chris Lamb:
Jelle van der Waa:
Michael R. Crusoe:
In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 315.
rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there; it powers, amongst other things, reproduce.debian.net.
A new version, 0.26.0, was released this month, with the following improvements:
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Bernhard M. Wiedemann:
- minify (rust random HashMap) / (alternative by kpcyrd)
- rpm-config-SUSE (toolchain)

Chris Lamb:

- python-nxtomomill
- dh-fortran
- python-discovery
- kanboard
- moltemplate
- stacer
- libcupsfilters
- django-ninja
- python-agate
- aetos
- python-bayespy

kpcyrd:
Once again, there were a number of improvements made to our website this month including:
kpcyrd:
Robin Candau:
Timo Pohl:
Marc Ohm, Timo Pohl, Ben Swierzy and Michael Meier published a paper on the threat of cache poisoning in the Python ecosystem:
Attacks on software supply chains are on the rise, and attackers are becoming increasingly creative in how they inject malicious code into software components. This paper is the first to investigate Python cache poisoning, which manipulates bytecode cache files to execute malicious code without altering the human-readable source code. We demonstrate a proof of concept, showing that an attacker can inject malicious bytecode into a cache file without failing the Python interpreter’s integrity checks. In a large-scale analysis of the Python Package Index, we find that about 12,500 packages are distributed with cache files. Through manual investigation of cache files that cannot be reproduced automatically from the corresponding source files, we identify classes of reasons for irreproducibility to locate malicious cache files. While we did not identify any malware leveraging this attack vector, we demonstrate that several widespread package managers are vulnerable to such attacks.
A PDF of the paper is available online.
Mario Lins of the University of Linz, Austria, has published their doctoral thesis on the topic of software supply chain transparency:
We begin by examining threats to the software distribution stage — the point at which artifacts (e.g., mobile apps) are delivered to end users — with an emphasis on mobile ecosystems [and] we next focus on the operating system on mobile devices, with an emphasis on mitigating bootloader-targeted attacks. We demonstrate how to compensate lost security guarantees on devices with an unlocked bootloader. This allows users to flash custom operating systems on devices that no longer receive security updates from the original manufacturer without compromising security. We then move to the source code stage. [Also,] we introduce a new architecture to ensure strong source-to-binary correspondence by leveraging the security guarantees of Confidential Computing technology. Finally, we present The Supply Chain Game, an organizational security approach that enhances standard risk-management methods. We demonstrate how game-theoretic techniques, combined with common risk management practices, can derive new criteria to better support decision makers.
A PDF of the thesis is available online.
On our mailing list this month:
Holger Levsen announced that this year’s Reproducible Builds summit will almost certainly be held in Gothenburg, Sweden, from September 22 until 24, followed by two days of hacking. However, these dates are preliminary and not 100% final — an official announcement is forthcoming.
Mark Wielaard posted to our list asking a question on the difference between debugedit and relative debug paths based on a comment on the Build path page: “Have people tried more modern versions of debugedit to get deterministic (absolute) DWARF paths and found issues with it?”
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:
IRC: #reproducible-builds
on irc.oftc.net.
Mastodon: @reproducible_builds@fosstodon.org
Mailing list:
rb-general@lists.reproducible-builds.org
Reproducible Builds (diffoscope): diffoscope 317 released [Planet Debian]
The diffoscope maintainers are pleased to announce the release of diffoscope version 317. This version includes the following changes:
[ Chris Lamb ]
* Limit python3-guestfs Build-Dependency to !i386. (Closes: #1132974)
* Try to fix PYPI_ID_TOKEN debugging.
[ Holger Levsen ]
* Add ppc64el to the list of architectures for python3-guestfs.
You can find out more by visiting the project homepage.
Jamie McClelland: AI Hacking the Planet [Planet Debian]
A colleague asked me if we should move all our money to our pillow cases after reading the latest AI editorial from Thomas Friedman. The article reads like a press release from Anthropic, repeating the claim that their latest AI model is so good at finding software vulnerabilities that it is a danger to the world.
I think I now know what it’s like to be a doctor who is forced to watch Grey’s Anatomy.
By now every journalist should be able to recognize the AI publicity playbook:
Step 1: Start with a wildly unsubstantiated claim about how dangerous your product is:
AI will cause human extinction before we have a chance to colonize Mars (remember that one? Even Kim Stanley Robinson, author of perhaps the most compelling science fiction on colonizing Mars, calls bullshit on it).
AI will eliminate all of our jobs (this one was extremely effective at providing cover for software companies laying off staff, but it has quickly dawned on people that the companies that did this are living in chaos, not humming along happily with functional robots).
AI will discover massive software vulnerabilities allowing bad actors to “hack pretty much every major software system in the world”. (Did Friedman pull that directly from Anthropic’s press release or was that his contribution?)
Step 2: To help stave off human collapse, only release the new version to a vetted group of software companies and developers, preferably ones with big social media followings
Step 3: Wait for the limited-release developers to spew unbridled enthusiasm and shocking examples that seem to suggest this new AI product is truly unbelievable
Step 4: Watch stock prices and valuations soar
Step 5: Release to the world, and experience a steady stream of mockery as people discover how wrong you are
Step 6: Start over
Even if Friedman missed the textbook example of the playbook, I have to ask: if you think bad actors compromising software, resulting in massive losses of private data, major outages, and wasted resources, needs to be reported on, then where have you been for the last 10 years? This literally happens on a daily basis due to the fundamentally flawed way capitalism has been writing software, even before the invention of AI. A small part of me wonders: maybe AI writing software is not so bad, because how could it be any worse than it is now?
Also, let’s keep in mind that AI’s super ability at finding vulnerable software depends on having access to the software’s source code, which most companies keep locked up tight. That means the owners of the software can use AI to find vulnerabilities and fix them but bad actors can’t.
Oh, but wait, what if a company is so incompetent that they accidentally release their proprietary software to the Internet?
Surely that would allow AI bots to discover their vulnerabilities and destroy the company, right? I’m not sure if anyone has discovered world-ending vulnerabilities in Anthropic’s Claude Code since it was accidentally released, but it is fun to watch people mock software that is clearly written by AI (and spoiler alert, it seems way worse than software written now).
Well… we probably should all be keeping our money in a pillow case anyway.
A Whole Lotta Tussle Goin’ On [Whatever]


For a time there Smudge was our only boy cat and that meant that he wasn’t able to indulge in one of his favorite pastimes, which was tusslin’. He’d tussle with Zeus, our other male tuxedo (just as Zeus would tussle with Lopsided Cat, our previous male cat), but when Zeus passed on he no longer had a tusslin’ partner. Sugar and Spice were simply Not Having It, as far as tussles went. Smudge would tussle a bit with Charlie, but Charlie is a dog and roughly eight times the mass. It was an asymmetrical sort of tussle, and those are not as fun.
The good news for Smudge is now Saja is here, and Saja loves him a tussle or two. Or three! Or five! We will frequently find the two of them smacking each other about for fun and exercise. The two seem genuinely happy to wrestle on the carpet or otherwise pounce on the other for a couple of minutes. Sugar and Spice are still having none of it from either of them, so this is the best solution for both. And as an observer and appreciator of brief moments of domestic chaos, it’s nice to have the occasional tussle back in the house. Here’s hoping both of them have a long and happy time to tussle together.
— JS
FreeBSD works best on one of these laptops [OSnews]
If you want to run FreeBSD on a laptop, you’re often yanked back to the Linux world of 20 years ago, with many components and parts not working and other issues such as sleep and wake problems. FreeBSD has been hard at work improving the experience of using FreeBSD on laptops, and now this has resulted in a list of laptops which work effortlessly with the venerable operating system.
There are only about 10 laptops on the list so far, but they do span a range of affordability and age, with some of them surely being quite decent bargains on eBay or whatever other used-stuff marketplace you use. If you want to use FreeBSD on a laptop, but don’t want to face any surprises or do any difficult setup, get one of the laptops on this list – a list which will surely expand over time.
Agents don’t know what good looks like. And that’s exactly the problem. [Radar]
Luca Mezzalira, author of Building Micro-Frontends, originally shared the following article on LinkedIn. It’s being republished here with his permission.
Every few years, something arrives that promises to change how we build software. And every few years, the industry splits predictably: One half declares the old rules dead; the other half folds its arms and waits for the hype to pass. Both camps are usually wrong, and both camps are usually loud. What’s rarer, and more useful, is someone standing in the middle of that noise and asking the structural questions: Not “What can this do?” but “What does it mean for how we design systems?”
That’s what Neal Ford and Sam Newman did in their recent fireside chat on agentic AI and software architecture during O’Reilly’s Software Architecture Superstream. It’s a conversation worth pulling apart carefully, because some of what they surface is more uncomfortable than it first appears.
Neal opens with the Dreyfus model of skill acquisition, originally developed for the nursing profession but applicable to any domain. The model maps learning across five stages: novice, advanced beginner, competent, proficient, and expert.
His claim is that current agentic AI is stuck somewhere between novice and advanced beginner: It can follow recipes, it can even apply recipes from adjacent domains when it gets stuck, but it doesn’t understand why any of those recipes work. This isn’t a minor limitation. It’s structural.
The canonical example Neal gives is beautiful in its simplicity: An agent tasked with making all tests pass encounters a failing unit test. One perfectly valid way to make a failing test pass is to replace its assertion with assert True. That’s not a hack in the agent’s mind. It’s a solution. There’s no ethical framework, no professional judgment, no instinct that says this isn’t what we meant. Sam extends this immediately with something he’d literally seen shared on LinkedIn that week: an agent that had modified the build file to silently ignore failed steps rather than fix them. The build passed. The problem remained. Congratulations all-round.
What’s interesting here is that neither Ford nor Newman is being dismissive of AI capability. The point is more subtle: The creativity that makes these agents genuinely useful, their ability to search solution space in ways humans wouldn’t think to, is inseparable from the same property that makes them dangerous. You can’t fully lobotomize the improvisation without destroying the value. This is a design constraint, not a bug to be patched.
And when you zoom out, this is part of a broader signal. When experienced practitioners who’ve spent decades in this industry independently converge on calls for restraint and rigor rather than acceleration, that convergence is worth paying attention to. It’s not pessimism. It’s pattern recognition from people who’ve lived through enough cycles to know what the warning signs look like.
One of the most important things Neal says, and I think it gets lost in the overall density of the conversation, is the distinction between behavioral verification and capability verification.
Behavioral verification is what most teams default to: unit tests, functional tests, integration tests. Does the code do what it’s supposed to do according to the spec? This is the natural fit for agentic tooling, because agents are actually getting pretty good at implementing behavior against specs. Give an agent a well-defined interface contract and a clear set of acceptance criteria, and it will produce something that broadly satisfies them. This is real progress.
Capability verification is harder. Much harder. Does the system exhibit the operational qualities it needs to exhibit at scale? Is it properly decoupled? Is the security model sound? What happens at 20,000 requests per second? Does it fail gracefully or catastrophically? These are things that most human developers struggle with too, and agents have been trained on human-generated code, which means they’ve inherited our failure modes as well as our successes.
This brings me to something Birgitta Boeckeler raised at QCon London that I haven’t been able to stop thinking about. The example everyone cites when making the case for AI’s coding capability is that Anthropic built a C compiler from scratch using agents. Impressive. But here’s the thing: C compiler documentation is extraordinarily well-specified and battle-tested over decades, and the test coverage for compiler behavior is some of the most rigorous in the entire software industry. That’s as close to a solved, well-bounded problem as you can get.
Enterprise software is almost never like that. Enterprise software is ambiguous requirements, undocumented assumptions, tacit knowledge living in the heads of people who left three years ago, and test coverage that exists more as aspiration than reality. The gap between “can build a C compiler” and “can reliably modernize a legacy ERP” is not a gap of raw capability. It’s a gap of specification quality and domain legibility. That distinction matters enormously for how we think about where agentic tooling can safely operate.
The current orthodoxy in agentic development is to throw more context at the problem: elaborate context files, architecture decision records, guidelines, rules about what not to do. Ford and Newman are appropriately skeptical. Sam makes the point that there’s now empirical evidence suggesting that as context file size increases, you see degradation in output quality, not improvement. You’re not guiding the agent toward better judgment. You’re just accumulating scar tissue from previous disasters. This isn’t unique to agentic workflows either. Anyone who has worked seriously with code assistants knows that summarization quality degrades as context grows, and that this degradation is only partially controllable. That has a direct impact on decisions made over time; now close your eyes for a moment and imagine doing it across an enterprise codebase, with many teams across different time zones. Don’t get me wrong, the tools help, but the help is bounded, and that boundary is often closer than we’d like to admit.
The more honest framing, which Neal alludes to, is that we need deterministic guardrails around nondeterministic agents. Not more prompting. Architectural fitness functions, an idea Ford and Rebecca Parsons have been promoting since 2017, feel like they’re finally about to have their moment, precisely because the cost of not having them is now immediately visible.
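To make the idea concrete, here is one hedged sketch of what such a deterministic guardrail might look like: an ordinary test that statically rejects forbidden imports, failing the build the same way every time regardless of whether a human or an agent wrote the code. The module names and rules are hypothetical, not taken from Ford and Parsons' material:

```python
# Sketch of an architectural fitness function: a deterministic check on a
# nondeterministic process. The layering rule below is invented for
# illustration ("domain must not depend on the web layer").
import ast

FORBIDDEN = {
    "myapp.domain": {"myapp.web", "flask"},  # hypothetical rule set
}


def imported_modules(source: str) -> set[str]:
    """Collect every module name imported by a piece of Python source."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module)
    return found


def check_dependencies(module_name: str, source: str) -> list[str]:
    """Return the imports in `source` that violate the rules for `module_name`."""
    banned = FORBIDDEN.get(module_name, set())
    return sorted(
        imp for imp in imported_modules(source)
        if any(imp == b or imp.startswith(b + ".") for b in banned)
    )
```

Wired into CI as a failing test, a check like this doesn't require the agent to have judgment; it simply makes one architectural property non-negotiable, which is the whole appeal of the fitness-function approach here.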
This is where the conversation gets most interesting, and where I think the field is most confused.
There’s a seductive logic to the microservice as the unit of agentic regeneration. It sounds small. The word micro is in the name. You can imagine handing an agent a service with a defined API contract and saying: implement this, test it, done. The scope feels manageable.
Ford and Newman give this idea fair credit, but they’re also honest about the gap. The microservice level is attractive architecturally because it comes with an implied boundary: a process boundary, a deployment boundary, often a data boundary. You can put fitness functions around it. You can say this service must handle X load, maintain Y error rate, expose Z interface. In theory.
In practice, we barely enforce this stuff ourselves. The agents have learned from a corpus of human-written microservices, which means they’ve learned from the vast majority of microservices that were written without proper decoupling, without real resilience thinking, without any rigorous capacity planning. They don’t have our aspirations. They have our habits.
The deeper problem, which Neal raises and which I think deserves more attention than it gets, is transactional coupling. You can design five beautifully bounded services and still produce an architectural disaster if the workflow that ties them together isn’t thought through. Sagas, event choreography, compensation logic: This is the stuff that breaks real systems, and it’s also the stuff that’s hardest to specify, hardest to test, and hardest for an agent to reason about. We made exactly this mistake in the SOA era. We designed lovely little services and then discovered that the interesting complexity had simply migrated into the integration layer, which nobody owned and nobody tested.
Sam’s line here is worth quoting directly, roughly: “To err is human, but it takes a computer to really screw things up.” I suspect we’re going to produce some genuinely legendary transaction management disasters before the field develops the muscle memory to avoid them.
There’s a dimension to this conversation that Ford and Newman gesture toward but that I think deserves much more direct examination: the question of what happens to the humans on the other side of this generated software.
It’s not completely accurate to say that all agentic work is happening on greenfield projects. There are tools already in production helping teams migrate legacy ERPs, modernize old codebases, and tackle the modernization challenge that has defeated conventional approaches for years. That’s real, and it matters.
But the challenge in those cases isn’t merely the code. It’s whether the sociotechnical system, the teams, the processes, the engineering culture, the organizational structures built around the existing software are ready to inherit what gets built. And here’s the thing: Even if agents combined with deterministic guardrails could produce a well-structured microservice architecture or a clean modular monolith in a fraction of the time it would take a human team, that architectural output doesn’t automatically come with organizational readiness. The system can arrive before the people are prepared to own it.
One of the underappreciated functions of iterative migration, the incremental strangler fig approach, the slow decomposition of a monolith over 18 months, is not primarily risk reduction, though it does that too. It’s learning. It’s the process by which a team internalizes a new way of working, makes mistakes in a bounded context, recovers, and builds the judgment that lets them operate confidently in the new world. Compress that journey too aggressively and you can end up with architecture whose operational complexity exceeds the organizational capacity to manage it. That gap tends to be expensive.
At QCon London, I asked Patrick Debois, after a talk covering best practices for AI-assisted development, whether applying all of those practices consistently would make him comfortable working on enterprise software with real complexity. His answer was: It depends. That felt like the honest answer. The tooling is improving. Whether the humans around it are keeping pace is a separate question, and one the industry is not spending nearly enough time on.
Ford and Newman close with a subject that almost never gets covered in these conversations: the vast, unglamorous majority of software that already exists and that our society depends on in ways that are easy to underestimate.
Most of the discourse around agentic AI and software development is implicitly greenfield. It assumes you’re starting fresh, that you get to design your architecture sensibly from the beginning, that you have clean APIs and tidy service boundaries. The reality is that most valuable software in the world was written before any of this existed, runs on platforms and languages that aren’t the natural habitat of modern AI tooling, and contains decades of accumulated decisions that nobody fully understands anymore.
Sam is working on a book about this: how to adapt existing architectures to enable AI-driven functionality in ways that are actually safe. He makes the interesting point that existing systems, despite their reputation, sometimes give you a head start. A well-structured relational schema carries implicit meaning about data ownership and referential integrity that an agent can actually reason from. There’s structure there, if you know how to read it.
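A tiny illustration of Sam's point, using sqlite3 from the Python standard library: the table and column names below are invented, but the foreign-key metadata they declare is machine-readable, which is what gives an agent (or a human) something concrete to reason from.

```python
# A relational schema encodes relationships that can be recovered
# mechanically from the catalog. Table/column names are invented
# for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id)
    );
""")

# PRAGMA foreign_key_list exposes the declared relationship:
# orders.customer_id -> customers.id
fk = conn.execute("PRAGMA foreign_key_list(orders)").fetchone()
# Row layout: (id, seq, table, from, to, on_update, on_delete, match)
print(fk[2], fk[3], fk[4])  # customers customer_id id
```

The same query run against a decades-old schema recovers ownership and integrity facts that may exist nowhere in the documentation.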
The general lesson, which he states without much drama, is that you can’t just expose an existing system through an MCP server and call it done. The interface is not the architecture. The risks around security, data exposure, and vendor dependency don’t go away because you’ve wrapped something in a new protocol.
This matters more than it might seem, because the software that runs our financial systems, our healthcare infrastructure, our logistics and supply chains, is not greenfield and never will be. If we get the modernization of those systems wrong, the consequences are not abstract. They are social. The instinct to index heavily on what these tools can do in ideal conditions, on well-specified problems with good documentation and thorough test coverage, is understandable. But it’s exactly the wrong instinct when the systems in question are the ones our lives depend on. The architectural mindset that has served us well through previous paradigm shifts, the one that starts with trade-offs rather than capabilities, that asks what we are giving up rather than just what we are gaining, is not optional here. It’s the minimum requirement for doing this responsibly.
Three things, mostly.
The first is that introducing deterministic guardrails into nondeterministic systems is not optional. It’s imperative. We are still figuring out exactly where and how, but the framing needs to shift: The goal is control over outcomes, not just oversight of output. There’s a difference. Output is what the agent generates. Outcome is whether the system it generates actually behaves correctly under production conditions, stays within architectural boundaries, and remains operable by the humans responsible for it. Fitness functions, capability tests, boundary definitions: the boring infrastructure that connects generated code to the real constraints of the world it runs in. We’ve had the tools to build this for years.
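As one hedged sketch of what such a guardrail can look like in practice: a fitness function that fails whenever code, generated or not, crosses a declared module boundary. The module names and the rule are hypothetical; the point is that the check is deterministic and can run on every change.

```python
# A minimal architectural fitness function: an executable check that
# a dependency rule holds. The rule here (invented for illustration)
# is that the "billing" module may not import from "shipping".
import ast

FORBIDDEN = {("billing", "shipping")}

def imported_modules(source):
    """Top-level module names imported by a piece of Python source."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

def violations(module_sources):
    """module_sources: {module_name: source_code}. Returns rule breaches."""
    found = []
    for owner, src in module_sources.items():
        for dep in imported_modules(src):
            if (owner, dep) in FORBIDDEN:
                found.append((owner, dep))
    return found

# A change that sneaks in a forbidden dependency fails the check:
sources = {
    "billing": "import shipping\n",
    "shipping": "import logging\n",
}
print(violations(sources))  # [('billing', 'shipping')]
```

Wired into CI, a check like this turns an architectural boundary from a diagram into an outcome: the build fails, regardless of who or what wrote the offending code.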
The second is that the people saying this is the future and the people saying this is just another hype cycle are both probably wrong in interesting ways. Ford and Newman are careful to say they don’t know what good looks like yet. Neither do I. But we have better prior art to draw on than the discourse usually acknowledges. The principles that made microservices work, when they worked (real decoupling, explicit contracts, operational ownership), apply here too. The principles that made microservices fail (leaky abstractions, distributed transactions handled badly, complexity migrating into integration layers) will cause exactly the same failures, just faster and at larger scale.
The third is something I took away from QCon London this year, and I think it might be the most important of the three. Across two days of talks, including sessions that took diametrically opposite approaches to integrating AI into the software development lifecycle, one thing became clear: We are all beginners. Not in the dismissive sense but in the most literal application of the Dreyfus model. Nobody, regardless of experience, has figured out the right way to fit these tools inside a sociotechnical system. The recipes are still being written. The war stories that will eventually become the prior art are still happening to us right now.
What got us here, collectively, was sharing what we saw, what worked, what failed, and why. That’s how the field moved from SOA disasters to microservices best practices. That’s how we built a shared vocabulary around fitness functions and evolutionary architecture. The same process has to happen again, and it will, but only if people with real experience are honest about the uncertainty rather than performing confidence they don’t have. The speed, ultimately, is both the opportunity and the danger. The technology is moving faster than the organizations, the teams, and the professional instincts that need to absorb it. The best response to that isn’t to pretend otherwise. It’s to keep comparing notes.
If this resonated, the full fireside chat between Neal Ford and Sam Newman is worth watching in its entirety. They cover more ground than I’ve had space to react to here. And if you’d like to learn more from Neal, Sam, and Luca, check out their most recent O’Reilly books: Building Resilient Distributed Systems, Architecture as Code, and Building Micro-frontends, second edition.
We Need You: Our Privacy Cannot Afford a Clean Extension of Section 702 [Deeplinks]
We go through this every couple of years: Section 702 of the Foreign Intelligence Surveillance Act (FISA), under which the government collects Americans’ communications with foreign persons overseas, is up for renewal. As always, Congress can reauthorize it with or without changes, or just let it expire. We know, we know, it’s a pain to have to do this every few years, but it gives us a chance to lift the hood of this behemoth tool of government surveillance and tinker with how it works. That’s why it’s so important right now to urge your Member of Congress not to pass any bill that reauthorizes Section 702 without substantial reforms.
TELL congress: 702 Needs Reform
Section 702 is rife with problems, loopholes, and compliance issues that need fixing. The National Security Agency (NSA) collects full conversations conducted by surveillance targets overseas and stores them, allowing the Federal Bureau of Investigation (FBI) to operate in a “finders keepers” mode of surveillance: the reasoning goes that the data is already collected, so why shouldn’t agents look at it? The FBI can then query, and even read, the U.S. side of those communications without a warrant. The problem is that people who have been spied on under this program won’t even know, and have very few ways of finding out. EFF and other civil liberties advocates have spent years fighting for people to be told when data collected through Section 702 is used as evidence against them.
There’s simply no excuse for any Member of Congress to support a "clean" reauthorization of Section 702. Anyone who votes to do so does not take your privacy seriously. Full stop.
The intelligence community and its defenders in Congress, as always, seem more interested in defending their rights to read your private communications than in protecting your right to privacy. It’s not really a compromise between safety and privacy if it's always your privacy that gets sacrificed. Now, we’re drawing a line in the sand: Congress cannot pass a clean extension.
Use this EFF tool to write to your Member of Congress and tell them not to pass a clean reauthorization of Section 702.
TELL congress: 702 Needs Reform
[$] Removing read-only transparent huge pages for the page cache [LWN.net]
Things do not always go the way kernel developers think they will. When the kernel gained support for the creation of read-only transparent huge pages for the page cache in 2019, the developer of that feature, Song Liu, added a Kconfig file entry promising that support for writable huge pages would arrive "in the next few release cycles". Over six years later, that promise is still present, but it will never be fulfilled. Instead, the read-only option will soon be removed, reflecting how the core of the memory subsystem has changed underneath this particular feature.
Fixing AMDGPU’s VRAM management for low-end GPUs [OSnews]
It may sound unbelievable to some, but not everyone has a datacenter beast with 128GB of VRAM shoved in their desktop PCs. Around the world people tell the tale of a particularly fierce group of Linux gamers: Those who dare attempt to play games with only 8 gigabytes of VRAM, or even less. Truly, it takes exceedingly strong resilience and determination to face the stutters and slowdowns bound to occur when the system starts running low on free VRAM. Carnage erupts inside the kernel driver as every application fights for as much GPU memory as it can hold on to. Any game caught up in this battle for resources will surely not leave unscathed.
That is, until now. Because I fixed it.
↫ Natalie Vock
The solution is to use cgroups to control the kernel’s memory eviction policies, so that applications that should get priority when it comes to VRAM allocation – like games – don’t get their memory evicted from VRAM to system RAM. Basically, evict everything else from VRAM before touching the protected application. This way, something like a game will have much more consistent access to more VRAM, thereby reducing needless memory evictions that harm performance.
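For flavor only, here is roughly what that could look like from userspace with the cgroup-v2 device-memory ("dmem") controller. Treat everything below as an assumption-laden sketch: the region name, the byte value, and the exact write syntax vary by kernel version and hardware, so check your kernel's cgroup-v2 documentation before trying it.

```shell
# Sketch only: protecting a game's VRAM with the cgroup-v2 "dmem"
# controller. Region name and write syntax below are assumptions.

# Create a cgroup for the game and enable the dmem controller.
mkdir -p /sys/fs/cgroup/game
echo "+dmem" > /sys/fs/cgroup/cgroup.subtree_control

# Reserve ~6 GiB of VRAM for this cgroup: when the device runs low,
# buffers belonging to other cgroups are evicted first.
# "drm/card0/vram0" is a hypothetical region name.
echo "drm/card0/vram0 6442450944" > /sys/fs/cgroup/game/dmem.min

# Launch the game inside the protected cgroup.
echo $$ > /sys/fs/cgroup/game/cgroup.procs
exec ./the-game   # hypothetical binary
```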
It’s a clever solution that makes use of a ton of existing Linux tools, meaning it’s also much easier to upstream, implement, and support. Excellent work.
Security updates for Friday [LWN.net]
Security updates have been issued by AlmaLinux (container-tools:rhel8, fontforge, freerdp, go-toolset:rhel8, gstreamer1-plugins-bad-free, gstreamer1-plugins-base, and gstreamer1-plugins-good, kernel, kernel-rt, libtasn1, mariadb:10.11, mysql:8.4, nginx:1.24, openssh, pcs, python-jinja2, python3.9, ruby:3.1, vim, virt:rhel and virt-devel:rhel, and xmlrpc-c), Debian (libyaml-syck-perl and openssh), Fedora (cockpit, crun, dnsdist, doctl, fido-device-onboard, libcgif, libpng12, libpng15, mbedtls, opensc, and util-linux), Red Hat (git-lfs, go-toolset:rhel8, grafana, grafana-pcp, and rhc), Slackware (libpng), SUSE (389-ds, aws-c-event-stream, bind, cockpit, cockpit-repos, corepack24, dcmtk, dnsdist, docker-compose, expat, firefox, firefox-esr, gnome-online-accounts, gvfs, gnutls, jupyter-jupyterlab-templates, kea, libIex-3_4-33, libpng16, mapserver, perl-XML-Parser, postgresql13, postgresql16, python-Pillow, python311-lupa, thunderbird, tigervnc, and tomcat10), and Ubuntu (linux-azure-fips, linux-hwe, linux-intel-iot-realtime, linux-nvidia-tegra-5.15, openssl, openssl1.0, and python-django).
Error'd: Youth is Wasted on the Junge [The Daily WTF]
"My thoughts exactly" muttered Jason H. "I was in a system that avoids check constraints and the developers never seemed to agree to a T/F or Y/N or 1/0 for indicator columns. All data in a column will use the same pattern but different columns in the same table will use different patterns so I'm not sure why I was surprised when I came across the attached. Sort the data descending and you have the shorthand for what I uttered." How are these all unique?
"I'd better act quickly!" Hugh Scenic almost panicked. "This Microsoft Rewards offer might expire (in just under 74 years)!"
"Copy-copy-copy" repeated Gordon. "Not sure I want the team to be in touch - my query might be best left unanswered."
"Was Comcast's episode guide data hacked by MAGA?" Barry M. wondered. "This is not the usual generic description of Real Time."
"Holiday Workshop for Children learning how to write web pages, apparently," notes self-named Youth P. "You need a new category - because it is no error to involve young people in a web design workshop during their holidays. A little bit of a surprise was that it will happen in a local museum, and that children between 8 and 12 are the target audience - should they really already think about their work future?"
Sen. Sanders Talks to Claude About AI and Privacy [Schneier on Security]
Claude is actually pretty good on the issues.
Pluralistic: Canny Valley and Creative Commons (10 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]

Last year, I ran a wildly successful Kickstarter campaign to pre-sell my ebooks, audiobooks and hardcovers of my book Enshittification, which went on to be an international bestseller, selling out 10 printings in the first 11 weeks:
https://www.kickstarter.com/projects/doctorow/enshittification-the-drm-free-audiobook
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2026/04/10/canny-valley#limited-edition

I've done many of these Kickstarter campaigns now, and I always try to come up with something special for backers – some limited edition book or tchotchke that lets me scratch my own itch for making beautiful physical things, and also lets a few backers splash out on a truly special item. I've come up with some doozies, like:
https://www.kickstarter.com/projects/doctorow/enshittification-the-drm-free-audiobook/rewards
I put 100 copies of Canny Valley up for sale in the Enshittification Kickstarter and all of them sold out in a matter of days. However, as promised at the time, there is a second chance to get a copy of the book, through the Creative Commons 25th anniversary fundraiser, which has just kicked off:
https://mailchi.mp/creativecommons/were-turning-25-book-giveaway
The whole print run for Canny Valley was limited to 500 copies, and it is the only run I will do for the book. 100 copies were sold to Kickstarter backers, I kept 25 for myself, and the remaining 375 are now available as a thank-you gift for people who make tax-deductible gifts to CC.
I have been a great supporter of Creative Commons since its inception – literally, I was around when Aaron Swartz, Matt Haughey and Lisa Rein worked with Larry Lessig to design the data scheme and user interface to create, use and re-use Creative Commons licenses. My debut novel, Down and Out in the Magic Kingdom, was the first book ever released under a CC license:
https://craphound.com/down/download
Creative Commons arose out of the copyright wars of the early 2000s, in which the severe deficiencies of using copyright as the primary form of internet regulation were becoming ever clearer. Then – as now – the internet was filling up with material that everyday people produced together, incorporating one another's work, as well as popular works that had meaning to them. Virtually all of this material violated copyright law, and bringing it into compliance would cost hundreds of billions of dollars in billable lawyer hours to draft, negotiate and sign all the licenses needed to avoid both criminal and civil liability.
That's where CC came in: a team of international lawyers standardized a set of legal licenses that did something new and necessary: facilitated sharing and remix, rather than restricting them. Simply apply a CC license to your work – say, a Wikipedia contribution, a Flickr photo, or a story on AO3 – and others would be able to reproduce, adapt and recombine that work with other CC licensed works. What's more, thanks to the heroic efforts of the international CC team, these licenses were able to span borders, languages and legal systems, meaning that a Japanese animator can create a short based on a French story, using Australian 3D assets and a Croatian soundtrack:
https://creativecommons.org/licenses/list.en
It's hard to overstate what a heroic feat of lawyering this is. Making a set of documents that allows creativity to spread freely across 45+ (often very different) legal systems is arguably the most ambitious piece of applied IP legal research ever undertaken. Today, tens of billions of works are CC licensed, including (to name just one example), all of Wikipedia.
I rely heavily on CC licensed works to make the images that run over my posts on Pluralistic, my CC-licensed newsletter. I combine these with public domain images in the GIMP (a powerful free/open Photoshop replacement that runs on GNU/Linux, macOS and Windows) to make my collages, which you can download in high-rez (and freely re-use, thanks to the CC licenses I apply to each of them) from this Flickr set of 350+ items:
https://www.flickr.com/photos/doctorow/albums/72177720316719208?sd
Canny Valley collects 80 of my favorite collages in a beautiful book that was printed on 100lb Mohawk paper on an Indigo digital offset printer and bound with PVA glue that will last a century, at Pasadena's Typecraft, a family-owned print shop that's been in business for more than 100 years:
https://www.typecraft.com/live2/who-we-are.html
It was designed by the type legend John D Berry.
And the introduction was written by my friend and mentor, the cyberpunk pioneer and digital art impresario Bruce Sterling:
https://en.wikipedia.org/wiki/Bruce_Sterling

I published a long post that explained my creative process last year, including Bruce's intro (which is also CC licensed). I'm going to reproduce Bruce's intro below, but you can read the whole post here:
https://pluralistic.net/2025/09/04/illustrious/#chairman-bruce
I love these little books and I love that there's a chance for a few more people to lay hands on their own – and I especially love that this will support Creative Commons, an organization that produces digital public goods for a new, good internet:
https://mailchi.mp/creativecommons/were-turning-25-book-giveaway
==
INTRODUCTION
by Bruce Sterling
In 1970 a robotics professor named Masahiro Mori discovered a new problem in aesthetics. He called this "bukimi no tani genshō."
The Japanese robots he built were functional, so the "bukimi no tani" situation was not an engineering problem. It was a deep and basic problem in the human perception of humanlike androids.
Humble assembly robots, with their claws and swivels, those looked okay to most people. Dolls, puppets and mannequins, those also looked okay.
Living people had always aesthetically looked okay to people. Especially, the pretty ones.
However, between these two realms that the late Dr Mori was gamely attempting to weld together — the world of living mankind and of the pseudo-man-like machine — there was an artistic crevasse. Anything in this "Uncanny Valley" looked, and felt, severely not-okay. These overdressed robots looked and felt so eerie that their creator's skills became actively disgusting. The robots got prettier, but only up to a steep verge. Then they slid down the precipice and became zombie doppelgangers.

That's also the issue with the aptly-titled "Canny Valley" art collection here. People already know how to react aesthetically to traditional graphic images. Diagrams are okay. Hand-drawn sketches and cartoons are also okay. Brush-made paintings are mostly fine. Photographs, those can get kind of dodgy.

Digital collages that slice up and weld highly disparate elements like diagrams, cartoons, sketches and also photos and paintings, those trend toward the uncanny.
The pixel-juggling means of digital image-manipulation are not art-traditional pencils or brushes. They do not involve the human hand, or maybe not even the human eye, or the human will. They're not fixed on paper or canvas; they're a Frankenstein mash-up landscape of tiny colored screen-dots where images can become so fried that they look and feel "cursed." They're conceptually gooey congelations, stuck in the valley mire of that which is and must be neither this-nor-that.

A modern digital artist has billions of jpegs in files, folders, clouds and buckets. He's never gonna run out of weightless grist from that mill.
Why would Cory Doctorow — novelist, journalist, activist, opinion columnist and so on — want to lift his typing fingers from his lettered keyboard, so as to create graphics with cut-and-paste and "lasso tools"?

I think there are two basic reasons for this.
The important motivation is his own need to express himself by some method other than words.
I'm reminded here of the example of H. G. Wells, another science fiction writer turned internationally famous political pundit. HG Wells was quite a tireless and ambitious writer — so much so that he almost matched the torrential output of Cory Doctorow.

But HG Wells nevertheless felt a compelling need to hand-draw cartoons. He called them "picshuas." These hundreds of "picshuas" were rarely made public. They were usually sketched in the margins of his hand-written letters. Commonly the picshuas were aimed at his second wife, the woman he had renamed "Jane." These picshuas were caricatures, or maybe rapid pen-and-ink conceptual outlines, of passing conflicts, events and situations in the life of Wells. They seemed to carry tender messages to Jane that the writer was unable or unwilling to speak aloud to her. Wells being Wells, there were always issues in his private life that might well pose a challenge to bluntly state aloud: "Oh by the way, darling, I've built a second house in the South of France where I spend my summers with a comely KGB asset, the Baroness Budberg." Even a famously glib and charming writer might feel the need to finesse that.

Cory Doctorow also has some remarkably tangled, scandalous and precarious issues to contemplate, summarize and discuss. They're not his scandalous private intrigues, though. Instead, they're scandalous public intrigues. Or, at least Cory struggles to rouse some public indignation about these intrigues, because his core topics are the tangled penthouse/slash/underground machinations of billionaire web moguls.
Cory really knows really a deep dank lot about this uncanny nexus of arcane situations. He explains the shameful disasters there, but they're difficult to capture without torrents of unwieldy tech jargon.

So instead, he diligently clips, cuts, pastes, lassos, collages and pastiches. He might, plausibly, hire a professional artist to design his editorial cartoons for him. However, then Cory would have to verbally explain all his political analysis to this innocent graphics guy. Then Cory would also have to double-check the results of the artist and fix the inevitable newbie errors and grave misunderstandings. That effort would be three times the labor for a dogged crusader who is already working like sixty.
It's more practical for him to mash-up images that resemble editorial cartoons.
He can't draw. Also, although he definitely has a pronounced sense of aesthetics, it's not an aesthetic most people would consider tasteful. Cory Doctorow, from his very youth, has always had a "craphound" aesthetic. As an aesthete, Cory is the kind of guy who would collect rain-drenched punk-band flyers that had fallen off telephone poles and store them inside a 1950s cardboard kid-cereal box. I am not scolding him for this. He's always been like that.

As Wells used to say about his unique "picshuas," they seemed like eccentric scribblings, but over the years, when massed-up as an oeuvre, they formed a comic burlesque of an actual life. Similarly, one isolated Doctorow collage can seem rather what-the-hell. It's trying to be "canny." If you get it, you get it. If you don't get the first one, then you can page through all of these, and at the end you will probably get it. En masse, it forms the comic burlesque of a digital left-wing cyberspatial world-of-hell. A monster-teeming Silicon Uncanny Valley of extensively raked muck.

There are a lot of web-comix people who like to make comic fun of the Internet, and to mock "the Industry." However, there's no other social and analytical record quite like this one. It has something of the dark affect of the hundred-year-old satirical Dada collages of Georg Scholz or Hannah Höch. Those Dada collages look dank and horrible because they're "Dada" and pulling a stunt. These images look dank and horrible because they're analytical, revelatory and make sense.
If you do not enjoy contemporary electronic politics, and instead you have somehow obtained an art degree, I might still be able to help you with my learned and well-meaning intro here. I can recommend a swell art-critical book titled "Memesthetics" by Valentina Tanni. I happen to know Dr. Tanni personally, and her book is the cat's pyjamas when it comes to semi-digital, semi-collage, appropriated, Situationiste-detournement, net.art "meme aesthetics." I promise that I could robotically mimic her, and write uncannily like her, if I somehow had to do that. I could even firmly link the graphic works of Cory Doctorow to the digital avant-garde and/or digital folk-art traditions that Valentina Tanni is eruditely and humanely discussing. Like with a lot of robots, the hard part would be getting me to stop.

Cory works with care on his political meme-cartoons — because he is using them to further his own personal analysis, and to personally convince himself. They're not merely sharp and partisan memes, there to rouse one distinct viewer-emotion and make one single point. They're like digital jigsaw-puzzle landscape-sketches — unstable, semi-stolen and digital, because the realm he portrays is itself also unstable, semi-stolen and digital. The cartoons are dirty and messy because the situations he tackles are so dirty and messy. That's the grain of his lampoon material, like the damaged amps in a punk song. A punk song that was licensed by some billionaire and then used to spy on hapless fans with surveillance-capitalism.

Since that's how it goes, that's also what you're in for. You have been warned, and these collages will warn you a whole lot more.
If you want to aesthetically experience some elegant, time-tested collage art that was created by a major world artist, then you should gaze in wonder at the Max Ernst masterpiece, "Une semaine de bonté" ("A Week of Kindness"). This indefinable "collage novel" aka "artist's book" was created in the troubled time of 1934. It's very uncanny rather than "canny," and it's also capital-A great Art. As an art critic, I could balloon this essay to dreadful robotic proportions while I explain to you in detail why this weirdo mess is a lasting monument to the expressive power of collage. However, Cory Doctorow is not doing Max Ernst's dreamy, oneiric, enchanting Surrealist art. He would never do that and it wouldn't make any sense if he did.

Cory did this instead. It is art, though. It is what it is, and there's nothing else like it. It's artistic expression as Cory Doctorow has a sincere need to perform that, and in twenty years it will be even more rare and interesting. It's journalism ahead of its time (a little) and with a passage of time, it will become testimonial.
Bruce Sterling — Ibiza MMXXV

Amazon Pulls Support for Perfectly Fine Older Kindles https://www.wired.com/story/amazon-pulls-support-for-perfectly-fine-older-kindles/
"Fahrenheit 11/9" – This is How Fascism Starts | Michael Moore https://www.youtube.com/watch?v=wfszwkoSdQA
Honda Puts Its Garage Door Opener Behind a Paywall https://www.youtube.com/watch?app=desktop&v=Gb2fEBW0oDA
Why Locus Matters More Than Ever https://reactormag.com/why-locus-matters-more-than-ever/
#20yrsago Al Franken wants a balanced war budget
#15yrsago Fake-make: counterfeit handmade objects from big manufacturers https://web.archive.org/web/20110410125346/http://blog.makezine.com/archive/2011/04/untouched-by-human-hands.html
#15yrsago Marketplace for hijacked computers https://krebsonsecurity.com/2011/04/is-your-computer-listed-for-rent/
#10yrsago Pope invites Bernie Sanders to Vatican to speak about “social, economic, and environmental” issues https://www.bbc.com/news/election-us-2016-35999269#sa-ns_mchannel=rss&ns_source=PublicRSS20-sa
#10yrsago Baby sues US government for searching his diapers in racial profiling/War on Terror case https://arstechnica.com/tech-policy/2016/04/baby-who-had-his-diapers-searched-at-airport-is-part-of-class-action-suit/
#10yrsago Tax investigators and bill collectors use Rich Kids of Instagram to uncover oligarchs’ hidden millions https://www.theguardian.com/technology/2016/apr/03/super-rich-discover-hidden-risks-instagram-yachts-jets
#10yrsago The international art market is a money laundry whose details are in the Panama Papers https://web.archive.org/web/20160408024110/https://fusion.net/story/288515/panama-papers-leak-art-market/
#10yrsago UK government warns people that copyright trolls are a scam https://torrentfreak.com/uk-govt-issues-advice-on-dealing-with-copyright-trolls-160408/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Torrentfreak+(Torrentfreak)
#10yrsago Why the rise of ransomware attacks should worry you https://arstechnica.com/information-technology/2016/04/ok-panic-newly-evolved-ransomware-is-bad-news-for-everyone/
#5yrsago Howard Dean's racist, genocidal pharma sellout https://pluralistic.net/2021/04/08/howard-dino/#the-scream
#1yrago We CAN have nice things https://pluralistic.net/2021/04/08/howard-dino/#payfors

Montreal: Drawn and Quarterly, Apr 10
https://mtl.drawnandquarterly.com/events/4863920260410
Toronto: DemocracyXchange, Apr 16
https://www.democracyxchange.org/news/cory-doctorow-to-open-dxc26-on-april-16
San Francisco: 2026 Berkeley Spring Forum on M&A and the Boardroom, Apr 23
https://www.theberkeleyforum.com/#agenda
London: Resisting Big Tech Empires (LSBU), Apr 25
https://www.tickettailor.com/events/globaljusticenow/2042691
NYC: Enshittification at Commonweal Ventures, Apr 29
https://luma.com/ssgfvqz8
NYC: Techidemic with Sarah Jeong, Tochi Onyibuchi and Alia Dastagir (PEN World Voices), Apr 30
https://worldvoices.pen.org/event/techidemic/
Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow
Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html
Hay-on-Wye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2
SXSW London, Jun 2
https://www.sxswlondon.com/session/how-big-tech-broke-the-internet-b3c4a901
Do you feel screwed over by big tech? (Ontario Today)
https://www.cbc.ca/listen/live-radio/1-45-ontario-today/clip/16203024-do-feel-screwed-big-tech
Launch for Cindy Cohn's "Privacy's Defender" (City Lights)
https://www.youtube.com/watch?v=WuVCm2PUalU
Chicken Mating Harnesses (This Week in Tech)
https://twit.tv/shows/this-week-in-tech/episodes/1074
The Virtual Jewel Box (U Utah)
https://tanner.utah.edu/podcast/enshittification-cory-doctorow-matthew-potolsky/
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026
"The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027
"The Memex Method," Farrar, Straus, Giroux, 2027
Today's top sources:
Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. First draft complete. Second draft underway.
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Bluesky (no ads, possible tracking and data-collection):
https://bsky.app/profile/doctorow.pluralistic.net
Medium (no ads, paywalled):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Creating the conditions for magic [Seth's Blog]
If you’re hoping for this meeting or this performance or this engagement to produce something extraordinary, why are you setting it up as if it’s ordinary?
The hard work of a brainstorming session, a pitch collaboration or a negotiation happens long before most people begin.
We hire architects to design expensive buildings, but we design expensive human interactions as an afterthought.
If it doesn’t feel like you’re putting a lot of effort into creating the conditions for magic, you’re probably not creating those conditions.
Sweating To The Oldies [Penny Arcade]
New Comic: Sweating To The Oldies
Girl Genius for Friday, April 10, 2026 [Girl Genius]
The Girl Genius comic for Friday, April 10, 2026 has been posted.
Waking Up, p08 [Ctrl+Alt+Del Comic]
The post Waking Up, p08 appeared first on Ctrl+Alt+Del Comic.
Russell Coker: HP Z640 and E5-2696 v4 [Planet Debian]
I recently decided to upgrade the CPU in my workstation, the E5-2696 v3 CPU was OK (passmark 2045 for single thread and 21,380 for multi thread) [1] but I felt like buying something better so I got a E5-2696 v4 (passmark 2115 and 24,643) [2]. I chose the E5-2696 v4 because I was looking for a E5-2699 v4 and found an ebay seller who had them at $140 but was offering the E5-2696 v4 for $99 and the passmark results for the two CPUs are almost identical.
After buying the CPU and waiting for it to be delivered I realised that the Z640 doesn’t include it in the list of supported CPUs and that the maximum TDP of any supported CPU is 145W while according to passmark it has a TDP of 150W. I looked for information about it on Intel ARK (the official site for specs of Intel CPUs) and discovered that “The Intel® Xeon® Processor E5-2696 v4 is designed to be used by system manufacturers (OEMs), and this means they can modify its specifications depending on the system where it will be implemented” and “The processor does not have an ARK page for this reason, since it has no standard specification from Intel, so depending on the original system, it is necessary to contact that system manufacturer for information” [3]. That’s the official response from an Intel employee saying that there are no standard specs for that CPU!!!
Somehow I had used a E5-2696 v3 for 3 years without realising that the same lack of support and specs applies to it [4]!
I installed the new CPU in another Z640 which had a E5-1620 v3 CPU and it worked. I was a little surprised to discover that the hole in the corner is in the bottom right (according to the alignment of the printed text on the top) for all my E5-26xx CPUs while it’s in the top left on the E5-1620 v3. Google searches for things like “e5-2600 e5-1600 difference” and “e5-2600 e5-1600 difference hole in corner” didn’t turn up any useful information. The best information I found was from the Linus Tech Tips forum which says that the hole is to allow gasses to escape when the CPU package is glued together [5] which implies (but doesn’t state) that the location of the hole has no meaning. I had previously thought that the hole was to indicate the location of “pin 1” and was surprised when the new CPU had the hole in the opposite corner. Hopefully in future when people have such concerns they can find this post and not be worried that they are about to destroy their CPU, PC, or both when upgrading the CPU.
The previous Z640 was one I bought from Facebook marketplace for $50 in “unknown condition” in the expectation that I would get at least $50 of parts but it worked perfectly apart from one DIMM socket. The Z640 I’m using now is one I bought from Facebook marketplace for $200 and it’s working perfectly with 4 DIMMs, 128G of RAM, and the E5-2696 v4 CPU. $300 for a workstation with ECC RAM and a 22 core CPU is good value for money!
There are some accounts of the E5-2696 v4 not working on white-box motherboards including a claim that when it was selling for $4000US someone’s motherboard destroyed one. The best plan for such CPUs is to google for someone who’s already got it working in the same machine, which means a name-brand server. That doesn’t guarantee that it will work (Intel refuses to supply specs and states that different items may work differently) but greatly improves the probability.
This system has the HP BIOS version 2.61, note that the Linux fwupd package doesn’t seem to update the BIOS on HP workstations so you need to manually download it and install it. There is a possibility that a Z640 with an older BIOS won’t work with this CPU.
Why do Macs ask you to press random keys when connecting a new keyboard? [OSnews]
You might have seen this, one of the strangest and most primitive experiences in macOS, where you’re asked to press keys next to left Shift and right Shift, whatever they might be.
Perhaps I can explain.
↫ Marcin Wichary
It seems pretty obvious to me that’s what it was for, but I guess many normal, regular people have never seen anything but one particular keyboard configuration (ANSI for Americans, ISO for some Europeans, etc.). Perhaps they don’t realise that not only are there ANSI keyboards with other layouts, but also entirely different keyboard configurations (mainly ISO and JIS).
Interestingly, my home country of The Netherlands uses a US English layout on an ANSI configuration, but of course, it’s the US International variant, either with deadkeys or using AltGr for the various accented/special characters we use. In my current country of residence, Sweden, they use this utterly wild and incomprehensible ISO layout where Shift unlocks characters on the bottom of keys, while AltGr unlocks characters at the top, the exact opposite of literally every other keyboard I’ve ever used (US Int’l, classic Dutch (no longer used), German, French, etc.). It’s utterly bizarre, but entirely normal to my Swedish wife.
We cannot use each other’s keyboards.
USB for software developers [OSnews]
This post aims to be a high level introduction to using USB for people who may not have worked with Hardware too much yet and just want to use the technology. There are amazing resources out there such as USB in a NutShell that go into a lot of detail about how USB precisely works (check them out if you want more information), they are however not really approachable for somebody who has never worked with USB before and doesn’t have a certain background in Hardware. You don’t need to be an Embedded Systems Engineer to use USB the same way you don’t need to be a Network Specialist to use Sockets and the Internet.
↫ Nik “WerWolv”
A bit of a generic title, but the article details how to write a USB driver.
Redox sees another month of improvements [OSnews]
The months keep coming, and thus, the monthly progress reports keep coming, too, for Redox, the new general purpose operating system written in Rust. This past month, there have been considerable graphics improvements, better deadlock detection in the kernel, improved Unicode support thanks to switching over to an ncurses library variant with Unicode support, and much more. Alongside these, you’ll find the usual long list of kernel, driver, and relibc changes, bugfixes, and improvements.
This month also covered three topics we’ve already discussed individually: Redox’ new no-“AI” code policy, capability-based security in Redox, and the brand-new CPU scheduler.
The Big Idea: Justin Feinstein [Whatever]

What are stories but information laid out before the reader? What if that information was conveyed through multi-media formats and told through emails, newsletters, and other digital means of communication? Author Justin Feinstein has brought us something truly unique in his new novel, Your Behavior Will Be Monitored. See how he twists traditional storytelling methods in his Big Idea.
JUSTIN FEINSTEIN:
I didn’t set out to write a novel told through “found” digital files; it happened organically.
My debut novel, Your Behavior Will Be Monitored, is composed of chat transcripts, emails, TED Talks, error messages, and other digital detritus from a near-future AI company. But it wasn’t the result of some grand epistolary vision – I just started writing a chat between an aging, jaded copywriter (i.e., me) and a hyper-intelligent bot he had been hired to teach the nuances of advertising. I didn’t even know what I was writing, maybe a script?
As the dialogue evolved beyond consumer motivation and taglines, and into larger issues like sentience and purpose, I realized I had a larger story on my hands. Other characters (both human and bot) emerged, as well as other file formats. Every time I added a new element, it would offer its own unique opportunities for character and plot development.
For many months I toggled between writing and tinkering with a posterboard covered with Post-it notes, color-coded for different file types. The modularity of the format lent itself well to this process, which is a normal step for screenwriters and one that, as I learned, can provide much structural value to a novelist. It also helped keep me engaged on days that the blank page felt too daunting. I’d move a note from here to there, or add a new one and notice how it would affect the story. Even in revision, long after I’d dismantled the posterboard, I was still shuffling sections around to play with the chronology and build tension or sustain momentum.
It’s worth noting that while Your Behavior Will Be Monitored is my debut novel, I’ve written both another novel and a memoir, neither of which I was able to sell. For those books, I just started writing and kept going until they were done. So, both the process of writing this book and the format itself were foreign to me, and a big departure from how I’d worked in the past (and seemingly an improvement).
As a result of this newfound process, I became hyper aware of the order of information, its consequences for characters, and how it could guide the reader. For example, a mundane error message might not hold much weight early in a story, but the same error message in a later spot could bring significant narrative impact, due to the built-up context.
It was also fun to explore the tonal potential of these different formats. As anyone who has ever worked for a large corporation knows, company-wide emails are often saturated with an everything-is-fine and nothing-bad-is-happening perkiness that borders on the maniacal. Writing them made the company in my novel, UniView (“The most trusted name in AI”), feel like a character itself. Since the story is linear, I was able to use weekly all-company emails (aka, The Weekly View) as a summation of what was happening, or at least the way UniView wanted to “spin” it. This added a layer of depth to the narrative, since both company employees and readers of the book knew the reality behind the spin.
Once I had a draft that I felt good about, I shared it with my wife, Julia Fierro (founder and director of the Sackett Street Writers’ Workshop, and a damn good editor). I was hoping for some validation and slightly worried that I had created a Beautiful Mind-esque monster that only made sense to me. Fortunately, Julia was impressed and in awe that I had managed to write a book with no exposition or character interiority (i.e., thoughts) – a fact I was somehow only loosely aware of. It wasn’t that I had intentionally avoided it, just that it didn’t fit within the structure I had stumbled into.
That said, I did leverage little tricks to provide context where needed. If a character was entering a physical environment for the first time, they could comment on it or interact with it – like how the copywriter in my book, Noah, bumps his head when getting in a car and jokes about his lankiness, or how he later notes that the AI lab looks like a Swedish furniture showroom. He also has a call early on with his therapist, which is a helpful narrative vehicle for getting to the heart of a character’s fears and desires.
But Julia’s main note for me was that the video surveillance “scenes” in the book felt flat with only dialogue, which made them nearly identical to the MP3s/audio recordings. It was a great note, and one I sat with for a while. She was right, but breaking the structure and format of the book for only one file type (i.e., by adding descriptions of what was happening) just felt wrong.
Eventually I landed on not just a solution, but what would become a key component of the book. The head of HR at UniView is a bot, Lex, who handles nearly all aspects of the employees’ lives, well beyond their work. The company champions a symbiotic relationship in which its bots monitor all aspects of employee behavior (hence the book’s title) and tailor their AI offerings accordingly. So, I was able to pepper the video scenes with “behavioral notes” from Lex, which served the double duty of describing gesture and movement in scenes, while simultaneously characterizing her through reactions and commentary. And even though she “doesn’t make mistakes,” the few moments where she struggles to interpret sarcasm or nuanced behavior are some of my favorite in the book.
I don’t know that I’ll ever write a solely digital file-based book like Your Behavior Will Be Monitored again, although I’ll probably keep working with mixed media/epistolary formats. But I can say that playing with Post-it notes is officially part of my process now.
Your Behavior Will Be Monitored: Amazon|Barnes & Noble|Bookshop|Powell’s|Village Well
Yikes, Encryption’s Y2K Moment is Coming Years Early [Deeplinks]
Google moved up its estimated deadline for quantum preparedness in cryptography to 2029—only 33 months from now. That’s earlier than previous deadlines, and they proposed the new post-quantum migration deadline because of two new papers that represent a big jump in the state of the technology. It’s ahead of schedule, but not altogether unexpected. Cryptographers and engineers have been working on this for years, and as the deadline gets closer, it’s not surprising to see more precise timeline estimates come up.
The preparation for the Y2K bug is not a perfect analogy. Like Y2K, if systems are not updated in time, anyone with a powerful enough quantum computer will be able to more easily insert malware into the core systems of a computer and fake authentication to allow impersonation merely by observing network traffic. These are the threats whose mitigation timelines have been moved up.
But unlike Y2K, there’s a second sort of attack that we already need to be prepared for: quantum computers will be able to decrypt years of captured messages sent over encrypted messaging platforms shared any time before those platforms updated to quantum-proof encryption. That type of attack has been the main focus of engineering efforts so far and mitigation is well on its way, since anything before the upgrade might eventually be compromised.
Fortunately, not all cryptography is broken by quantum computers. Notably, symmetric encryption is quantum resistant. That means that if you have disk encryption turned on, you shouldn’t have to worry about quantum computers breaking into your phone, as long as your system’s keys are long enough. The problem is how you get the keys to do that encryption, and how you authenticate software on your device and in the cloud.
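As a rough back-of-the-envelope for what "long enough" means: Grover's algorithm, the best known generic quantum attack on symmetric ciphers, searches an n-bit keyspace in about 2^(n/2) steps, effectively halving a key's strength in bits. The sketch below (mine, not from the EFF article) just encodes that rule of thumb, which is why AES-256 rather than AES-128 is the usual post-quantum recommendation:

```cpp
#include <cassert>

// Grover's algorithm gives only a quadratic speedup against symmetric
// ciphers, so an n-bit key offers roughly n/2 bits of security against
// a quantum attacker.
constexpr int effective_bits_under_grover(int key_bits) {
    return key_bits / 2;
}
```

So AES-128 drops to roughly 64-bit effective strength (uncomfortably weak), while AES-256 retains roughly 128 bits, which is why doubling symmetric key lengths is considered a sufficient quantum mitigation.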
For those whose work touches on any sort of cryptographic deployment, you’re hopefully already working on the post-quantum transition. If not, you really should be; there are quite a few relevant posts and updates with more information about what this news means for you. Your key agreement systems should be upgraded soon if they’re not already because of store-now-decrypt-later attacks. Now it’s time to prepare for authentication attacks on forged signatures as well.
In some cases, you may need to wait on others to finish their work first. If you’re using NGINX to host websites on Ubuntu, for example, the security settings you need to upgrade key agreement were just released in version 26.04. Updates are rolling out, so keep checking in and upgrade your systems as soon as you’re able to.
But if you’re not in any position to be updating software or hardware, there may be some additional steps you can take to make sure you're as protected as possible. You’ll want to get the latest post-quantum protections as soon as they're available, so if you don't already have a habit of applying software updates in a timely manner, now’s a good time to start.
If you want to know if the website you’re using or the encrypted messaging app you’re chatting over will leak its data in a few years to anyone storing traffic now, you can search for its name with the word "quantum." The engineers are usually pretty proud of their work and have announced their post-quantum support (like what we’ve seen from Signal and iMessage). If you can’t find that information, you may want to have extra consideration for what you say over the internet, or switch the tools you're using. Those are the big areas to worry about now, before quantum computers are actually here, because they could result in the mass leakage of old messages.
The new deadline means that some technologies are simply not going to make it in time and will have to be left by the wayside, like trusted execution environments (TEEs), due to the slower speed of hardware deployments. TEEs are how companies do private processing on user data in the cloud, and they’re particularly relevant to AI offerings.
Even now, though they offer more protection than processing data in the clear, TEEs are not as secure as homomorphic encryption or doing the processing on device. Post-quantum, the security level gets much closer to computation on cleartext, and even with strong user controls, that makes it way too easy to accidentally backdoor your own encrypted chats. If you’re worried about the contents of messages in an encrypted chat being exposed, you’ll probably want to completely avoid using AI features that might leak that content, such as summarization of recent chat history and notifications, and reply composition assistance.
The work to update the world to post-quantum is well on its way. NIST finalized the standards for post-quantum cryptographic algorithms back in 2024. The larger platforms, websites, and hosting providers have already updated their algorithms, so even now, you’re probably already using post-quantum algorithms to access some of the internet. Measurements vary pretty widely, but up to about 4 in 10 websites currently support a post-quantum key exchange.
There’s still some work to be done in figuring out how to make the needed changes—for example, the way you find out a website’s private key to make HTTPS possible is being reworked to make room for larger signatures. Some technologies are just coming to market, like the post-quantum root of trust available now in some Chromebooks. In practice, this means that as you think about replacing your current devices in the next few years, you may want to check if you’re picking up hardware that has post-quantum support, if those specific protections are required for your threat model.
For the areas that still need updating, how much can we expect to actually get ready by the new deadline? It’s likely that not every cryptographically-capable device and deployment will be ready in time, and hardware with hard-coded certificates will probably be the last to update. We saw that happen when SHA-1 was deprecated; Point of Sale systems in particular were late adopters. While governments and large companies with quantum computers may not be interested in stealing money from cash registers, they will be interested in accessing secrets about people’s private lives. That’s why it’s so important that everyone does their part to upgrade, to protect the details of private communications and browsing.
And there’s a good chance that older devices that won’t receive quantum-resistant updates were probably vulnerable to some other attack already. Quantum computation is just one type of attack on cryptography that’s notable for the scale of migration required, and how every public-key cryptosystem and authentication scheme has to do the work to prepare. That’s not a difference in kind, it’s a difference in scale, and some systems will inevitably be left behind.
Quantum preparedness hits different industries and services in different ways, but services that handle communications and financial information are particularly susceptible to risk, and need to act quickly to protect the privacy and security of billions of people.
Comparison Shopping Is Not a (Computer) Crime [Deeplinks]
As long as people have had more than one purchasing option, they’ve been comparing those options and looking for bargains. Online shoppers are no exception; in fact, one of the potential benefits of the internet is that it expands our options for everything from car rentals to airline tickets to dish soap. New AI tools can make the process even easier. These tools could provide some welcome relief for consumers facing sky-high prices that many cannot afford.
Unfortunately, Amazon is trying to block these helpful new tools, which can steer shoppers towards competitors. Taking a page from Facebook and RyanAir, they are trying to use computer crime laws to do it.
Amazon’s target is Perplexity, which makes an AI-enabled web browser, called Comet, that allows users to browse the web as they normally would, but can also perform certain actions on the user’s behalf. For example, a user could ask Comet to find the best price on a 24-pack of toilet paper, and if satisfied with the results, have the browser order it. Amazon claims that Perplexity violated the Computer Fraud and Abuse Act (CFAA) by building a tool that helps users access information on Amazon and engage with the site.
Unfortunately, a federal district court agreed. The court’s fundamental mistake: relying on the Ninth Circuit’s misguided decision in Facebook v Power Ventures, rather than the court’s much better and more applicable reasoning in hiQ Labs.
Perplexity has appealed to the Ninth Circuit. As we explain in an amicus brief filed in support, the district court’s mistake, if affirmed, could lead to myriad unintended consequences. Overbroad readings of the CFAA have undermined research, security, competition, and innovation. For years, we’ve worked to limit its scope to Congress’s original intention: actual hacking that bypasses computer security. It should have nothing to do with Amazon’s claims here, not least because most of Amazon’s website is publicly available.
The court’s approach would be especially dangerous for journalists and academic researchers. Researchers often create a variety of testing accounts. For example, if they’re researching how a service displays housing offers, they may create separate accounts associated with different race, gender, or language settings. These sorts of techniques may be adversarial to the company, but they shouldn’t be illegal. But according to the court’s opinion, if a company disagrees with this sort of research, it can’t just ban the researchers from using the site; it can render that research criminal by just sending a letter notifying the researcher that they’re not authorized to use the service in this way.
A broad reading of CFAA in this case would also undermine competition by enabling companies to limit data scraping, effectively cutting off one of the ways websites offer tools to compare prices and features.
The Ninth Circuit should follow Van Buren’s lead and interpret the CFAA narrowly, as Congress intended. Website owners do not need new shields against independent accountability.
Let's make WordPress boom [Scripting News]
New ways to write with WordPress as the back-end.
Build an API that combines what wpcom does and storage.
Recruit devs to create products for this environment.
Here's the really cool thing -- we can all edit each others' docs.
Put another way, the docs belong to the users, and they let us access them.
The storage is so we can build smarter, better editors, more fun, color, interactions, cool toys for writers.
And yes, btw -- they are on the web, and there's an RSS 2.0 feed with rssCloud support.
And for that we can create all kinds of amazing readers, I want to do one with SVG now that I know how to do that thanks to ChatGPT of course. SVG is another breakthrough waiting to happen.
We're right there now. Ready to go. We start with text editors and build from there.
There's a new place to put WordPress, under our apps. It's remarkably good for that.
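For readers unfamiliar with the rssCloud support mentioned above: it is a <cloud> element in an RSS 2.0 channel that tells aggregators where to register for instant update pings, instead of polling. A minimal feed advertising a cloud endpoint looks something like this (all domains and procedure names here are hypothetical placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example blog</title>
    <link>http://example.com/</link>
    <description>A WordPress-backed feed.</description>
    <!-- Aggregators call this endpoint to subscribe to update notifications -->
    <cloud domain="rpc.example.com" port="80" path="/RPC2"
           registerProcedure="rssCloud.pleaseNotify" protocol="xml-rpc"/>
    <item>
      <title>Hello world</title>
      <link>http://example.com/2026/04/10/hello</link>
    </item>
  </channel>
</rss>
```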
How do you add or remove a handle from an active WaitForMultipleObjects? [The Old New Thing]
Last time, we looked at adding or removing a handle from an active MsgWaitForMultipleObjects, and observed that we could send a message to both break out of the wait and update the list of handles. But what if the other thread is waiting in a WaitForMultipleObjects? You can’t send a message, since WaitForMultipleObjects doesn’t wake for messages.
You can fake it by using an event which means “I want to change the list of handles.” The background thread can add that handle to its list, and if the “I want to change the list of handles” event is signaled, it updates its list.
One of the easier ways to represent the desired change is to maintain two lists, the “active” list (the one being waited on) and the “desired” list (the one you want to change it to). The background thread can make whatever changes to the “desired” list it wants, and then it signals the “changed” event. The waiting thread sees that the “changed” event is set and copies the “desired” list to the “active” list. This copying needs to be done with DuplicateHandle because the background thread might close a handle in the “desired” list, and we can’t close a handle while it is being waited on.
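Before the Win32 code, the hand-off can be sketched in portable standard C++ (my own simplified analogue, not Raymond's code: `int` stands in for `HANDLE`, and a bool guarded by a mutex plus a condition variable stands in for the auto-reset "changed" event):

```cpp
#include <condition_variable>
#include <mutex>
#include <utility>
#include <vector>

// Two-list hand-off: writers edit `desired` under the lock and raise
// `changed`; the waiting thread snapshots `desired` into its own private
// copy whenever `changed` is set, so writers never touch the list the
// waiter is actually using.
struct HandleList {
    std::mutex m;
    std::condition_variable cv;
    bool changed = false;            // plays the role of the auto-reset event
    std::vector<int> desired;        // `int` stands in for HANDLE

    void update(std::vector<int> v) {        // writer side
        std::lock_guard<std::mutex> g(m);
        desired = std::move(v);
        changed = true;
        cv.notify_one();                     // like SetEvent(changed)
    }

    std::vector<int> snapshot() {            // waiter side
        std::unique_lock<std::mutex> g(m);
        cv.wait(g, [&] { return changed; }); // like waiting on the event
        changed = false;                     // "auto-reset" behavior
        return desired;                      // copy taken under the lock
    }
};
```

The Win32 version does the same dance, except the snapshot also has to DuplicateHandle each entry, because a raw copy of a HANDLE would still be closed out from under the waiter.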
wil::unique_handle duplicate_handle(HANDLE other)
{
    HANDLE result;
    THROW_IF_WIN32_BOOL_FALSE(
        DuplicateHandle(GetCurrentProcess(), other,
                        GetCurrentProcess(), &result,
                        0, FALSE, DUPLICATE_SAME_ACCESS));
    return wil::unique_handle(result);
}
This helper function duplicates a raw HANDLE and returns it in a wil::unique_handle. The duplicate handle has its own lifetime separate from the original. The waiting thread operates on a copy of the handles, so that it is unaffected by changes to the original handles.
std::mutex desiredMutex;
_Guarded_by_(desiredMutex) std::vector<wil::unique_handle> desiredHandles;
_Guarded_by_(desiredMutex) std::vector<std::function<void()>> desiredActions;
The desiredHandles is a vector of handles we want to be waiting for, and the desiredActions is a parallel vector of things to do for each of those handles.
// auto-reset, initially unsignaled
wil::unique_handle changed(CreateEvent(nullptr, FALSE, FALSE, nullptr));
void waiting_thread()
{
    while (true)
    {
        std::vector<wil::unique_handle> handles;
        std::vector<std::function<void()>> actions;
        {
            std::lock_guard guard(desiredMutex);
            handles.reserve(desiredHandles.size() + 1);
            std::transform(desiredHandles.begin(), desiredHandles.end(),
                           std::back_inserter(handles),
                           [](auto&& h) { return duplicate_handle(h.get()); });
            // Add the bonus "changed" handle
            handles.emplace_back(duplicate_handle(changed.get()));
            actions = desiredActions;
        }
        auto count = static_cast<DWORD>(handles.size());
        auto result = WaitForMultipleObjects(count,
            handles.data()->addressof(), FALSE, INFINITE);
        auto index = result - WAIT_OBJECT_0;
        if (index == count - 1) {
            // the list changed. Loop back to update.
            continue;
        } else if (index < count - 1) {
            actions[index]();
        } else {
            // deal with unexpected result
            FAIL_FAST(); // (replace this with your favorite error recovery)
        }
    }
}
The waiting thread makes a copy of the desiredHandles and desiredActions, and adds the changed handle to the end so we will wake up if somebody changes the list. We operate on the copy so that any changes to desiredHandles and desiredActions that occur while we are waiting won’t affect us. Note that the copy in handles is done via DuplicateHandle so that it operates on a separate set of handles. That way, if another thread closes a handle in desiredHandles, it won’t affect us.
void change_handle_list()
{
    std::lock_guard guard(desiredMutex);
    ⟦ make changes to desiredHandles and desiredActions ⟧
    SetEvent(changed.get());
}
Any time somebody wants to change the list of handles, they take the desiredMutex lock and can proceed to make whatever changes they want. These changes won’t affect the waiting thread because it is operating on duplicate handles. When finished, we set the changed event to wake up the waiting thread so it can pick up the new set of handles.
Right now, the purpose of the changed event is to wake up the blocking call, but we could also use it as a way to know whether we should update our captured handles. This allows us to reuse the handle array if there were no changes.
void waiting_thread()
{
    bool update = true;
    std::vector<wil::unique_handle> handles;
    std::vector<std::function<void()>> actions;
    while (true)
    {
        if (std::exchange(update, false)) {
            std::lock_guard guard(desiredMutex);
            handles.clear();
            handles.reserve(desiredHandles.size() + 1);
            std::transform(desiredHandles.begin(), desiredHandles.end(),
                           std::back_inserter(handles),
                           [](auto&& h) { return duplicate_handle(h.get()); });
            // Add the bonus "changed" handle
            handles.emplace_back(duplicate_handle(changed.get()));
            actions = desiredActions;
        }
        auto count = static_cast<DWORD>(handles.size());
        auto result = WaitForMultipleObjects(count,
            handles.data()->addressof(), FALSE, INFINITE);
        auto index = result - WAIT_OBJECT_0;
        if (index == count - 1) {
            // the list changed. Loop back to update.
            update = true;
            continue;
        } else if (index < count - 1) {
            actions[index]();
        } else {
            // deal with unexpected result
            FAIL_FAST(); // (replace this with your favorite error recovery)
        }
    }
}
In this design, changes to the handle list are asynchronous. They don’t take effect immediately, because the waiting thread might be busy running an action. Instead, they take effect when the waiting thread gets around to making another copy of the desiredHandles vector and calling WaitForMultipleObjects again. This could be a problem: You ask to remove a handle, and then clean up the things that the handle depended on. But before the worker thread can process the removal, the handle is signaled. The result is that the worker thread calls your callback after you thought you had told it to stop!
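This race is easy to reproduce in miniature. Here is a hedged, platform-neutral sketch in Python rather than C++ (the names desired_actions, snapshot_taken, and removal_done are invented for illustration): because the worker runs from a snapshot of the action list, a removal requested after the snapshot is taken cannot stop the snapshotted callback.

```python
import threading

# Invented stand-ins for the article's desiredMutex / desiredActions.
desired_lock = threading.Lock()
desired_actions = []

calls = []
snapshot_taken = threading.Event()
removal_done = threading.Event()

def worker():
    # Take a snapshot of the action list, as the waiting thread does.
    with desired_lock:
        actions = list(desired_actions)
    snapshot_taken.set()
    removal_done.wait()      # the "handle" is signaled after the removal
    for action in actions:   # runs from the (now stale) snapshot
        action()

with desired_lock:
    desired_actions.append(lambda: calls.append("callback"))

t = threading.Thread(target=worker)
t.start()

snapshot_taken.wait()
with desired_lock:
    desired_actions.clear()  # ask to remove the handle...
removal_done.set()           # ...but the signal arrives before the worker re-snapshots
t.join()

print(calls)  # → ['callback'] — the callback ran after we "removed" it
```

The two events only force the interleaving the article describes; in real code the timing is nondeterministic, which is exactly why the bug is nasty.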
Next time, we’ll see what we can do to make the changes synchronous.
The post How do you add or remove a handle from an active <CODE>WaitForMultipleObjects</CODE>? appeared first on The Old New Thing.
After almost twenty years on the platform, EFF is logging off of X. This isn’t a decision we made lightly, but it might be overdue. The math hasn’t worked out for a while now.
We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago.
When Elon Musk acquired Twitter in October 2022, EFF was clear about what needed fixing.
We called for:
Twitter was never a utopia. We've criticized the platform for about as long as it’s been around. Still, Twitter did deserve recognition from time to time for vociferously fighting for its users’ rights. That changed. Musk fired the entire human rights team and laid off staffers in countries where the company previously fought off censorship demands from repressive regimes. Many users left. Today we're joining them.
Yes. And we understand why that looks contradictory. Let us explain.
EFF exists to protect people’s digital rights. Not just the people who already value our work, have opted out of surveillance, or have already migrated to the fediverse. The people who need us most are often the ones most embedded in the walled gardens of the mainstream platforms and subjected to their corporate surveillance.
Young people, people of color, queer folks, activists, and organizers use Instagram, TikTok, and Facebook every day. These platforms host mutual aid networks and serve as hubs for political organizing, cultural expression, and community care. Just deleting the apps isn't always a realistic or accessible option, and neither is pushing every user to the fediverse when there are circumstances like:
Our presence on Facebook, Instagram, YouTube, and TikTok is not an endorsement. We've spent years exposing how these platforms suppress marginalized voices, enable invasive behavioral advertising, and flag posts about abortion as dangerous. We’ve also taken action in court, in legislatures, and through direct engagement with their staff to push them to change poor policies and practices.
We stay because the people on those platforms deserve access to information, too. We stay because some of our most-read posts are the ones criticizing the very platform we're posting on. We stay because the fewer steps between you and the resources you need to protect yourself, the better.
When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.
EFF takes on big fights, and we win. We do that by putting our time, skills, and our members’ support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do.
Our work protecting digital rights is needed more than ever before, and we’re here to help you take back control.
Vegan Appreciation Day [Nina Paley]
I am an ex-vegan. A vegan apostate.
But:
Despite eating dairy products and fish, I still eat a lot of vegan meals. I don’t actually like meat, and while I consume yogurt and cheese, I suffer some lactose intolerance.
I recently chatted with a friend who has been vegetarian for 6 years and wants to stop. He wants to eat the same things as his wife, an omnivore; he wants more protein; he believes some meat in his diet would improve his health. “But I just can’t,” he said. 6 years off meat has made him squeamish.
I am squeamish too. That’s my main reason for refusing to eat birds and mammals: the thought is simply too icky for me. I eat fish occasionally, which is icky enough. I need to build up some real hunger for it to overcome my aesthetic aversion. When I really crave animal protein, it’s fine; anything less and it feels gross.
Highly processed. Doesn’t taste like chicken (thank god). Delicious.
How fortunate I am, then, that I don’t have to eat fish, let alone other meat, every day. I have all kinds of plant options readily available at my local mainstream grocery store. Despite my criticism of vegan “fake” foods (especially simulated dairy), I happen to love “fake meat,” especially “Chick’n”. Not because it in any way resembles chicken, but because it doesn’t. It’s just toothy concentrated plant protein wrapped in a salty oily coating, calorie-intensive and tasty the way only highly processed junk food can be. I keep this stuff in my freezer and enjoy it about once a week. It was almost impossible to get 25 years ago, when I was a practicing vegan. Now I can get it at my local Meijer.
I have vegans to thank for this. Vegans who worked very hard pushing fake meat into the American mainstream. I hear investments in these idealistic plant-based food companies are drying up; that would be a shame. Hardworking, annoying vegans made these options possible not just for other vegans, but for me and you and everyone. Hardworking, annoying vegans — vegans who work hard at being annoying — got a number of fast-food outlets to place vegan offerings on their menus. They are the reason I can get an edible “Impossible Burger” at most American restaurants, instead of being stuck with some grain-based “veggie burger” which is basically a bread sandwich.
My squeamish friend bemoaned his reliance on expensive protein shakes. “Oooh have you tried Soylent Creamy Chocolate?” I evangelized. “These save my ass on long bike rides!” He couldn’t believe a plant-based shake could taste good, so I broke open a bottle and we split it. Why did I so enthusiastically push this highly processed vegan beverage on my friend? Not because I want either of us to be vegan — we were both discussing how we want to move AWAY from veganism. No, I was pushing it because it’s an excellent product and I love it.
Not made of people. I take these on long (100km or more) bike rides: I can down 400 calories in less than a minute. Note that “Creamy Chocolate” is the only flavor of this product that tastes good. The rest are kind of awful.
Once again I have vegans to thank. Who else would painstakingly formulate this concoction, figure out how to make it tasty AND shelf-stable, and create a viable company to distribute it throughout the USA so I can get it easily? Thank you, vegans!
Many vegans are annoying. But the squeaky wheel gets the avocado oil, and by being squeaky for all these years, vegans have improved and expanded food offerings in America’s lavish markets. Thus, they have made our capitalist lives better. They may condemn our non-vegan impurity, and we may ridicule their idealism, but we all benefit from having more and better choices at the grocery store.
So thank you, vegans. You’ve improved the life of at least one animal: me.
The post Vegan Appreciation Day appeared first on Nina Paley.
Dilly-Dallying In Denver: Day 2 [Whatever]
I am someone who wakes up multiple times throughout the night. I always just flip over and go right back to sleep, but I definitely wake up fairly often. On my first morning of being in Denver, I was sleeping on my friend’s couch when I happened to wake up at seven on the dot. I was pretty comfortable, so I almost didn’t flip over at all, but at the last minute I decided I’d be slightly more comfortable if I flipped. So I did, and in doing so I faced the windows instead of facing the apartment. When I tell you I was beholding the single most beautiful sunrise I had ever seen in my life, trust that I mean it.
Radiant pink and bursting gold, the snowy mountains in the distance, and the sun steadily rising, casting light onto the city before me. It was truly a sight, and I stayed up for fifteen minutes to watch the sunrise unfold and transform, until it was finally over and the magnificent colors subsided. I thought about taking a photo, but I decided I just wanted to experience it in the moment and really soak it in just for myself.
After a glorious start to the morning (and going back to sleep for a while), Alex and I started our day off right with a quick stop at The Sen Tea House to pick up some matcha (we are matcha fiends if you couldn’t tell). The Sen Tea House had so many different options for their matchas in terms of sweetness, flavors, and milks, and they have non-matcha drinks, too, so there’s really a drink for every type of preference.
I almost didn’t even get a matcha because I was so enticed by the coconut Vietnamese coffee, but my friend highly recommended their matcha, so I ended up getting the ube matcha, which is listed on their menu as their most popular item. If you look at their online menu, Alex’s drink isn’t on there because it was like a weekly special or seasonal special, but they got the banana cream matcha. And here they are!

I was very pleased with the generous portion of cream on top, as these were $7.75 each. We obviously mixed these up a little bit more before drinking them, but I wanted to take a picture before mixing because I knew that mixing purple and green together would make a very unappetizing brown/grey color. And it did! But trust, it was delicious. It had tons of sweet ube flavor while still having some earthy matcha flavor, and was super creamy. Alex’s banana one tasted wildly fresh, like not artificial-y banana at all. It tasted so healthy like as if you made a fruit smoothie with a banana in it. It was definitely less sweet than mine, but Alex really enjoyed it. I am definitely glad I picked the ube, I can’t get enough ube in my life.
Later in the day, we were off to a highly anticipated spot called Mecha Noodle Bar.

This ramen restaurant is fun, fresh, and casual, but also nice enough that you can come in and sit at the bar with a date and have awesome cocktails. I didn’t know at the time, but Mecha actually has a few other locations, though all the other ones are in the Northeast, predominately Connecticut and Massachusetts. How they got all the way out to Denver, I’m not entirely sure. But I’m glad they did, because Alex and I absolutely loved Mecha.
We were originally here for their Restaurant Week offerings, but it turned out that we were there during their happy hour, as well. We decided to double down and get the Restaurant Week menu and order off the happy hour menu, just to keep things exciting.
But of course, I had to start out with a bev:

This is their mango sticky rice cocktail, with cachaça, pandan liqueur, coconut, mango, tea syrup, and lemon. Mango sticky rice is one of my favorite desserts in the world, so this cocktail sounded right up my alley. Whoever made it definitely made it kind of strong, but so much of the delicious tropical flavors really came through and I loved the level of sweetness in this drink. It wasn’t too heavy or too dessert-y. Much like the actual dessert it’s named for. Light and refreshing, with intense mango flavor. This drink was $15, but there was a lot of liquid to work through there, so can’t be too mad.
Here was the prix fixe menu for only $25:

Though I love some good edamame and those green beans sounded downright delish, I opted for the shiitake bao, and Alex got the chicken bao. Here’s mine:

If it looks like my bao is 200% cucumber, fear not, I got a better shot of the filling:

As you can see, there is actually mushroom, scallions, hoisin, and Kewpie mayo in there. I really enjoyed this bao. The bun was soft and pillowy, the cucumber was crisp and fresh, and the mushroom was a perfectly acceptable size. Alex really liked their chicken one, too.
Before we dove into our second course, we got our happy hour snacks. Alex got the firecracker wings:

These bad boys do not mess around, with their Sichuan peppercorn, Korean chili, tamarind, and togarashi seasoning alongside their lime leaf ranch. My friend offered a wing to me to try, but these suckers packed a kick. Even with the ranch, I couldn’t manage a second bite. These wings are an absolute powerhouse of flavor, and have definitely earned their name of “firecrackers.” While this platter is usually $16, the happy hour price was only $8.
I went for the spare ribs:

I don’t normally eat ribs in public, as they’re very messy and I dare not risk looking goofy, but when it came to these ribs, I no longer cared. They were so good. Too good. Quite possibly the best ribs I’ve ever had, even. Incredibly tender, luscious, fall-right-off-the-bone ribs with a bold, savory, but slightly sweet, sticky sauce that left me questioning why I haven’t had more ribs in my life. Though these were originally $18, the happy hour price was an unbeatable $9. Under ten dollars for these truly delectable ribs was wild, but I was totally here for it.
Finally, our main courses. With the price of the menu being only $25, I had assumed that the main courses would be mini versions of their actual entrees. Like a half portion of their ramen or something along those lines. However, I was pleasantly surprised to discover you get the full portion, which is absolutely wild because a bowl of their noodles costs almost as much as the prix fixe menu.
Alex got the mala stir-fry:

Wide, flat rice noodles, topped with a cumin-Sichuan-peanut sauce, actual peanuts, and cilantro, with spicy brisket lurking just beneath the surface. This dish was also way too spicy for me, but Alex absolutely loved it. I did think the rice noodles were interesting, at least, plus the fresh cilantro is always a plus.
I was a little basic and got the shoyu paitan:

I really love black garlic, especially in ramen, so that’s what led me to pick this chicken ramen. It came with half a soft-boiled egg, some nori, scallions, bamboo, and I added the corn. I am always in the mood for ramen, and this ramen definitely delivered on curbing my ramen craving. I wouldn’t say it was a life-changing bowl of noodles, but it was pretty good and I have no real complaints about it. I liked the egg.
After acquiring many boxes, it was time for dessert:

Oh my god, more ube! I was thrilled to see this beautiful purple pudding concoction. This was “Bonnie’s Banana Pudding,” with ube, vanilla pudding, bananas, and vanilla wafers. I know the mason jars don’t look like very big vessels, but this was absolutely a generous portion size. Like it took some serious work to get through these jars of pudding, but every bite was amazing. The ube flavor worked wonderfully with the vanilla, and the banana wasn’t artificial tasting at all. It was like we were drinking our matchas from that morning all over again!
The pudding was so creamy and had a great mouthfeel, and I almost felt sad when my spoon finally scraped the glass bottom of the jar. I could eat this dessert pretty much every single day.
For one cocktail, two restaurant week menus, a platter of wings and a platter of ribs, we were looking at a cool and breezy $82 before tip. What a steal. I was thoroughly impressed with their happy hour options, plus how good everything was (even if two of the dishes were too spicy for me). Not to mention our waitress was extremely friendly and attentive!
Mecha Noodle Bar really exceeded my expectations and was a great time, I highly recommend checking them out.
After heading back to Alex’s apartment and hanging with some of their apartment friends and checking out a little event happening in the lobby, we went back out to get some drinks to end the night. We walked down the street to Barcelona Wine Bar, an upscale tapas restaurant with tons of wines, beers, and some unique cocktails.
We sat at the bar, which was a beautiful marble with nice, dim lighting that made the place feel elevated yet somewhat cozy. The first drink I chose was actually one of their mocktails, but I asked for a spirit of the bartender’s choice in it. This is the “Tea Time”:

Earl grey tea, blueberry shrub, salted honey syrup, aquafaba, and mint. Plus gin! This drink is so pretty, I absolutely love the color and the stark contrast of the mint leaf on top. The aquafaba made for an excellent foam on top of the drink, as well. I adore earl grey as a flavor, as well as blueberry, and unsurprisingly this drink did not disappoint. I think gin was the perfect addition to this fruity yet sophisticated beverage. Specifically a more botanical gin versus a dry gin. I know what kind of gin I’m about and it sure isn’t Tanqueray.
For my second cocktail, I got yet another mocktail… with a spirit added! This is the “Bees & Bays”:

That lovely salted honey syrup makes its return alongside lime, cardamom bitters, sparkling water, and is topped with a torched bay leaf. Oh, and gin. This cocktail was so light and refreshing, with simple flavors of honey, citrus, and the lovely feeling of bubbles. I loved how cold it was from all the ice.
Though Alex and I were definitely full from our time at Mecha Noodle, we knew we had to at least try some charcuterie:

We both knew we wanted drunken goat on the board for sure, but our other picks came to mind much slower. We ended up getting tetilla, a semi-soft cow’s milk cheese, and a third cheese I don’t remember. I know, I know, I had one job! But at least I remembered that the meat is speck! Or… was it serrano? No, no, definitely speck. Probably. And don’t ask me about the jam.
For my final beverage of the evening before walking the couple blocks back to Alex’s apartment, we have the Gin & Jus:

Gin, lime, pink peppercorn, ginger, and green grape. I like all of those things! They were good together. I think I didn’t taste this one as much as I did the previous two. I did like it, though.
Alex had a glass of Moscato, so I didn’t bother taking a picture. I’m very sorry to anyone who wanted to see a glass of white wine.
When we got back, we called it an early night (not too early) so we would feel rested and ready to go for my third day. Stick around to see what whacky beverages I consume next!
Have you been to any of Mecha Noodle Bar’s locations before? Do you like ube? How do you feel about gin? Let me know in the comments, and have a great day!
-AMS
[$] A flood of useful security reports [LWN.net]
The idea of using large language models (LLMs) to discover security problems is not new. Google's Project Zero investigated the feasibility of using LLMs for security research in 2024. At the time, they found that models could identify real problems, but required a good deal of structure and hand-holding to do so on small benchmark problems. In February 2026, Anthropic published a report claiming that the company's most recent LLM at that point in time, Claude Opus 4.6, had discovered real-world vulnerabilities in critical open-source software, including the Linux kernel, with far less scaffolding. On April 7, Anthropic announced a new experimental model that is supposedly even better; the company has also partnered with the Linux Foundation to give some open-source developers access to the tool for security reviews. LLMs seem to have progressed significantly in the last few months, a change that is being noticed in the open-source community.
Relicensing versus license compatibility (FSF Blog) [LWN.net]
The Free Software Foundation has published a short article on relicensing versus license compatibility.
The FSF's Licensing and Compliance Lab receives many questions and license violation reports related to projects that had their license changed by a downstream distributor, or that are combined from two or more programs under different licenses. We collaborated with Yoni Rabkin, an experienced and long time FSF licensing volunteer, on an updated version of his article to provide the free software community with a general explanation on how the GNU General Public License (GNU GPL) is intended to work in such situations.
Security updates for Thursday [LWN.net]
Security updates have been issued by Debian (firefox-esr, postgresql-13, and tiff), Fedora (bind, bind-dyndb-ldap, cef, opensc, python-biopython, python-pydicom, and roundcubemail), Slackware (mozilla), SUSE (ckermit, cockpit-repos, dnsdist, expat, freerdp, git-cliff, gnutls, heroic-games-launcher, libeverest, openssl-1_1, openssl-3, polkit, python-poetry, python-requests, python311-social-auth-app-django, and SDL2_image-devel), and Ubuntu (dogtag-pki, gdk-pixbuf, linux, linux-aws, linux-aws-5.15, linux-gcp, linux-gcp-5.15, linux-gke, linux-gkeop, linux-ibm, linux-ibm-5.15, linux-intel-iotg, linux-intel-iotg-5.15, linux-kvm, linux-lowlatency, linux-lowlatency-hwe-5.15, linux-nvidia, linux-nvidia-tegra, linux-nvidia-tegra-igx, linux-oracle, linux-oracle-5.15, linux-raspi, linux-xilinx-zynqmp, linux-aws-6.8, linux-gcp-6.8, linux-hwe-6.8, linux-ibm-6.8, linux-lowlatency-hwe-6.8, linux-fips, linux-aws-fips, linux-gcp-fips, linux-oracle, linux-oracle-6.17, linux-raspi, linux-realtime, openssl, and squid).
Architecture as Code to Teach Humans and Agents About Architecture [Radar]
A funny thing happened on the way to writing our book Architecture as Code—the entire industry shifted. Generally, we write books iteratively—starting with a seed of an idea, then developing it through workshops, conference presentations, online classes, and so on. That’s exactly what we did about a year ago with our Architecture as Code book. We started with the concept of describing all the ways that software architecture intersects with other parts of the software development ecosystem: data, engineering practices, team topologies, and more—nine in total—in code, as a way of creating a fast feedback loop for architects to react to changes in architecture. In other words, we’re documenting the architecture through code, defining the structure and constraints we want to guide the implementation through.
For example, an architect can define a set of components via a diagram, along with their dependencies and relationships. That design reflects careful thought about coupling, cohesion, and a host of other structural concerns. However, when they turn that diagram over to a team to develop it, how can they be sure the team will implement it correctly? By defining the components in code (with verifications), the architect can both illustrate and get feedback on the design. However, we recognize that architects don’t have a crystal ball, and design should sometimes change to reflect implementation. When a developer adds a new component, it isn’t necessarily an error but rather feedback that an architect needs to know about. This isn’t a testing framework; it’s a feedback framework. When a new component appears, the architect should know so that they can assess: Should the component be there? Perhaps it was missed in the design. If so, how does that affect other components? Having the structure of your architecture defined as code allows deterministic feedback on structural integrity.
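As a rough illustration of the idea (this is not the book's actual framework; the component names and dependency table below are invented), such a structural check can be as simple as comparing observed dependencies against a declared model and surfacing anything unexpected as feedback rather than a hard failure:

```python
# Declared architecture: each component and the dependencies it may have.
declared = {
    "orders":   {"catalog", "payments"},
    "catalog":  set(),
    "payments": set(),
}

def review(observed):
    """Return feedback items instead of pass/fail: new components and
    undeclared dependencies are surfaced for the architect to assess."""
    feedback = []
    for component, deps in observed.items():
        if component not in declared:
            feedback.append(f"new component: {component}")
            continue
        for dep in deps - declared[component]:
            feedback.append(f"undeclared dependency: {component} -> {dep}")
    return feedback

# Suppose a scan of the codebase observed these dependencies:
observed = {
    "orders":   {"catalog", "payments"},
    "catalog":  {"payments"},   # coupling not in the declared design
    "shipping": {"orders"},     # component the architect never drew
}

for item in review(observed):
    print(item)
# undeclared dependency: catalog -> payments
# new component: shipping
```

Whether each item is a defect or a design update is exactly the judgment call the feedback loop is meant to route back to the architect.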
This capability is useful for developers. We defined these intersections as a way of describing all different aspects of architecture in a deterministic way. Then agents arrived.
Agentic AI shows new capabilities in software architecture, including the ability to work toward a solution as long as deterministic constraints exist. Suddenly, developers and architects are trying to build ways for agents to determine success, which requires a deterministic method of defining these important constraints: Architecture as Code.
An increasingly common practice in agentic AI is separating foundational constraints from desired behavior. For example, part of the context or guardrails developers use for code generation can include concrete architectural constraints around code structure, complexity, coupling, cohesion, and a host of other measurable things. As architects can objectively define what an acceptable architecture is, they can build inviolate rules for agents to adhere to. For example, a problem that is gradually improving is the tendency of LLMs to use brute force to solve problems. If you ask for an algorithm that touches all 50 US states, it might build a 50-stage switch statement. However, if one of the architect’s foundational rules for code generation puts a limit on cyclomatic complexity, then the agent will have to find a way to generate code within that constraint.
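As a sketch of what such a deterministic guardrail might look like (a crude branch-count proxy for cyclomatic complexity using Python's standard ast module, not a production metric or anything from the book):

```python
import ast

# Branch-introducing node types: a rough stand-in for cyclomatic complexity.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def complexity_violations(source, limit=10):
    """Return (function, score) pairs whose branch count exceeds the limit,
    giving an agent or a CI gate a deterministic pass/fail signal."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            score = 1 + sum(isinstance(n, BRANCH_NODES)
                            for n in ast.walk(node))
            if score > limit:
                violations.append((node.name, score))
    return violations

# A brute-force 50-way dispatch trips the limit; a table lookup does not.
brute_force = "def f(s):\n" + "".join(
    f"    if s == {i}: return {i}\n" for i in range(50))
lookup = "def f(table, s):\n    return table[s]\n"

print(complexity_violations(brute_force))  # [('f', 51)]
print(complexity_violations(lookup))       # []
```

An agent given "complexity must stay under 10" as an inviolate rule gets a failure it can iterate against, rather than a stylistic suggestion it can ignore.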
This capability exists for all nine of the intersections we cover in Architecture as Code: implementation, engineering practices, infrastructure, generative AI, team topologies, business concerns, enterprise architect, data, and integration architecture.
Increasingly, we see the job of developers, but especially architects, being able to precisely and objectively define architecture, and we built a framework for both how to do it and where to do it in Architecture as Code–coming soon!
CodeSOD: Take a Percentage [The Daily WTF]
When looking at the source of a major news site, today's anonymous submitter sends us this very, very mild, but also very funny WTF:
<div class="g-vhs g-videotape g-cinemagraph" id="g-video-178_article_slug-640w"
data-type="videotape" data-asset="https://somesite.com/videos/file.mp4" data-cinemagraph="true" data-allow-multiple-players="true"
data-vhs-options='{"ratio":"560:320"}'
style="padding-bottom: 57.14285714285714%">
Look, I know that percentage was calculated by JavaScript, or maybe the backend, or maybe calculated by a CSS pre-processor. No human typed that. There's nothing to gain by adding a rounding operation. There's nothing truly wrong with that line of code.
But I can't help but think about the comedic value in controlling your page layout down to sub-sub-sub-sub-sub-sub-sub-sub-pixel precision. This code will continue to have pixel accuracy out to screens with quadrillions of pixels, making it incredibly future proof.
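For the record, the value is just the video's aspect ratio expressed as a percentage of the width, and two decimal places would already be sub-pixel accurate. A quick sketch of the arithmetic:

```python
# The padding-bottom trick sizes an element's height as a percentage
# of its width: height / width * 100.
width, height = 560, 320
padding = height / width * 100

print(f"padding-bottom: {padding}%")            # full float precision, as shipped
print(f"rounded:        {round(padding, 2)}%")  # 57.14% is plenty for any screen
```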
It's made extra funny by calling the video player VHS and suggesting the appropriate ratio is 560 pixels by 320, which is not quite 16:9, but is a frequent letterbox ratio on DVD prints of movies.
In any case, I eagerly await a 20-zetta-pixel displays, so I can read the news in its intended glory.
Pluralistic: Cindy Cohn's "Privacy's Defender" (09 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]

I've known EFF executive director Cindy Cohn for 27 years. I met her when I needed cyberlaw advice for a startup I'd helped found. We got along so well that I ended up quitting the startup and going to work at EFF. Now, Cindy's memoir, Privacy's Defender, is on the shelves:
https://mitpress.mit.edu/9780262051248/privacys-defender/
I'm hardly a disinterested party here, obviously. I was at Cindy's wedding, I've danced with her at Burning Man, and I've worked with her for most of my adult life. What's more, I was present for many of the pivotal moments she recounts in this book. But still: this is a great book that I found utterly captivating.
Cohn's been with EFF since its earliest days, when she litigated one of the most important cases in computing history, the Bernstein case, which legalized civilian access to encryption technology and changed the world:
https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech
Cryptographers had been arguing with the US government over the ban on working encryption technology for years before Cohn joined the fight, and they'd tried all manner of arguments to overturn the ban: technical arguments, political arguments, financial arguments. All of these efforts failed – they didn't even make a dent.
Cohn's genius was the way she formulated a free speech argument about the ban on encryption: arguing that computer code was a form of expressive speech, entitled to protection under the First Amendment. While she didn't come up with this idea, it was her gift for assembling a narrative and a cadre of unimpeachable experts that carried the day.
In this age of bad faith right-wing trolling about "free speech" and "cancel culture," it's easy to forget how central free speech cases and causes have been for the advancement of human rights and human thriving. Free speech cases gave us the nation's first privacy protections, protection for unions, and protection for civil rights organizers.
Cohn never forgets this. Her decades with EFF are a history of the fight for speech rights (and thus privacy rights) on the internet. After the US government seized on the 9/11 attacks as a pretext to dismantle privacy and turn the internet into a system of ubiquitous surveillance, Cohn (along with EFF, of course!) was at the center of the fight for digital rights. The same prescience and strategic brilliance that led her to take up the Bernstein case and win it were with her through those millennial years, and her description of our cases, campaigns and fights in those years vividly foreshadows the moment we are in today.
The same goes for her "three letter agency" chapter, which takes up our fights against the NSA and other US agencies in the wake of whistleblower disclosures by Mark Klein and Edward Snowden. These accounts are one part master class in legal tactics, one part battle cry for a global pushback against the transformation of the internet into the perfect surveillance and control machine, and one part personal memoir of a tactician, finding ways to leverage a righteous cause to raise a guerrilla army of experts, co-counsel, amici, and champions who carried our message to the world.
All of this is connected back to her other legal career, as a human rights defender litigating on behalf of the survivors of a massacre perpetrated by a death squad working on behalf of Chevron in Nigeria. Cohn skilfully connects these very concrete, visible human rights struggles to the invisible – and no less important – human rights work she carried out for EFF.
I didn't just have a front-row seat for this stuff – I had backstage passes for a lot of it (though not the juiciest national security cases, which required EFF lawyers to maintain total secrecy from colleagues, spouses, even our board, on pain of a long prison sentence for disclosing classified information). Even so, Cohn's pacey, smart retelling of these events brought them to life for me, and of course, there's a coherence that you get after the fact that is missing when you're living through it in a moment.
But what really enlivened this delightful book were the personal details that Cohn weaves into the story. I've always known that she was an adoptee (and I even have a small, strange, coincidental connection to her birth family), but Cohn's frank, personal account of her early family life, and of her bittersweet connection to her birth family, was so intimate and well-told that I felt like I was getting to know my dear friend all over again.
Cindy is retiring from EFF (but not the law) in a couple of months. This book is a beautiful capstone to a brilliant career that defined the fight for cyber rights, and a deep, accessible dive into the defining tech and human rights battles of this century.

Data Center Tech Lobbyists Fearmonger in Attempt to Retroactively Roll Back Right to Repair Law https://www.404media.co/data-center-tech-lobbyists-fearmonger-in-attempt-to-retroactively-roll-back-right-to-repair-law/?ref=daily-stories-newsletter
Live Tax-Free and Die https://prospect.org/2026/04/08/apr-2026-magazine-live-tax-free-and-die/
Men Are Buying Hacking Tools to Use Against Their Wives and Friends https://www.wired.com/story/men-are-buying-hacking-tools-to-use-against-their-wives-and-friends/
How Trump Took the U.S. to War With Iran https://www.nytimes.com/2026/04/07/us/politics/trump-iran-war.html
#15yrsago Advanced office-supply sculpture: paperclip dodecahedron https://web.archive.org/web/20171122055732/https://makezine.com/2011/04/07/paperclip-snub-dodecahedron/
#15yrsago World Bank: gold farming (etc) paid poor countries $3B in 2009 https://web.archive.org/web/20110410134037/http://www.infodev.org/en/Publication.1056.html
#15yrsago Class war comics: Scrap Iron Man versus international capital https://web.archive.org/web/20110410215907/https://www.chinamieville.net/post/4406165249/rejected-pitch
#15yrsago Colombian Justice Minister ramming through extremist copyright legislation without public consultation https://web.archive.org/web/20110707053554/http://karisma.org.co/?p=667
#15yrsago Glenn Beck’s brain https://www.motherjones.com/politics/2011/03/glenn-beck-fox-news-brain-chart/
#10yrsago Why 40 years of official nutritional guidelines prescribed a low-fat diet that promoted heart disease https://www.theguardian.com/society/2016/apr/07/the-sugar-conspiracy-robert-lustig-john-yudkin
#10yrsago Fearing the Pirate Party, Iceland’s government scrambles to avoid elections https://web.archive.org/web/20160407183022/https://theintercept.com/2016/04/07/icelands-government-tries-cling-protesters-pirates-gates/
#10yrsago The price of stealing an identity is crashing, with no bottom in sight https://qz.com/656459/its-never-been-cheaper-to-steal-someones-digital-identity-on-the-internet
#10yrsago Bernie Sanders can only win if nonvoters turn out at the polls, and they almost never do https://web.archive.org/web/20160408145116/https://www.vox.com/2016/4/6/11373862/bernie-sanders-voter-lists
#10yrsago To understand the link between corporations and Hillary Clinton, look at philosophy, not history https://web.archive.org/web/20160406223353/https://www.thenation.com/article/the-problem-with-hillary-clinton-isnt-just-her-corporate-cash-its-her-corporate-worldview/
#10yrsago The US Government’s domestic spy-planes take weekends and holidays off https://www.buzzfeednews.com/article/peteraldhous/spies-in-the-skies
#10yrsago A perfect storm of broken business and busted FLOSS backdoors everything, so who needs the NSA? https://www.youtube.com/watch?v=fwcl17Q0bpk
#5yrsago Door Dashers organize app-defeating solidarity https://pluralistic.net/2021/04/07/cruelty-by-design/#declinenow
#5yrsago Leaked NYPD "goon squad" manual https://pluralistic.net/2021/04/07/cruelty-by-design/#blam-blam-blam
#1yrago Tariffs and monopolies https://pluralistic.net/2025/04/07/it-matters-how-you-slice-it/#too-big-to-care

Montreal: Drawn and Quarterly, Apr 10
https://mtl.drawnandquarterly.com/events/4863920260410
Toronto: DemocracyXchange, Apr 16
https://www.democracyxchange.org/news/cory-doctorow-to-open-dxc26-on-april-16
San Francisco: 2026 Berkeley Spring Forum on M&A and the Boardroom, Apr 23
https://www.theberkeleyforum.com/#agenda
London: Resisting Big Tech Empires (LSBU), Apr 25
https://www.tickettailor.com/events/globaljusticenow/2042691
NYC: Enshittification at Commonweal Ventures, Apr 29
https://luma.com/ssgfvqz8
NYC: Techidemic with Sarah Jeong, Tochi Onyibuchi and Alia Dastagir (PEN World Voices), Apr 30
https://worldvoices.pen.org/event/techidemic/
Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow
Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html
Hay-on-Wye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2
SXSW London, Jun 2
https://www.sxswlondon.com/session/how-big-tech-broke-the-internet-b3c4a901
Do you feel screwed over by big tech? (Ontario Today)
https://www.cbc.ca/listen/live-radio/1-45-ontario-today/clip/16203024-do-feel-screwed-big-tech
Launch for Cindy Cohn's "Privacy's Defender" (City Lights)
https://www.youtube.com/watch?v=WuVCm2PUalU
Chicken Mating Harnesses (This Week in Tech)
https://twit.tv/shows/this-week-in-tech/episodes/1074
The Virtual Jewel Box (U Utah)
https://tanner.utah.edu/podcast/enshittification-cory-doctorow-matthew-potolsky/
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It," Farrar, Straus and Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
"The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid," with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026
"The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, 2027
"The Memex Method," Farrar, Straus, Giroux, 2027
Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. First draft complete. Second draft underway.
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Bluesky (no ads, possible tracking and data-collection):
https://bsky.app/profile/doctorow.pluralistic.net
Medium (no ads, paywalled):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
On Microsoft’s Lousy Cloud Security [Schneier on Security]
ProPublica has a scoop:
In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.
The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.
Or, as one member of the team put it: “The package is a pile of shit.”
For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.
[…]
The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.
Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling—which included a kind of “buyer beware” notice to any federal agency considering GCC High—helped Microsoft expand a government business empire worth billions of dollars.
Grrl Power #1450 – Hospital food [Grrl Power]
We’ve only seen “transition Maximillia” once before, and at the time, I managed to not turn that flashback into a whole long side-story. I’m sure I could do 40 pages on that period of her life, no problem. But don’t worry, this is just another quick flashback.
This page takes place a few weeks after the page 415 flashback. At this point, Max had spent some time staring into a mirror, and decided that the gold stuff under her skin was kind of cool looking, and while all the doctors still didn't have any evidence-based theories about what was going on, they had mostly agreed that she wasn't suffering from some sort of fatal heavy metal poisoning, nor were her skin or organs calcifying. On top of all that, after a brief initial fever, Max didn't feel sick, and in fact was starting to feel really healthy. Like, accidentally ripped a door clear off its hinges healthy.
Man, I really need to find some time to go back and post slightly larger versions of the old pages. They’re 643px wide, and at some point I did a minor site update that let me post the comics at 750px. It helps with the text if nothing else. Of course if I do that, I’ll be tempted to go in and fix little bits of art, like Maxima’s oddly manly golem face. Also teen Max’s shoulder in that first panel is not the shoulder of a beanpole cross country runner. It would just take a little nudge with the liquefy tool to fix it, but that’s a trap. Her tank top in the reflection is all part of the same layer, and I’d have to separate them and fill in the bits left blank from the smaller shoulder, and now instead of a 2 minute fix, it’s 10 minutes, and while I’m at it, I think Deus’s head is too big and now I’m spending an hour on each page and there’s 1400 of them. I could probably resist the urge to fix most stuff due to time constraints, but there are definitely some panels that are just bad. So we’ll see about posting enlarged versions of the pages.
Finally, here we go! I took the suggestion that I just use an existing panel for a starting point, thinking it would save time… I guess it technically did, but a 5-character vote incentive just isn't the way to go.
Patreon, of course, has the actual topless version.
Double res version will be posted over at Patreon. Feel free to contribute as much as you like.
Attention and effort [Seth's Blog]
The door-to-door salesperson had no leverage. If he was at your door, he wasn’t at anyone else’s door. Every minute you spent with him was a minute he had to spend with you. While it was a tough gig, no one doubted that something was motivating this person enough to put at least as much into the interaction as you were. You might close the door in the face of the person who rang your bell, but at some level, you knew that another human was involved.
Spammers play a different scheme. One person can steal the time and attention of a million. It costs them nothing (actually, truly, nothing) to add one more name to the list. The lack of care and discernment comes through in their interactions. They steal attention in bulk and treat it casually. No one feels bad when they delete or filter spam.
In B2B selling and other high-value sales calls, the seller puts in a lot of effort. A custom presentation deck, useful spreadsheets, even a flight across the country to meet in person. That effort is expected, because the buyer sees their attention as valuable.
And now, here come AI agents. These are spammers disguised as door-to-door salespeople. They know your name, your history, your details–and they present a pitch that looks and feels as though a human spent a lot of time thinking about it and focusing on the buyer’s needs and desires.
But it’s done on a huge scale. It’s like seine fishing. A huge net is set to catch as many fish as possible, with no regard for the mass destruction it causes as a result.
Our instinct is to respect the work of a pitch that took more effort to create than it will cost us to consume (that’s why books are more respected than blog posts!). But AI agents, working at high speed to churn through the small amount of trust and attention we have left, upend that expectation.
Attention and trust continue their dance, and our choices determine how we’ll show up in the marketplace. Burning trust to get attention rarely pays off.
Urgent: Iran-US war troops [Richard Stallman's Political Notes]
US citizens: Ask your congresscritter and senators to block the war-lover from sending over 20,000 bombs to Israel.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
[$] LWN.net Weekly Edition for April 9, 2026 [LWN.net]
Inside this week's LWN.net Weekly Edition:
parted-3.7 released [stable] [Planet GNU]
I have released parted 3.7

Here are the compressed sources and a GPG detached signature[*]:
  https://ftp.gnu.o ... parted-3.7.tar.xz
  https://ftp.gnu.o ... ed-3.7.tar.xz.sig

Use a mirror for higher download bandwidth:
  https://www.gnu ... g/prep/ftp.html

Here are the SHA256 checksums:

  008de57561a4f3c25a0648e66ed11e7b30be493889b64334a6d70f2c1951ef7b  parted-3.7.tar.xz
  de51773eef47a10db34ff2462f3b3c9d987d4bdb49420f0a22e1dda1ff897a5c  parted-3.7.tar.xz.sig
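The published checksums can be checked with `sha256sum -c`. A minimal sketch of the mechanics follows; since the tarball itself may not be on hand, it uses a stand-in file (`demo.txt` and `demo.sha256` are hypothetical names for illustration only):

```shell
# With the real tarball downloaded, you would save the two checksum lines
# from the announcement into a file and run:  sha256sum -c parted-3.7.sums
# The same mechanics, demonstrated with a stand-in file:
printf 'hello\n' > demo.txt          # stand-in for parted-3.7.tar.xz
sha256sum demo.txt > demo.sha256     # stand-in for the published checksum list
sha256sum -c demo.sha256             # prints "demo.txt: OK" if the file is intact
```

Note that a matching checksum only confirms the download is intact; the GPG signature check described below is what ties the tarball to the release manager's key.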
[*] Use a .sig file to verify that the corresponding file (without the .sig suffix) is intact. First, be sure to download both the .sig file and the corresponding tarball. Then, run a command like this:

  gpg --verify parted-3.7.tar.xz.sig

If that command fails because you don't have the required public key, or that public key has expired, try the following commands to update or refresh it, and then rerun the 'gpg --verify' command.

  gpg --locate-external-key bcl@redhat.com
  gpg --recv-keys 117E8C168EFE3A7F
  wget -q -O- 'https://savannah. ... ed&download=1' | gpg --import -
This release was bootstrapped with the following tools:
Autoconf 2.72
Automake 1.17
Gettext 0.23.1
Gnulib commit 4e11e3d07a79a49eaa9b155c43801bbc1e5bd86e
Gperf 3.1
NEWS

Promoting alpha release to stable release 3.7

** New Features

  hurd: Support USB device names

** Bug Fixes

  Stop adding boot code into the MBR if it's zero when updating an existing msdos partition table.
  disk.c: Update metadata after reading partition table
  Fix initialization of atr_c_locale inside PED_ASSERT
  nilfs2: Fixed possible sigsegv in case of corrupted superblock
  libparted: Do not detect ext4 without journal as ext2
  libparted: Fix dvh disklabel unhandled exception
  libparted: Fix sun disklabel unhandled exception
  parted: fix do_version declaration to work with gcc 15
  libparted: Fail early when detecting nilfs2
  doc: Document IEC unit behavior in the manpage
  parted: Print the Fixing... message to stderr
  docs: Finish setup of libparted API docs
  libparted: link libparted-fs-resize.so to libuuid
Urgent: The Home Team Act [Richard Stallman's Political Notes]
US citizens: call on Congress to Support and Pass the Home Team Act, to hamper sports team owners from squeezing money out of cities.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Impeach Russell Vought [Richard Stallman's Political Notes]
US citizens: call on your officials in Congress to impeach Russell Vought, head of Project 2025 and White House Budget Director.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Planned strict safety limits on exposure to formaldehyde [Richard Stallman's Political Notes]
Chemical companies worked with trumpets to kill planned strict safety limits on exposure to formaldehyde. It involved disregarding the data from experiments that showed the danger, and choosing instead data from experiments chemical companies had sponsored.
Arrangement of assassinations of Iranian exiles [Richard Stallman's Political Notes]
A former Iranian minister boasted in an interview about arranging several assassinations of Iranian exiles.
Wrecker cries "emergency" based on no real grounds [Richard Stallman's Political Notes]
The wrecker has a pattern of crying "emergency" based on no real grounds. Each time, it is based on a lie.
Right-wing members of the Supreme Court are unwilling to call a lie a lie.
Inquiry of 1984 British police attack on striking miners [Richard Stallman's Political Notes]
In 1984 British police rioted and attacked a crowd of striking miners, then charged many of them falsely with crimes. Now, belatedly, the government will hold a proper inquiry.
| Feed | RSS | Last fetched | Next fetched after |
|---|---|---|---|
| @ASmartBear | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| a bag of four grapes | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Ansible | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| Bad Science | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Black Doggerel | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| Blog - Official site of Stephen Fry | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Charlie Brooker | The Guardian | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Charlie's Diary | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| Chasing the Sunset - Comics Only | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Coding Horror | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| Comics Archive - Spinnyverse | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| Cory Doctorow's craphound.com | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Cory Doctorow, Author at Boing Boing | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| Ctrl+Alt+Del Comic | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| Cyberunions | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| David Mitchell | The Guardian | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| Deeplinks | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| Diesel Sweeties webcomic by rstevens | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| Dilbert | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Dork Tower | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Economics from the Top Down | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| Edmund Finney's Quest to Find the Meaning of Life | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| EFF Action Center | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| Enspiral Tales - Medium | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| Events | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| Falkvinge on Liberty | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| Flipside | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Flipside | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| Free software jobs | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| Full Frontal Nerdity by Aaron Williams | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| General Protection Fault: Comic Updates | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| George Monbiot | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| Girl Genius | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| Groklaw | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| Grrl Power | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Hackney Anarchist Group | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Hackney Solidarity Network | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| http://blog.llvm.org/feeds/posts/default | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| http://calendar.google.com/calendar/feeds/q7s5o02sj8hcam52hutbcofoo4%40group.calendar.google.com/public/basic | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| http://dynamic.boingboing.net/cgi-bin/mt/mt-cp.cgi?__mode=feed&_type=posts&blog_id=1&id=1 | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| http://eng.anarchoblogs.org/feed/atom/ | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| http://feed43.com/3874015735218037.xml | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| http://flatearthnews.net/flatearthnews.net/blogfeed | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| http://fulltextrssfeed.com/ | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| http://london.indymedia.org/articles.rss | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| http://pipes.yahoo.com/pipes/pipe.run?_id=ad0530218c055aa302f7e0e84d5d6515&_render=rss | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| http://planet.gridpp.ac.uk/atom.xml | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| http://shirky.com/weblog/feed/atom/ | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| http://thecommune.co.uk/feed/ | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| http://theness.com/roguesgallery/feed/ | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| http://www.airshipentertainment.com/buck/buckcomic/buck.rss | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| http://www.airshipentertainment.com/growf/growfcomic/growf.rss | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| http://www.airshipentertainment.com/myth/mythcomic/myth.rss | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| http://www.baen.com/baenebooks | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| http://www.feedsapi.com/makefulltextfeed.php?url=http%3A%2F%2Fwww.somethingpositive.net%2Fsp.xml&what=auto&key=&max=7&links=preserve&exc=&privacy=I+accept | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| http://www.godhatesastronauts.com/feed/ | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| http://www.tinycat.co.uk/feed/ | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| https://anarchism.pageabode.com/blogs/anarcho/feed/ | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| https://broodhollow.krisstraub.comfeed/ | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| https://debian-administration.org/atom.xml | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| https://elitetheatre.org/ | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| https://feeds.feedburner.com/Starslip | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| https://feeds2.feedburner.com/GeekEtiquette?format=xml | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| https://hackbloc.org/rss.xml | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| https://kajafoglio.livejournal.com/data/atom/ | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| https://philfoglio.livejournal.com/data/atom/ | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| https://pixietrixcomix.com/eerie-cutiescomic.rss | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| https://pixietrixcomix.com/menage-a-3/comic.rss | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| https://propertyistheft.wordpress.com/feed/ | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| https://requiem.seraph-inn.com/updates.rss | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| https://studiofoglio.livejournal.com/data/atom/ | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| https://thecommandline.net/feed/ | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| https://torrentfreak.com/subscriptions/ | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| https://web.randi.org/?format=feed&type=rss | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| https://www.dcscience.net/feed/medium.co | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| https://www.DropCatch.com/domain/steampunkmagazine.com | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| https://www.DropCatch.com/domain/ubuntuweblogs.org | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| https://www.DropCatch.com/redirect/?domain=DyingAlone.net | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| https://www.freedompress.org.uk:443/news/feed/ | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| https://www.goblinscomic.com/category/comics/feed/ | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| https://www.loomio.com/blog/feed/ | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| https://www.newstatesman.com/feeds/blogs/laurie-penny.rss | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| https://www.patreon.com/graveyardgreg/posts/comic.rss | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| https://www.rightmove.co.uk/rss/property-for-sale/find.html?locationIdentifier=REGION^876&maxPrice=240000&minBedrooms=2&displayPropertyType=houses&oldDisplayPropertyType=houses&primaryDisplayPropertyType=houses&oldPrimaryDisplayPropertyType=houses&numberOfPropertiesPerPage=24 | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| https://x.com/statuses/user_timeline/22724360.rss | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| Humble Bundle Blog | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| I, Cringely | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| Irregular Webcomic! | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| Joel on Software | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| Judith Proctor's Journal | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| Krebs on Security | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| Lambda the Ultimate - Programming Languages Weblog | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| Looking For Group | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| LWN.net | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| Mimi and Eunice | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| Neil Gaiman's Journal | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| Nina Paley | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| O Abnormal – Scifi/Fantasy Artist | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| Oglaf! -- Comics. Often dirty. | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| Oh Joy Sex Toy | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| Order of the Stick | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| Original Fiction Archives - Reactor | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| OSnews | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| Paul Graham: Unofficial RSS Feed | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| Penny Arcade | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Penny Red | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| PHD Comics | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Phil's blog | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| Planet Debian | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| Planet GNU | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| Planet Lisp | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Pluralistic: Daily links from Cory Doctorow | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| PS238 by Aaron Williams | XML | 19:39, Wednesday, 15 April | 20:27, Wednesday, 15 April |
| QC RSS | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| Radar | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| RevK®'s ramblings | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| Richard Stallman's Political Notes | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Scenes From A Multiverse | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| Schneier on Security | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| SCHNEWS.ORG.UK | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| Scripting News | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Seth's Blog | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| Skin Horse | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Tales From the Riverbank | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| The Adventures of Dr. McNinja | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| The Bumpycat sat on the mat | XML | 19:39, Wednesday, 15 April | 20:19, Wednesday, 15 April |
| The Daily WTF | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| The Monochrome Mob | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| The Non-Adventures of Wonderella | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| The Old New Thing | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| The Open Source Grid Engine Blog | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| The Stranger | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| towerhamletsalarm | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| Twokinds | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| UK Indymedia Features | XML | 19:56, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Uploads from ne11y | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| Uploads from piasladic | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |
| Use Sword on Monster | XML | 19:27, Wednesday, 15 April | 20:14, Wednesday, 15 April |
| Wayward Sons: Legends - Sci-Fi Full Page Webcomic - Updates Daily | XML | 19:39, Wednesday, 15 April | 20:25, Wednesday, 15 April |
| what if? | XML | 19:39, Wednesday, 15 April | 20:20, Wednesday, 15 April |
| Whatever | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| Whitechapel Anarchist Group | XML | 19:49, Wednesday, 15 April | 20:38, Wednesday, 15 April |
| WIL WHEATON dot NET | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| wish | XML | 19:49, Wednesday, 15 April | 20:34, Wednesday, 15 April |
| Writing the Bright Fantastic | XML | 19:49, Wednesday, 15 April | 20:33, Wednesday, 15 April |
| xkcd.com | XML | 19:43, Wednesday, 15 April | 20:26, Wednesday, 15 April |