Tim Bradshaw: Making CLOS slot access less slow [Planet Lisp]
Access to slots in CLOS instances is often very slow. It’s probably not possible for it ever to be really fast, but the MOP described in AMOP does provide a way of making it, at least, less slow.
Here are some benchmarks for accessing fields in objects of various kinds, using SBCL. All of these tests do something equivalent to
(defclass a ()
  ((i :initform 0 :type fixnum)))

(defclass a/no-fixnum ()
  ((i :initform 0)))

(defmethod svn ((a a) n)
  (declare (type fixnum n)
           (optimize speed (safety 0)))
  (dotimes (i n)
    (incf (the fixnum (slot-value a 'i)))))

(defmethod svn ((a a/no-fixnum) n)
  (declare (type fixnum n)
           (optimize speed (safety 0)))
  (dotimes (i n)
    (incf (the fixnum (slot-value a 'i)))))
They then call svn (or equivalent) with a large value of \(n\), do that a number of times \(m\), and then divide by \(2 \times n \times m\) to get an average time per access (incf accesses the slot twice).
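A minimal driver along those lines (a sketch, not the author’s actual harness; the function name and use of internal real time are my own) might be:

(defun time-per-access (instance n m)
  ;; Call SVN m times with a large n, then divide the elapsed time by
  ;; 2nm: incf both reads and writes the slot, hence the factor of 2.
  (let ((start (get-internal-real-time)))
    (dotimes (j m)
      (svn instance n))
    (/ (- (get-internal-real-time) start)
       internal-time-units-per-second
       (* 2 n m))))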
For SBCL 2.6.3.178-a190d9710 on ARM64 Apple M1, seconds per access:
slot-value (slot type fixnum): \(1.20\times 10^{-8}\), ratio \(76\)
slot-value (no slot type): \(1.22\times 10^{-8}\), ratio \(77\)
slot-value (single slot-value-using-class method): \(1.69\times 10^{-8}\), ratio \(107\)
standard-instance-access: \(1.00\times 10^{-9}\), ratio \(6.4\)
structure slot access (slot type fixnum): \(1.57\times 10^{-10}\), ratio \(1.0\)
cons access (car): \(1.59\times 10^{-10}\), ratio \(1.0\)

These numbers vary slightly, but this gives a good picture of what is going on. In particular you can see that slot-value within a method specialised on the class is more than 70 times slower than access for a structure slot, but if you can use standard-instance-access it is only about 6 times slower: standard-instance-access speeds things up by a factor of about 10, which changes CLOS slot access performance from laughably slow to merely pretty slow.
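For concreteness, here is roughly what the standard-instance-access variant of the test might look like (my reconstruction, not the author’s exact code: sb-mop is SBCL’s MOP package, the class must be finalized, slot-definition-location must return an instance-slot index, and SBCL’s (setf standard-instance-access) is assumed):

(defmethod svn/sia ((a a) n)
  (declare (type fixnum n)
           (optimize speed (safety 0)))
  ;; Look up the slot's location once, outside the loop.
  (let ((loc (sb-mop:slot-definition-location
              (find 'i (sb-mop:class-slots (class-of a))
                    :key #'sb-mop:slot-definition-name))))
    (dotimes (j n)
      (incf (the fixnum (sb-mop:standard-instance-access a loc))))))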
I’ve written a macro, called with-sia-slots, which is like with-slots but uses standard-instance-access. It therefore has all the constraints imposed by that, but it is significantly faster than with-slots or slot-value. It has some overhead, as it has to dynamically compute the slot locations: this is better done outside any inner loop. This means that, for instance, you probably want to write code that looks like
(with-sia-slots (x) o
  (dotimes (i many)
    (setf x (... x ...))))
which will mean you only pay the overhead once.
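For flavour, a minimal single-slot version of the idea might look like this (a sketch under the same MOP assumptions as above; it is not Tim’s actual macro, which handles multiple slots and more):

(defmacro with-sia-slot ((var slot-name) instance &body body)
  ;; Compute the slot's location once, then make VAR a symbol macro
  ;; expanding into STANDARD-INSTANCE-ACCESS at that location.
  (let ((o (gensym)) (loc (gensym)))
    `(let* ((,o ,instance)
            (,loc (sb-mop:slot-definition-location
                   (find ',slot-name (sb-mop:class-slots (class-of ,o))
                         :key #'sb-mop:slot-definition-name))))
       (symbol-macrolet ((,var (sb-mop:standard-instance-access ,o ,loc)))
         ,@body))))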
The above tests don’t use with-sia-slots, as I wrote them partly to see if something like this was worth writing. However, on a current (at the time of writing) SBCL, with-sia-slots is asymptotically about 10 times faster than with-slots, as demonstrated by these tests.
Up to package names it should be portable to any CL with an AMOP-compatible MOP. It can be found in my implementation-specific hacks, linked from here.
Ben Hutchings: FOSS activity in April 2026 [Planet Debian]

Nostalgia can be fatal [Seth's Blog]
For hundreds of years, nostalgia was seen as a serious disease, with doctors across Europe scrambling for a cure. Hundreds of thousands of people died from it.
In the original understanding of the term, it was a sort of homesickness. Soldiers from Switzerland were the first to get the official diagnosis–separated from their friends, family and homes, these young men would suffer from melancholy and would waste away, sometimes fatally.
As it spread, one theory was that it afflicted people from places that were at high altitude. As more humans traveled, often under duress (for example, enslaved people kidnapped from their homes and brought by ship to the new world), the suffering increased.
It’s not hard to see how a sudden, involuntary dislocation could be debilitating. Particularly if home was a place that was insulated from sudden change and fast-moving culture.
Today, future shock is bringing a new, if milder form of the affliction. As technology, jobs and culture shift faster than ever before, it’s understandable that many are yearning for a return to an imagined past. When the future arrives uninvited, it can feel like being pulled from a comfortable village in the middle of the night.
Knowing our peers are encountering challenges with the transitions at work or at home can give us the insight to build the scaffolding they need to find their footing. And perhaps we can offer ourselves a bit of grace as well.
Urgent: Block privatization of US Postal Service [Richard Stallman's Political Notes]
US citizens: call on Congress to block privatization of the US Postal Service.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Stop weakening of coal ash protections [Richard Stallman's Political Notes]
US citizens: call on the EPA to stop trying to weaken coal ash protections.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
Urgent: Block war with Cuba [Richard Stallman's Political Notes]
US citizens: call on your congresscritter and senators to block war with Cuba, and end the humanitarian crisis there.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Acetaminophen during pregnancy [Richard Stallman's Political Notes]
Magats claim that taking acetaminophen during pregnancy can cause autism. A large empirical study has found that it doesn't.
Never trust what magats say about medicine. They don't have a commitment to making sure it is true.
Violent abuse that drives women to kill their husbands [Richard Stallman's Political Notes]
Many countries' legal systems take no cognizance of the violent abuse that drives women to kill their husbands as an act of self defense.
Skepticism toward claims about microbiome products [Richard Stallman's Political Notes]
Be skeptical about claims that specific products will improve your microbiome. Science doesn't yet know enough to predict what interventions might be helpful, so those who try to sell you products to intervene in your microbiome are likely to be exceeding actual knowledge.
Indigenous Americans face racial profiling [Richard Stallman's Political Notes]
Indigenous Americans face racial profiling by deportation thugs, who demand proof of citizenship because of their appearance.
Forests transformed from carbon sink to carbon source [Richard Stallman's Political Notes]
*Africa's forests transformed from carbon sink to carbon source, study finds.*
The same has happened to the Amazon forest and in South-East Asia. We have gone over a tipping point and are sliding down to global disaster.
(satire) Books to motivate and inspire [Richard Stallman's Political Notes]
(satire) *The Best Books To Motivate And Inspire You.*
Laws against new data centers [Richard Stallman's Political Notes]
Various US states are working on laws to regulate or prohibit the construction of new data centers. The constructors try to evade public regulation and even public awareness until it is too late. At that time, the data center typically has the right to demand all the electricity and water it needs, which can far exceed what is actually available in the place. That can be catastrophic.
Note how plutocratists threaten states with being labeled as "closed to new business" if they take any steps to stop businesses from wiping the floor with the public there. The public needs to learn to laugh in the face of anyone who advocates that plutocratist view.
Length of summer growing [Richard Stallman's Political Notes]
Around the world, the length of summer is growing by 6 days per decade.
But this may speed up in the future, given that global heating is accelerating.
Pressure on Republican senators by general public hatred of bully [Richard Stallman's Political Notes]
Robert Reich argues that general public hatred of the bully might pressure some Republican senators to vote to remove the bully from office.
Maybe so, but if that makes Vance president, will it really be better?
Jobs for US college graduates scarce in some fields [Richard Stallman's Political Notes]
Jobs for US college graduates have become scarce in some fields where they used to be available.
Back to the Very Very Basics [Whatever]


For reasons that are not important now, I have found myself in the possession of a lightly used but still somewhat recent Asus Chromebook, of the sort that one can pick up for less than $200, with 4GB RAM, 64GB of onboard storage, a less than spectacular screen resolution, and a keyboard without backlighting, which means on this dark gray version that once the lights dim, its usefulness will be compromised for all but the most talented of touch-typists. It’s been a while since I’ve used something this basic (I’m writing this piece on it now), and inasmuch as my daily driver laptop is a reasonably specced-out M4 MacBook Air, I was curious how I would feel about stepping down from that.
Answer: I… don’t hate it? I don’t love it, to be clear, and it’s not something I would likely ever choose over using my Air. And there are some things about it which are pretty egregious, that are clearly the result of this thing clocking in at under $200, most notably a screen that would have to work to be called “washed out,” and a track pad that feels genuinely terrible to use, especially coming from a MacBook, which have what are acknowledged to be the best trackpads in the world. It is as plastic as the day is long, and given the paucity of its RAM and the inevitable end of ChromeOS, this computer is so close to the line between “useful” and “e-waste” that one might as well give it a balancing beam.
On the other hand, the keyboard doesn’t suck to type on; it’s a basic chiclet board but it’s nicely spaced and the keys don’t feel overly mushy. The onboard i/o puts the Air to shame: Both the Air and the Asus have two USB-C ports and a headphone jack, but the ASUS throws in a USB-A port and a mini-SD card slot as well (I don’t suspect that the USB-C ports on the Asus are Thunderbolt, but they can port out to an external display, which ain’t chicken feed). Plus the ASUS webcam has a manual privacy shutter, which, frankly, is a thing every laptop with a camera should have regardless. It’s not the absolute worst! You could spend $200 on much more questionable things!
Every now and again I do the check-in with myself on what might be the bare minimum I would need, in terms of personal possessions, if less than wonderful things came to pass I had to live in deeply reduced circumstances. And without going into great detail about the thinking process about this, one of the things I’ve decided is that if I had an acceptable laptop, that would go a fair way toward my needs in terms of audiovisual entertainment, and personal creativity. A decent laptop is a television, a radio, a window to the world and an instrument of expression.
This Asus is… not up to the task of being my acceptable laptop in this circumstance. Too limited by tech and by software, basically. I’ve been a long-time enjoyer of Chromebooks, and loved my Pixelbook from back in the day. But Chrome ultimately never won the argument that a thin client to the Internet was all you would ever need, and now that ChromeOS is going to be folded into Android at some nearish point, it never will. Chromebooks will go into the west as forever the “second laptop,” the one you used when you didn’t have actual work to do.
(What laptop do I think is probably the closest to my Lowest Acceptable Spec? I think at this point it’s obvious: a MacBook Neo, which has all the advantages of a Chromebook, including price point for some mid-spec Chromebooks, and also can run more complex software that one would need for creative work, and not be totally reliant on an online connection to do it. It’s tempting to say the Neo is overhyped at this point, except I don’t think it actually is; at $600, it basically takes a knife to the Chromebook value proposition for everything but barebones educational use. It’s not the laptop I would want — that’s my Air — but it would certainly do.)
Considering that I do have a MacBook Air, and an iPad Pro with a “Magic Keyboard,” which essentially takes care of all my laptop-ish needs, what might I use this little Chromebook for? Basically, as a guest laptop, if someone visiting needs to do something that requires a full-size keyboard or a screen larger than the one on their phone, but didn’t happen to bring their own laptop with them. And… that’s pretty much it? As I said, I don’t want to entirely discount this laptop; it’s better than I expected for less than $200, and it fulfills its own admittedly modest brief perfectly well. It’s just that I don’t know how much longer this particular brief is going to need to be fulfilled.
— JS
Reproducible Builds (diffoscope): diffoscope 318 released [Planet Debian]
The diffoscope maintainers are pleased to announce the release
of diffoscope version 318. This version
includes the following changes:
[ Chris Lamb ]
* Upload to test PyPI integration.
* Bump Standards-Version to 4.7.4.
[ Manuel Jacob ]
* Remove a misleading comment.
You can find out more by visiting the project homepage.
Developing a cross-process reader/writer lock with limited readers, part 4: Abandonment [The Old New Thing]
We’ve been building a cross-process reader/writer lock with a cap on the number of readers. We concluded our investigation last time by noting that there is a serious problem that needs to be fixed.
That serious problem is abandonment.
Suppose a process crashes while it holds a shared or exclusive lock on our cross-process reader/writer lock. Semaphores don’t have owners, so if a thread terminates while in possession of a semaphore token, that token is lost forever. For our cross-process reader/writer lock, that means that the maximum number of shared acquirers goes down by one, and exclusive acquisitions will never succeed, since they will be waiting for that last token which will never be returned.
A synchronization object that does have the concept of ownership is the mutex, so we can build our reader/writer lock out of mutexes.
The idea here is that instead of claiming semaphore tokens, we claim mutexes. This means that we need one mutex for each potential shared acquisition, plus one more to avoid the starvation problem.
The outline is
HANDLE sharedMutex;
HANDLE tokenMutexes[MAX_SHARED];

struct TimeoutTracker
{
    explicit TimeoutTracker(DWORD timeout)
        : m_timeout(timeout) {}

    DWORD m_timeout;
    DWORD m_start = GetTickCount();

    DWORD Wait(HANDLE h)
    {
        DWORD elapsed = GetTickCount() - m_start;
        if (elapsed > m_timeout) return WAIT_TIMEOUT;
        return WaitForSingleObject(h, m_timeout - elapsed);
    }

    DWORD WaitMultiple(DWORD count, const HANDLE* handles, BOOL waitAll)
    {
        DWORD elapsed = GetTickCount() - m_start;
        if (elapsed > m_timeout) return WAIT_TIMEOUT;
        return WaitForMultipleObjects(count, handles, waitAll, m_timeout - elapsed);
    }
};
We change the return value of the Wait method so it returns the wait result rather than a success/failure. We also add a WaitMultiple method for wrapping WaitForMultipleObjects.
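For concreteness, here is one way the handles might be created so that every participating process opens the same kernel objects (my sketch, not part of the original series; the mutex names are made up):

bool InitializeLock()
{
    // Named mutexes, so all processes that use the lock share them.
    sharedMutex = CreateMutexW(nullptr, FALSE, L"Local\\rwlock-shared");
    if (!sharedMutex) return false;
    for (DWORD i = 0; i < MAX_SHARED; i++) {
        wchar_t name[64];
        swprintf_s(name, L"Local\\rwlock-token-%u", static_cast<unsigned>(i));
        tokenMutexes[i] = CreateMutexW(nullptr, FALSE, name);
        if (!tokenMutexes[i]) return false;
    }
    return true;
}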
Next is a handy helper function.
int WaitResultToIndex(DWORD result)
{
    auto index = result - WAIT_OBJECT_0;
    if (index < MAX_SHARED) return static_cast<int>(index);
    index = result - WAIT_ABANDONED_0;
    if (index < MAX_SHARED) return static_cast<int>(index);
    return -1;
}
The WaitResultToIndex function takes the wait result and returns the index of the acquired mutex, or -1 if no mutex was acquired.
Notice that this code treats the abandoned state the same as the normal wait state. We are assuming that the code can recover from inconsistent data somehow. (For example, maybe the shared and exclusive accesses are to control access to a set of files, so the existing code already has to deal with file corruption.)
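If the protected data cannot tolerate that assumption, a caller could detect abandonment and trigger a recovery step; a small helper in the same style (my addition, not from the article):

bool WaitResultWasAbandoned(DWORD result)
{
    // True if the mutex was acquired because its previous owner died.
    auto index = result - WAIT_ABANDONED_0;
    return index < MAX_SHARED;
}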
All that’s left is to implement the outline.
int AcquireShared()
{
    WaitForSingleObject(sharedMutex, INFINITE);
    auto result = WaitForMultipleObjects(MAX_SHARED, tokenMutexes, FALSE /* bWaitAll */, INFINITE);
    ReleaseMutex(sharedMutex);
    return WaitResultToIndex(result);
}

void ReleaseShared(int index)
{
    ReleaseMutex(tokenMutexes[index]);
}

int AcquireSharedWithTimeout(DWORD timeout)
{
    TimeoutTracker tracker(timeout);
    DWORD result = tracker.Wait(sharedMutex);
    if (result != WAIT_OBJECT_0) return -1;
    result = tracker.WaitMultiple(MAX_SHARED, tokenMutexes, FALSE /* waitAll */);
    ReleaseMutex(sharedMutex);
    return WaitResultToIndex(result);
}

void AcquireExclusive()
{
    WaitForSingleObject(sharedMutex, INFINITE);
    WaitForMultipleObjects(MAX_SHARED, tokenMutexes, TRUE /* bWaitAll */, INFINITE);
    ReleaseMutex(sharedMutex);
}

void ReleaseExclusive()
{
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        ReleaseMutex(tokenMutexes[i]);
    }
}

bool AcquireExclusiveWithTimeout(DWORD timeout)
{
    TimeoutTracker tracker(timeout);
    DWORD result = tracker.Wait(sharedMutex);
    if (result != WAIT_OBJECT_0) return false;
    result = tracker.WaitMultiple(MAX_SHARED, tokenMutexes, TRUE /* waitAll */);
    ReleaseMutex(sharedMutex);
    return result != WAIT_TIMEOUT;
}
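To make the calling pattern concrete, a reader might use the lock like this (my sketch, not from the article):

void ReadSomething()
{
    int index = AcquireShared();
    if (index >= 0) {
        // ... read the protected data ...
        ReleaseShared(index);
    }
}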
The post Developing a cross-process reader/writer lock with limited readers, part 4: Abandonment appeared first on The Old New Thing.
Malware in Proprietary Software - Latest Additions [Planet GNU]
The initial injustice of proprietary software often leads to further injustices: malicious functionalities.
The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.
We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.
Eden: NHS goes to war against open source [LWN.net]
Terence Eden reports that the UK's National Health Service (NHS) is preparing to close almost all of its open-source repositories as a response to LLM tools, such as Anthropic's Mythos, becoming more sophisticated at finding security vulnerabilities. He does not, to put it mildly, agree with the decision:
The majority of code repos published by the NHS are not meaningfully affected by any advance in security scanning. They're mostly data sets, internal tools, guidance, research tools, front-end design and the like. There is nothing in them which could realistically lead to a security incident.
When I was working at NHSX during the pandemic, we were so confident of the safety and necessity of open source, we made sure the Covid Contact Tracing app was open sourced the minute it was available to the public. That was a nationally mandated app, installed on millions of phones, subject to intense scrutiny from hostile powers - and yet, despite publishing the code, architecture and documentation, the open source code caused zero security incidents.
Furthermore, this new guidance is in direct contradiction to the UK's Tech Code of Practice point 3 "Be open and use open source" which insists on code being open.
It's May, and we've been keeping busy [Planet GNU]
All four teams at the Free Software Foundation (FSF) have been working tirelessly the past four months, and we have a lot to show for it!
Joe Marshall: Echoes of the Lisp Listener [Planet Lisp]
The Lisp Machine Listener had an electric close parenthesis. When the user typed a close parenthesis, and this was the close parenthesis that finished the complete form at top level, the form would be sent to the REPL right away with no need to press enter. Here's how to get this behavior with SLY:
(defun my-sly-mrepl-electric-close-paren ()
  "Insert ')' and auto-send ONLY if we are closing a top-level Lisp form."
  (interactive)
  (let ((state (syntax-ppss)))
    (insert ")")
    ;; Safety checks:
    ;; 1. We were at depth 1 (so we are now at depth 0)
    ;; 2. We aren't in a string or comment
    ;; 3. The input actually starts with a paren (it's a form, not a sentence)
    (when (and (= (car state) 1)
               (not (nth 3 state))
               (not (nth 4 state))
               (string-match-p "^\\s-*("
                               (buffer-substring-no-properties
                                (sly-mrepl--mark) (point))))
      (sly-mrepl-return))))
Another cool hack is to get the REPL to do double duty as a command line to the LLM chatbot. When you type RET in the REPL, it will check if the input is a complete lisp form. If so, it will send the form to the REPL as normal. If not, it will send the input to the chatbot. Here's how to do this:
(defun my-sly-mrepl-electric-return ()
  "Send to Lisp if it's a form/symbol, or wrap in (chat ...) if it's a sentence."
  (interactive)
  (let* ((beg (marker-position (sly-mrepl--mark)))
         (end (point-max))
         (input (buffer-substring-no-properties beg end))
         (trimmed (string-trim input)))
    (cond
     ;; If it's empty, just do a normal return
     ((string-blank-p trimmed)
      (sly-mrepl-return))
     ;; If it starts with a paren, quote, or hash, it's definitely a Lisp form
     ((string-match-p "^\\s-*[(#'\"]" trimmed)
      (sly-mrepl-return))
     ;; If it's a single word (no spaces), treat it as a symbol/form (e.g., *package*)
     ((not (string-match-p "\\s-" trimmed))
      (sly-mrepl-return))
     ;; Otherwise, it's a sentence. Wrap it and fire.
     (t
      (delete-region beg end)
      (insert (format "(chat %S)" trimmed))
      (sly-mrepl-return)))))
Install as follows:
;; Apply to SLY MREPL with a safety check for the mode map
(with-eval-after-load 'sly-mrepl
  (define-key sly-mrepl-mode-map (kbd "RET") 'my-sly-mrepl-electric-return)
  (define-key sly-mrepl-mode-map (kbd ")") 'my-sly-mrepl-electric-close-paren))
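This assumes a chat function on the Common Lisp side; what it does is up to you. A trivial placeholder (my sketch, just to make the round trip visible) could be:

(defun chat (text)
  ;; Placeholder: echo the sentence.  A real version would hand TEXT to
  ;; whatever LLM client you use and print the reply.
  (format t "~&[chat] ~A~%" text)
  (values))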
At The Speed of Hell [Penny Arcade]
The only time I feel bad for not having a newer console is when Housemarque drops something. With a pedigree that goes back to the Amiga, they have developed and honed their taste to Jamon Iberico levels - I wouldn't be surprised if they let their games feed on acorns, free-range. They've perfected arcade feel, itself a kind of artisanal, out-of-time style, and now with Returnal and (from what Mork tells me) Saros, they've mastered progression as well.
I think Saros is a super fun game whenever it isn’t trying to tell me whatever this story is. The bullet hell gameplay is really well done and if that’s all the game was I would probably love it, but they have layered in this inscrutable story that I find completely uninteresting and unnecessary. Jerry and I often say that something is “too grand for chicken” when trying to explain that a thing can be simple and great without needing an extra layer of gravitas. I will keep playing it, but that is how I feel about Saros.
Urgent: Pass Farm Bill [Richard Stallman's Political Notes]
US citizens: call on your congresscritter and senators to pass a Farm Bill that helps families put food on the table.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Call senators to vote for S. J. Res. 99 [Richard Stallman's Political Notes]
US citizens: call on your senators to vote for S. J. Res. 99, which would protect authorized foreign workers who have filed for renewal of that authorization from being expelled because government agencies were late in responding.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Block corrupter's UAE bailout [Richard Stallman's Political Notes]
US citizens: call on your congresscritter and senators to block the corrupter's UAE bailout.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Support Rashida Tlaib's Lebanon War Powers Resolution [Richard Stallman's Political Notes]
US citizens: call on your representative and senators to support Rashida Tlaib's Lebanon War Powers Resolution.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Impeach FBI director Patel [Richard Stallman's Political Notes]
US citizens: call on Congress to impeach FBI director Patel.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Urgent: Tax the Rich and EXPAND Social Security [Richard Stallman's Political Notes]
US citizens: call on Congress to tax the rich and EXPAND Social Security, instead of capping Social Security benefits.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
Urgent: Shut down corrupter's prescription drug scam [Richard Stallman's Political Notes]
US citizens: call on your state Attorney General to shut down the corrupter's prescription drug scam.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
A Bridge to Somewhere: How to Link Your Mastodon, Bluesky, or Other Federated Accounts [Deeplinks]
One of the central promises of open social media services is interoperability—the idea that wherever you personally decide to post doesn’t require others to be there just to follow what you have to say. Think of it like a radio broadcast: you want to reach people and don't care where they are or what device they're using. For example, in theory, a Bluesky user can follow someone on Mastodon or Threads without having to create a Mastodon or Threads account. But these systems are still a work in progress, and you might need to tweak a few things to get it working correctly.
Right now, broadcasting your message across social platforms can be a funky experience at best, deliberately broken up by oligopolists. The idea of the open web was baked into the internet via protocols like HTML and RSS that made it easy for anyone to visit a website or follow most blogs. The fact social media isn’t similarly open reflects an intentional choice to privatize the internet.
Bridging and managing your posts so they’re viewable outside a singular source is part of the broader philosophy of POSSE, short for Post Own Site Syndicate Elsewhere (sometimes it’s Post Own Site, Share Everywhere). Instead of managing several accounts across different services, you post once to one primary site (which might be your personal website, or just one social media account), then set it up so it automatically publishes everywhere else. This way, it doesn’t matter where you or your audience is, and they're not walled off by account registration requirements.
We’ll come back around to POSSE at the end of this post, but for now, let’s assume you just want your current main open social media account to actually have a chance to reach the most people it can.
Because the Fediverse and ATmosphere use different protocols, we need to use a third-party tool so accounts can communicate with each other. For that, we’ll need a bridge. As the name suggests, a bridge can connect one social media account to another, so you can post once and spread your message across several places. This isn’t just some niche concept: major blogging platforms like Wordpress and Ghost integrate posting to the Fediverse.
Bridging is an important facet of POSSE, but also something more people should consider, even if they don’t run their own websites. For example, if you don’t want to create a Threads account just to interact with your one friend who uses that platform, you shouldn’t have to. The good news is, you don’t. There are several bridging services, like Fedisky, RSS Parrot, and pinhole, but Bridgy Fed is currently the simplest to use, so we’ll focus on that.
From your Mastodon account (or other Fediverse account; for simplicity’s sake we’ll stick to Mastodon throughout), search for the username @bsky.brid.gy@bsky.brid.gy and follow that account. Once you do, the account will follow you back and you’ll be bridged, and people can find you from their Bluesky account. You should also get a DM with your bridged username. If you don’t see the @bsky.brid.gy@bsky.brid.gy user when you search, your Mastodon instance may be blocking the bridging tool.
Threads users who have enabled Fediverse sharing will be able to find you with your standard Mastodon username (ie, @your_user_name@mastodon.social), but if they haven’t enabled sharing, they will not be able to see your account. While this search is still a beta feature, you might find it easier to share the full URL, which would look like this: https://www.threads.net/fediverse_profile/@your_user_name@mastodon.social
People on Bluesky can find you by: either searching for your Mastodon username or, if that doesn’t work, @your_user_name.instance.ap.brid.gy. For example, if your username is @eff@mastodon.social, it would appear as @eff.mastodon.social.ap.brid.gy.
An example of a Mastodon username from the Bluesky web client.
Yes, Threads is technically on the Fediverse, and you can bridge your Threads account to Mastodon or Bluesky (unless you’re in Europe, where the feature is disabled), but it’s a different process than on Bluesky and Mastodon.
From your Threads account, search for and follow the @bsky.brid.gy@bsky.brid.gy account (it may take some digging to find it, but if that doesn’t work you can try visiting the profile page directly).
People on Mastodon (or other Fediverse accounts) and Bluesky can find you by: Mastodon users can find you at @your_threads_username@threads.net, while Bluesky users will find you at @your_threads_username.threads.net.ap.brid.gy (seriously, that will be the username). Note that some Mastodon instances may block Threads users entirely.
An example of a Threads username from the Mastodon web client.
An example of a Threads username from the Bluesky web client.
From your Bluesky (or other ATProto) account, search for the username, “@ap.brid.gy” and follow that account. Once you do, the account will follow you back and you’ll be bridged, so people can follow you from Mastodon or other Fediverse accounts. You should also get a DM with your bridged username.
People on Mastodon (or other Fediverse accounts) and Threads can find you by: Your username will appear as @your_bluesky_username@bsky.brid.gy. For example, if your Bluesky username is @eff.bsky.social, it would appear as @eff.bsky.social@bsky.brid.gy.
An example of a Bluesky username from the Mastodon web client.
You can bridge more than social media accounts. If you have your own website, you can bridge that too (as long as it supports microformats and webmention, or an Atom or RSS feed; if you have a blog, there’s a good chance you’re already good to go). When you do so, the bridged account will either post the full text (or image) of whatever you post to your personal site, or a link to that content, depending on how your website is set up. You’ll also probably want to log into your Bridgy user page so you can manage the account.
Where people can find your bridged account: Usually, a user can just search for your website’s URL on their decentralized social network of choice, or enter it on the Bridgy Fed page. But if that doesn’t work, they can try @yourdomain.com@web.brid.gy from Mastodon or @yourdomain.com.web.brid.gy from Bluesky.
An example of a bridged website username in the Mastodon web client.

As mentioned up top, there’s a lot more you can do, and an increasing number of tools are making this process simpler. Bridgy Fed is one way to post to more places from a single account, but it’s far from the only way to do so. Here are just a few examples.
Of course, there are plenty of other tools, blogging platforms, and other utilities out there to help facilitate posting and bridging accounts, with new ones coming along every day.
With proper support, time, and effort, eventually we will all be able to seamlessly interact across platforms, take our follows and followers to other services when a platform no longer suits our needs, and interact with a variety of web content regardless of what platform hosts it. Until then, we still need to do some DIY work, support the services we want to succeed, and push for more platforms and services to support federated protocols.
BTW, I pointed to the Wikipedia page for XML-RPC, and noticed that they point to an archive.org copy of a very old version of the website, instead of the updated site which has new reference code written in JavaScript. The old version of the site used Frontier, which is where XML-RPC was developed, but it's not in wide use these days, JavaScript is. Could someone update the Wikipedia page to change the link to the current XML-RPC site? I'm reluctant to do it myself because that's somewhat against the rules.
Dirk Eddelbuettel: binb 0.0.8 on CRAN: Maintenance [Planet Debian]

The eighth release of the binb package, and first in two years, is now on CRAN and in r2u. binb regroups four rather nice themes for writing LaTeX Beamer presentations much more easily in (R)Markdown. As a teaser, a quick demo combining all four themes is available; documentation and examples are in the package.
This release contains regular internal updates to continuous integration, URL references, and a switch to Authors@R. The trigger for the release, though, was a small update needed when very recent pandoc versions (as shipped with RStudio) are used, which require a new variable declaration in the LaTeX template files in order to process uncaptioned tables. The summary of changes follows.
Changes in binb version 0.0.8 (2026-05-01)
Small updates to documentation URLs and continuous integration
The package now uses Authors@R in DESCRIPTION
Newer pandoc versions are accommodated by adding a required counter variable in the latex template file
CRANberries provides a summary of changes to the previous version. For questions or comments, please use the issue tracker at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.
[$] Version-controlled databases using Prolly trees [LWN.net]
Modern databases and filesystems make pervasive use of B-trees, which are tree structures optimized for storing sorted lists of keys and values on block devices. Dolt is an Apache 2.0-licensed project that makes clever use of a variant of a B-tree to support efficient version control for an entire database. The data structure it uses could well be of interest to other projects.
Welcome to May, the fairest month of all in the NYC area. Almost every day in May is delicious. And today is esp fair, the Knicks moved on to the next round of the playoffs with a record-setting decisive victory over the Hawks of Atlanta. The next opponent is either the Celtics of Boston or the Sixers of Philadelphia. I was already burned out on the playoffs last week, but I'm rejuvenated. Let's go. And I apologize about all the "realistic" things I said about Brunson. He caught fire in the last three games, and showed he has the determination we need to go all the way this year. The Knicks are great because unlike the Yankees or Mets, they unify the city. And as everyone learns, NYC is so huge that the fanbase can pack arenae all over the US, as they chanted the name of OG and MVP for Brunson and Dooooooce when McBride shoots.
If they don't get to the finals it will not be for lack of talent or determination. There will be luck and Acts of God involved in the outcome.
On this day in 2016 I wrote a screed on Facebook saying how I wanted to turn it into a blogging platform including the how and why. The arguments are roughly the same ones about how I want Bluesky to stop paying homage to the limits of Twitter and cozy up to the web and let's do writing for real, undo the damage caused by Twitter in its over 20-year life. The requests in both cases fell on deaf ears. So we are where we were in 2016, we have to replace Bluesky with the writing system of the web. And there is a silver lining to Automattic's excursion into a mini-version of WordPress that looks and behaves like Twitter. They used RSS to glue the systems together. It was convenient, and that's one of the major selling points of RSS, it is convenient. It's supported everywhere (except the offspring of Twitter). So thanks for that. I'm still glued to this cause. I don't want to retire until writing on the web gets back on track.
Apparently Substack does not implement MCP, which is basically the XML-RPC of AI. According to ChatGPT they have a limited API that some independent developers have bridged to MCP. But as you would expect from a tight silo like Substack, the API lets you read but not write. They want you to use their editor, what they don't want is to be one of 20 distributors of your writing. They want an exclusive and they get it.
The backup of this blog for April in OPML format.
Security updates for Friday [LWN.net]
Security updates have been issued by AlmaLinux (fence-agents), Debian (chromium, dovecot, and kernel), Fedora (chromium, dotnet10.0, dotnet8.0, dotnet9.0, emacs, glow, jfrog-cli, openbao, pyp2spec, python3.6, rust-rustls-webpki, vhs, and xen), Oracle (grafana, grafana-pcp, PackageKit, sudo, vim, and xorg-x11-server), Red Hat (rhc), SUSE (avahi, bouncycastle, chromium, container-suseconnect, firewalld, gdk-pixbuf, grafana, java-25-openjdk, kernel, libixml11, libmozjs-140-0, libpng12-0, libsodium, libssh, mariadb, Mesa, ntfs-3g_ntfsprogs, openCryptoki, openexr, packagekit, prometheus-postgres_exporter, python-jwcrypto, python-mako, python-Pygments, python-pynacl, python311, python311-pyOpenSSL, python315, radare2, sed, and vim), and Ubuntu (kmod and zulucrypt).
Error'd: Parametric Projection [The Daily WTF]
Roger C. gets on second base with an unforced error. "Not only is the content too large, the error message informing us of this is also too large to fit the visible space. A layered, double WTF."
"AWS Spellcheck Fail!" alerts Peter "If only someone at AWS knew the correct paramters to activate the spellcheck."
"How long is too long for a job to be open? " wonders Lincoln K. "I didn't even know LinkedIn existed 61 years ago, let alone was accepting postings... Though only 81 applicants in that time is hardly an impressive turn-out." For a "Vice President Operations and Quality Control", no less.
An anonymous Richard reports "This came through my door. On a card that, in order to get to my door, had my full address printed on it, including my ."
Oenophile Abroad Michael R. shares "My Macbook broke after being "exposed" to red wine. As a German in London it pleases me to see that the repair shop offers this time granularity."
League of Canadian Superheroes – Issue 5 – 16 [Comics Archive - Spinnyverse]
The post League of Canadian Superheroes – Issue 5 – 16 appeared first on Spinnyverse.
A Ransomware Negotiator Was Working for a Ransomware Gang [Schneier on Security]
Someone pleaded guilty to secretly working for a ransomware gang as he negotiated ransomware payments for clients.
GNU Health featured at the Cyber|Show UK [Planet GNU]
GNU Health at the Cyber|Show!
Grab a coffee and listen to the 40-minute interview Andy Farnell and Helen Plews did with Luis Falcón in their wonderful show. ❤️
They covered key aspects of citizen and patient data privacy, hospital management, federated health networks, genomics, and wearables. In the interview they also talked about the risks associated with commercial, closed-source electronic health record systems and proprietary mobile applications.
The interview reveals how crucial Free/Libre software is for equity and digital sovereignty in our societies. 🩺 🏥 🧬 👇️
https://cybershow ... pisodes.php?id=64
About Cyber|Show: https://cybers ... w.uk/about.php
Get this and the latest news about GNU Health from our official Mastodon account: https://mastodon. ... social/@gnuhealth
Tags: #GNUHealth #GNU #OpenScience #PublicHealth #Privacy #FreeSoftware #SocialMedicine #CyberShow
“The most exciting mobile trend is full Qwerty keyboards” [Seth's Blog]
The creators of the Blackberry were sure that customers loved the keyboard. That’s what they heard all day from their users, and it must have been right since they had a huge share of the mobile phone market.
When the iPhone came out, it wasn’t seen as a threat because it had no keyboard. Blackberry was in the keyboard business; the iPhone sold something else.
We make this mistake more often than we imagine, and it’s worth looking at.
RIM, the makers of the Blackberry, didn’t actually sell keyboards. They sold the network. It’s easy to see this if you realize that a single Blackberry (with no one to connect to) was worthless, but an iPhone with millions of users and no keyboard is priceless.
Within three years, RIM went from dominating the market and reaping huge profits to essentially zero market share.
Instead of defending the keyboard, they could have defended the network.
They thought they made little boxes with batteries, but they actually made a network and gave their IT customers a story.
The heart of their customer base was business people, using business funds to pay for a business device. They wanted connection, success, and security. Freedom from fear dances with affiliation and status all day.
RIM could have offered IT departments exactly what they wanted–the chance to tell their bosses that they had control. Deniability. Security. The ability to monitor traffic and retain (or delete) information.
By defending the network, they would have made it difficult for any of these users to eagerly switch to a different network, one that their peers weren’t on.
Instead of selling devices, RIM could have sold seats. At $45 a month (bring your own device), it would have been a bargain.
The hardware process was a sunk cost, a warehouse full of liability that felt like an asset.
We get hooked on our past wins (and our fears of past losses) instead of understanding the value we’re able to provide.
A Blackberry iPhone app would compete with their own devices in a way where RIM couldn’t lose. Feed the network first. Give people what they actually wanted (connection and status), not what they said they wanted (a faster way to type).
At The Speed of Hell [Penny Arcade]
New Comic: At The Speed of Hell
And Now, a Fairly Noisy Cover of “First of May” [Whatever]

“First of May” being of course Jonathan Coulton’s immortal celebration of spring, love, and outdoor recreation, possibly the most gentle song ever to drop multiple f-bombs. I thought, what if “First of May,” but with lots of drums and buzzy guitars? The answer to this question awaits you when you click on the video.
Fun fact: The basis for this version of the song is a previous cover version I did with an acoustic tenor guitar, eight years ago. I took that version, ran it through Logic to separate the guitar and vocal tracks, and then slathered the guitar in feedback and added an additional vocal track (along with other programming). It was not less work than just recording from scratch. It was still fun.
Note: This song is generally not safe for work, unless work lets you blast music with lots of f-bombs. In which case, crank it, baby.
Welcome to May!
— JS
Girl Genius for Friday, May 01, 2026 [Girl Genius]
The Girl Genius comic for Friday, May 01, 2026 has been posted.
Waking Up, p14 [Ctrl+Alt+Del Comic]
The post Waking Up, p14 appeared first on Ctrl+Alt+Del Comic.
Junichi Uekawa: A rainy day starts May. [Planet Debian]
A rainy day starts May.
Dilly-Dallying In Denver: Day 4 (The Final Day!) [Whatever]
It is the last day of April and I am finally posting the final part of my time in Denver, which was literally almost two months ago now, but that’s neither here nor there. On the fourth day, one of Alex’s other friends from college flew in for their birthday as well, and got there very early in the morning. So all three of us got into shenanigans today!
You always have to start out the day with going to a cute coffee shop, so we went to Savageau Coffee & Ice Cream.

This little coffee shop had a really cool layout, with a whole wall of different, framed mirrors. I ended up getting a white chocolate and pistachio flavored iced matcha:

With coffees in hand, we headed over to the Denver Botanic Gardens. I was extremely excited to visit the botanical gardens, as I love flowers. Things were just barely starting to bloom in the still chilly spring air. Heck, it had snowed two nights before, so I was partially expecting everything to be dead. And while a lot of plants were still dormant, there was plenty to see.
Alex had actually just been gifted a membership to the gardens, so they used two of their guest passes on us, which was really nice. I believe it’s about twenty dollars for standard adult admission, otherwise.
I took a lot of flower photos, and it was difficult to decide which ones to show y’all. I ended up picking out ones that are purple and pink, because those are my two favorite colors. So enjoy these handful of shots from our time walking around at the gardens:






The Denver Botanic Gardens had so many beautiful orchids, most of which were in glass cases or on huge display carts. They were absolutely stunning and they had a wide array of colors. Orchids are one of my favorite flowers, so these were very cool to see.
The gardens were such a nice experience. I just love walking through trails with different plant life all along the sides and learning about new flowers. The gift shop was really cool, too! There was a huge variety of items, but I only ended up getting a couple pins. All in all a successful outing.
We left just in time to head to our early dinner reservations at Ash’Kara. This was another restaurant where we wanted to partake in their Restaurant Week offerings, but we actually showed up at 4pm, and the dinner service (including the Restaurant Week stuff) didn’t start until 5. So we actually ended up sitting and enjoying Ash’Kara’s happy hour for a little bit before we got to have our actual meals. Thankfully, they weren’t busy at all and let us hang out whilst we waited for 5pm to roll around.
I really loved the interior of Ash’Kara. It’s very colorful and eclectic, has cool light fixtures, and has a lovely bar.

Here’s their happy hour menu:

And the beverages:

While these drinks definitely sounded good, I ended up ordering a mocktail. This was their cucumber spritz, which is just cucumber syrup, lemon, and soda water:

And Alex got another one of their mocktails, the passion-hibiscus spritz, with passion tea syrup, hibiscus, lemon, and soda water:

I loved these glasses, they remind me a lot of Jupiter glass but with a more ornate design. Both of these drinks were super light and refreshing without being too sweet, as mocktails sometimes can be. I actually ended up getting Alex’s drink for my second one because I liked it so much, but both were great choices.
We wanted to get a couple happy hour food items, but didn’t want to fill up too much before we ate our actual dinner. We ended up ordering the Castelvetrano olives:

Castelvetrano olives just so happen to be my favorite type, so these olives with orange zest and Calabrian chili were absolutely delish. They were bright, briny, and really packed a punch. They were easily shareable and a great start to the rest of our meal.
We also got their pickled veggie platter:

If you like briny, pucker-worthy pickles, this is the appetizer for you. Crunchy, fresh veggies with a ton of pickle-y bite to them. I liked the pickles the best, just because the carrots were hard for me to bite through (I have sensitive teeth).
And for our final shareable, we got the fried halloumi and panisse:

Oh my goodness, look at that golden brown color. That is picture perfect right there. While I absolutely love fried halloumi, I wasn’t sure what panisse was. You can really tell a difference between the cubes of panisse and the halloumi, too. My friends didn’t know either, so we looked it up and they are essentially chickpea fritters, like polenta but made with chickpea flour and then fried.
The fried halloumi was the best I’d ever had. It was hot and crispy, and the cheese squeaked like a Wisconsin cheese curd. The panisse was soft and pillowy on the inside, and I was happy to try something I had never heard of before. This was an absolutely bomb starter and we all really enjoyed it.
Finally, it was time to view the Restaurant Week menu. Set at $45 a person, here’s what we were looking at:

This one turned out a little blurry, so let me walk you through the different options and tell you what everyone got.
For the first course, you basically pick between four dip options. There’s hummus, htipiti, labneh, and babaganoush. You can also add on pita, pickles, fries, and olives, but whatever dip you chose did come with your own naan as a vehicle for your dip.
I got the labneh, Alex got the hummus, and Alex’s friend got the babaganoush:

My labneh came with roasted grapes, sumac honey, sesame seeds, and chives. The hummus had a sprinkle of paprika and chopped parsley on top. The babaganoush had a paprika oil on top with crispy shallots and some microgreens.
All three of the dips were so divine. My labneh was so creamy, and the texture worked really well with the soft grapes and tiny crunch from the sesame seeds. The hummus was excellent, and had plenty of garlicky flavor without being overpowering. The babaganoush might’ve been the star of the show, with the savory, roasty flavor of the eggplant and perfectly crunchy shallots. The naan our dips were served with was warm and soft. All three of us were eating each other’s dips because they were all so good. The labneh and babaganoush are a must-try.
We also added on an order of Za’atar fries:

I love za’atar and think it is an underutilized spice in many people’s cooking, so it was awesome to see za’atar fries. These were hot, fresh, crispy fries with just the right amount of herbaceous and saltiness from the za’atar.
For course two, you could choose between salad and falafel. Alex and their friend got the falafel:

I got the Fattoush salad:

This salad had chicory, pickled red cabbage, pomegranate arils, fried sage, roasted delicata squash, and naan breadcrumbs with a shrub vinaigrette. Oh my gosh, this salad was bomb. So many different textures and flavors happening here, yet nothing contrasting in a negative way. Crunchy pickled cabbage, soft roasted squash, fresh greens, and tart pomegranate, it was a beautiful dish. I really loved this salad.
For our final course, we could choose between braised lamb shoulder, lemon pepper salmon, or a roasted cabbage dish. While Alex and I got the lamb, their friend got the roasted cabbage:

I almost got this, and when I saw it I knew I wouldn’t have regretted my choice if I had. With tons of caramelization on the roasted cabbage and plenty of caramelized onions, it looked so flavorful atop that soft basmati rice.
Here was our lamb shoulder:

There were a lot of words that accompanied the lamb shoulder description that I didn’t recognize, and I had to ask the waiter about several of them. The lamb is served with a sweet potato tershi. While I love sweet potatoes, I didn’t know what a tershi was. Turns out, it’s like a dip or a spread that is typically made from pumpkin or squash, and is usually spicy or at least warmly spiced. Thankfully, this version wasn’t very spicy, just nicely spiced. It also had zhug, which is sort of like pesto, but with cilantro and parsley instead of basil, and different spices like cumin. There was also hawaji in the dish, which is a Yemeni spice blend I’d never heard of. Now, I did already know what kataifi is, and it’s the crispy shredded phyllo you see on top.
Now that we know what everything is, this dish was incredibly delicious. Super tender lamb and soft sweet potatoes contrasting the crunchy kataifi. The bright, fresh, herbaceous zhug lightened up the rich, warm flavors of the lamb. This dish was so unique and unlike any lamb I’d had before. I highly recommend this dish if you like lamb, or if you’ve never had lamb and are curious to try it. This dish would be the perfect introduction to it.
Ash’Kara was a really awesome culinary experience. There are pretty much no Mediterranean restaurants around where I live, so experiencing this amazing cuisine was such a treat. I absolutely loved all the different flavors and unique dishes I got to try. I would a hundred percent revisit Ash’Kara if I go back to Denver.
So, that’s pretty much everything I did for my few days in Denver! Tons of amazing food, great drinks, cool museums, awesome flowers, and of course, friends.
For the rest of my time in Colorado (which was about another three days), we went out to Palmer Lake and stayed in an AirBnb with more of Alex’s friends. It was a lovely mountain lodge and we had a lot of fun, and I made this charcuterie board:

This board had dill Havarti, a red wax Gouda, double creme brie, drunken goat, and whipped hot honey goat cheese. Plus prosciutto and salami, smoked salmon, jalapeno and garlic stuffed olives, pickles, cheddar crisps, and candied pecans. Aside from the dijon mustard, Alex’s mom makes jams and spreads, so we used her cherry berry, apricot mango, and blackberry spreads. I also threw together a sweet treat board:

Alex requested two things: blackberries and strawberries. Trader Joe’s (where we got literally all of this from) had these special white strawberries called pineberries that were supposedly really good, so we gave them a shot. There’s also mini peanut butter cups, milk chocolate covered pretzels, and then these super yummy little mousse cakes. There’s the raspberry mousse ones with vanilla cake, and the chocolate ones. They were ridiculously good.
Anyways, aside from enjoying our time in the cabin playing games and whatnot, we also saw the Red Rocks Amphitheater (not attending a concert, just saw it regularly), and the Garden of the Gods. The Garden of the Gods was honestly such an amazing experience; the beauty of it all brought a tear to my eye. I highly recommend checking it out. Who knew rocks could be so awe-inspiring.
The last thing I have to post about is the Denver airport, and it might be for different reasons than you’d expect! So stay tuned for the actual final post about Denver.
Have you visited the botanical gardens before? Was it when everything was more.. alive? What would you have ordered from Ash’Kara? Do you like lamb? Let me know in the comments, and have a great day!
-AMS
Utah’s New Law Targeting VPNs Goes Into Effect Next Week [Deeplinks]
For the last couple of years, we’ve watched the same predictable cycle play out across the globe: a state (or country) passes a clunky age-verification mandate, and, without fail, Virtual Private Network (VPN) usage surges as residents scramble to maintain their privacy and anonymity. We've seen this everywhere—from states like Florida, Missouri, Texas, and Utah, to countries like the United Kingdom, Australia, and Indonesia.
Instead of realizing that mass surveillance and age gates aren't exactly crowd favorites, Utah lawmakers have decided that VPNs themselves are the real issue.
Next week, on May 6, 2026, Utah will become, to EFF’s knowledge, the first state in the nation to target the use of VPNs to avoid legally mandated age-verification gates. While advocates in states like Wisconsin successfully forced the removal of similar provisions due to constitutional and technical concerns, Utah is proceeding with a mandate that threatens to significantly undermine digital privacy rights.
Formally known as the “Online Age Verification Amendments,” Senate Bill 73 (SB 73) was signed by Governor Spencer Cox on March 19, 2026. While the majority of the bill consists of provisions related to a 2% tax on revenues from online adult content that is set to take effect in October, one of the more immediate concerns for EFF is the section regulating VPN access, which goes into effect this coming Wednesday.
The new law explicitly addresses VPN use in Section 14, which amends Section 78B-3-1002 of existing Utah statutes in two primary ways:
By holding companies liable for verifying the age of anyone physically in Utah, even those using a VPN, the law creates a massive "liability trap." Just as we argued in the case of the Wisconsin bill, if a website cannot reliably detect a VPN user's true location and the law requires it to do so for all users in a particular state, then the legal risk could push the site either to ban all known VPN IPs or to mandate age verification for every visitor globally. This would subject millions of users to invasive identity checks or blocks to their VPN use, regardless of where they actually live.
In practice, SB 73 is different from the Wisconsin proposal in that it stops short of a total VPN ban. Instead, it discourages using VPNs by imposing the liability described above and by muzzling the websites themselves from sharing information about VPNs. This raises significant First Amendment concerns, as it prevents platforms from providing basic, truthful information about a lawful privacy tool to their users.
Unlike previous drafts seen in other states, SB 73 doesn't explicitly ban the use of a VPN. Under a "don't ask, don't tell" style of enforcement, websites likely only have an obligation to ask for proof of age if they actually learn that a user is physically in Utah and using a VPN. If a site doesn’t know a user is in Utah, its broader obligation to police VPNs remains murky. So, while SB 73 isn’t as extreme as the discarded Wisconsin proposal, it remains a dangerous precedent.
Then there is also the question of technical feasibility: Blocking all known VPN and proxy IP addresses is a technical whack-a-mole that likely no company can win. Providers add new IP addresses constantly, and no comprehensive blocklist exists. Complying with Utah’s requirements would require impossible technical feats.
The internet is built to route around censorship, and it always will. If Utah successfully hampers commercial VPN providers, motivated users will transition to non-commercial proxies, private tunnels through cloud services like AWS, or residential proxies that are virtually indistinguishable from standard home traffic. These workarounds will emerge within hours of the law taking effect. Meanwhile, the collateral damage will fall on businesses, journalists, and survivors of abuse who rely on commercial VPNs for essential data security.
These provisions won't stop a tech-savvy teenager, but they certainly will impact the privacy of every regular Utah resident who just wants to keep their data out of the hands of brokers or malicious actors.
Lawmakers have watched age-verification mandates fail and, instead of reconsidering the approach, have decided to wage war on privacy itself. As the Cato Institute states:
“The point is that when an internet policy can be avoided by a relatively common technology that often provides significant privacy and security benefits, maybe the policy is the problem. Age verification regimes do plenty of damage to online speech and privacy, but attacking VPNs to try to keep them from being circumvented is doubling down on this damaging approach."
Attacks on VPNs are, at their core, attacks on the tools that enable digital privacy. Utah is setting a precedent that prioritizes government control over the fundamental architecture of a private and secure internet, and it won’t stop at the state’s borders. Regulators in countries outside the U.S. are still eyeing VPN restrictions, with the UK Children’s Commissioner calling VPNs a “loophole that needs closing” and the French Minister Delegate for Artificial Intelligence and Digital Affairs saying VPNs are “the next topic on my list” after the country enacted a ban on social media for kids under 15.
As this law goes into effect next week, we are entering uncharted territory. Lawmakers who can’t distinguish between a security tool and a "loophole" are now writing the rules for one of the most complex infrastructures on Earth. And we can assure you that the result won't be a safer internet, only a less private one.
come closer and see [WIL WHEATON dot NET]
I want to take a moment and say thank you for all the messages of comfort and support that so many of y’all have shared with me since Marlowe passed. I haven’t ever felt this kind of grief, for this long, in my life. When I am feeling the most sad, when I’m sobbing until I can’t breathe, I feel closest to her, so all I can do is go through it, honor it, and embrace her memory.
There’s a dog on Instagram called Wesley the Chicken Nugget. I adore him, and I love it when his person shares photos and video of him being a dog, so I completely understand how we can love animals we’ve never met. I know that lots of you loved Marlowe, and that comforts me every day.
So thank you, from Anne and me, for choosing to be kind.
I had to take a couple weeks off from recording stories for It’s Storytime (I’ve come to believe that four or five weeks of bereavement leave isn’t unreasonable) but we’re back to work and there’s a new story this week that I wanted everyone to know about.
It’s called To Carry You Inside You, by Tia Tashiro. Here’s my intro:
I grew up in the entertainment industry, not by choice, so I had a front row seat to the abuse and exploitation of child actors like myself. I grew up absolutely terrified of upsetting anyone on the set, robotically doing whatever I was told, so I could just get through it and have one of the precious and rare hours of my childhood where I got to just be a kid, before I was ripped out of childhood and thrust back into a place I never wanted to be.
Today, we are going to visit a future where child actors are still exploited, still used up and discarded, facing an adult life without purpose that they were never prepared for, because nobody cared what happened to them past an arbitrary age.
We will meet a young woman who is doing her best to assemble the pieces of a stolen childhood into a fulfilling adult life. It isn’t what she wanted, or would have chosen for herself, but she’s doing her best, which is all any of us can do.
This is one of those examples of speculative fiction that I point to when I talk about the power of storytelling that lands on different people for different reasons. This story isn’t about me, but holy shit is it about me. In fact, when I reached out to Tia and asked for permission to do the narration, I mentioned that she captured the experience of being a child actor so perfectly and honestly, she must have some firsthand experience … imagine my surprise when she told me that she didn’t, that she used her imagination to create those moments.
Holy shit. That’s incredible. Please let me know what you think, if you listen.
Anyway, I’m doing my best to promote the show and just let people know it exists, but I keep getting crushed by the algorithm. On Threads, the posts before and after I talked about the podcast have thousands of views and hundreds of interactions, but my post about this episode has like 20 interactions and has only been seen by about 2,000 of the 5,000,000 accounts that follow me. That seems … odd. And honestly, it’s kind of demoralizing that one of the few direct ways I have to tell people this exists seems to work against supporting that. I’ve tried letting Bluesky know, and the 13 people who tend to notice me there are excited about it, I’m sure, but it just doesn’t seem to get traction there at all. If anyone reading this has experience bringing something to an audience who will probably love it but just doesn’t know about it, I’d be grateful to hear anything you have to say about it.
Last thing, that is explicitly in service of promotion: If you listen to the podcast, you can help me out by rating and reviewing it wherever you are subscribed. The show’s audience is growing slowly but steadily, and I know it isn’t because of me; it’s because listeners are recommending it. That means so much to me. Thank you.
Digital Hopes, Real Power: From Connection to Collective Action [Deeplinks]
This is the fifth and final installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. You can read the rest of the series here.
If the Arab Spring was defined by optimism about what the internet could do, the years since have been marked by a more sober understanding of what it takes to defend it.
Back in 2011, the term “digital rights” was still fairly new. While in the decades prior, open source and hacker communities—as well as a handful of organizations including EFF—had advocated for digital freedoms, it was through the merging of disparate communities from around the world in the 2000s that digital rights came to be more clearly understood as an extension of fundamental human rights.
In 2011, we observed that there were only a few organizations focused on digital rights in the region. Groups like Nawaat, which emerged from the Tunisian diaspora under the Ben Ali regime; the Arab Digital Expression Foundation, formed to promote the creative use of technology; and SMEX, which was initially created to teach journalists and others about social media but has grown to become a powerful force in the region, led the way. Since that time, dozens of organizations have emerged throughout the region to promote freedom of expression, innovation, privacy, and digital security.
Understanding how the digital rights movement evolved in the Middle East and North Africa requires a closer look at the communities that shaped it, and the organizations that are carrying on the fight today. Perspectives from people and organizations that were key to these efforts offer critical insight into how the movement has grown and what challenges lie ahead.
Reem Almasri, a senior researcher and digital sovereignty consultant, says that:
‘Digital rights’ emerged as a term around the Arab Spring, when the internet was still a fairly unregulated space, we were still trying to figure out the tech companies’ policies, and force governments to look at the internet as a fundamental right like water and electricity.
But then the need to converge digital rights to everyday rights—economic, political, social rights—and to connect it to geopolitics has started to be thought about, and to be in discussion as well. And to not look at digital rights as a separate field from everything else that’s affecting it, from the geopolitical context.
Mohamad Najem, who co-founded SMEX in 2008 and has led it to become the largest organization in the region, told me that, at the time, “Nobody gave [social media] a lot of attention in our region.” Their work was “a positive approach to social media, how we can democratize sharing information, how we can share more from civil society, change people’s minds, et cetera.”
“After that phase,” he continues, “we can think about 2012-2013—after the Arab Spring, as an organization we started looking at the infrastructure of the internet, and how freedom of expression and privacy are affected. That’s when we started looking more at what we call digital rights.”
Towards Tech Accountability
In the aftermath of the Arab Spring, social media companies moved from a largely hands-off approach to governance toward more formalized—and often opaque—content moderation systems. Platforms expanded their trust and safety teams and began working more closely with civil society through trusted partnerships in the region and globally. But, Mohamad Najem says:
After the expansion of tech accountability itself and the adaptation of tech companies, we’ve noticed that it’s not taking us anywhere. Gradually we’ve come to a new phase where it feels like tech accountability is an economy by itself that is not leading to real results. So the next phase for us at least and maybe for others in global majority communities is how we can focus on digital public good, how we can push more governments, private and public institutions to adopt more open source software, to look at the ecosystem and understand the US threats happening now, et cetera.
Another group that has played a key role in the fight for digital rights and tech accountability in the region is 7amleh, a Palestinian organization that was founded in 2013. At the time, says Jalal Abukhater:
[I]t was unique and interesting in Palestinian society to have a human rights organization dedicated fully to the topic of digital rights, you know, human rights in a digital format. However, with the years, we saw various milestones, we saw progress of policy decisions and movements through the Israeli government to influence content moderation in Big Tech companies. We saw problems there as an organization.
7amleh took a leading stance in fighting to preserve the digital rights of Palestinians during a period where there was a very strong influence through the Israeli government. There was actually quite important reporting coming through 7amleh on the situation of online content moderation at a time when it wasn’t really a topic being discussed but it was very clearly a situation where there was major influence by government and political suppression happening as a result.
An Ever-Expanding Ecosystem
While the digital rights movement attracted mainly specialists in its early days, today people from other fields have recognized how digital rights intersect with their work, and the digital rights community has embraced them.
Almasri says:
Because the digital rights movement has been decentralizing and has stopped being a speciality, it stopped being an exclusive thing for digital rights specialists, since of course the internet not only in the Arab region but all over the world has become a fundamental infrastructure for running any kind of sensitive operations, or operations in general…all types of organizations, and companies, and initiatives are thinking about their digital security, about how internet laws are affecting the use of the internet, or putting them at risk, and how surveillance technologies are affecting their operations.
Abukhater credits the collaborative work that emerged within the region over the years in building the movement’s strength:
[Today], civil society and digital civil society have many forums, many coalitions and networks, but it’s always important to remember that this is work that builds over many years of experience, and relationships, and networks—that it’s different parties coming to support each other at different phases to ensure that this kind of work succeeds and that this ecosystem is sustained globally with support from partner organizations which were very crucial in ensuring that this ecosystem is sustained, especially in Palestine.
Growing Collaborations
Conferences like Bread & Net, first held in Beirut in 2018, and the Palestine Digital Activism Forum (PDAF), first held in Ramallah in 2017, bring activists, academics, journalists, and other practitioners together to network and learn about each other’s work. The pandemic, conflict, and other barriers haven’t stopped either conference from carrying on: PDAF has become an annual virtual event that draws big-name speakers, while Bread & Net has spaced out its meetings but continues to draw bigger crowds each time.
Almasri credits these meetings with expanding the movement beyond the traditional techies and activists who first got involved. “You see a wide spectrum of different fields. You see artists, archivists, journalists joining these conversations, which is definitely on the brighter side of things when it comes to this field, or this scene.”
She also credits the emergence of alliances such as the Middle East Alliance for Digital Rights (MADR, of which EFF is a member), founded in 2020 by individuals and organizations who had been working together for many years to formalize those collaborations.
“Other than the collaborations at the advocacy level, [MADR] creates a sort of pressure point on Big Tech, on content moderation policies, allows for certain coordination at the level of the UN, et cetera, which I see as really positive because it brings some of the redundant efforts together and helps decide on priorities.”
Looking Forward
In thinking about the future of the movement, Almasri and Najem agree that digital rights are no longer a niche. In Najem’s words, “It’s about everything else…it’s about everything.”
Almasri adds:
[W]hen it comes to priorities, things that this scene has been working on, I feel that October 7 [2023] was a big turning point in the way that digital rights activists, researchers, and academics—this field—is looking at digital rights in general. Of course, there is the major question of the need to revise tactics to fight Israel’s tech-enabled genocide that is also empowered by the global economy, big tech, and governments of the world. What alliances should we start building on a regional and global level?
She sees ‘digital sovereignty,’ the ability of people and communities to choose, control, and use technology that serves their needs and values, as one of the next big topics for the movement to tackle, as debates over who owns and hosts our data have sharpened amid revelations that U.S. companies have played a role in regional conflicts.
There have been pockets of debates on how to achieve digital sovereignty, especially from human rights organizations documenting war crimes … There’s an awareness of how the dependence on US-based providers, cloud storage, even hosting infrastructure is a risk, especially after how using these services has been weaponized against the digital existence of certain organizations in the region that have been deplatformed or had their content removed on platforms like Meta and YouTube because their content doesn’t align with the foreign policy of the United States…so it raises a big question about how we look at digital independence, what is the spectrum of independence that civil society in the region can achieve, and in relation to what’s available as well.
Almasri also points to the role of researchers in the region:
There has been a lot more research on the political economy of surveillance technologies, so not only looking at how governments are using them, but their supply chain, who’s investing in these technologies, and how geopolitical networks empowered their proliferation in the hands of governments.
This is where studies looking at the political economy of AI and the military become important, trying to understand how this field of weapons, the military, and AI grew together as part of this global capitalist system rather than looking at these technologies in silos, that is. Looking at the proliferation of these technologies from a geopolitical point of view, looking at the bigger ecosystem rather than zooming in to the specifics of it. I think this has been a big development in the way that we look at digital rights, and the way that digital rights have been converged and integrated into the geopolitical scene.
As the global digital rights community continues to expand, it’s clear that the questions at its core are no longer just about access or expression, but about power—who holds it, how it is exercised, and who is left out of its protections. What began as a fight to keep the internet open has become a broader effort to reimagine it—an effort that is grappling with questions of infrastructure, ownership, and the global inequalities embedded in both.
And yet, despite the scale of these challenges, the movement’s strength lies in the solidarity, the ecosystems, and the networks it has spent more than a decade building. From the early days of the blogging and techie communities to the increasingly powerful digital rights community, advocates in the region have gone up against dictators, endured war and repression, yet remain determined to push forward.
Requiem for a Back Deck [Whatever]


After 30 years of existence, our back deck is no more… at least for the few days it will take to build the new one. The previous deck had given good service, but over the years it had become splintery and a bit rickety (when the contractor was pulling it up, he pointed out to Krissy the places where the house’s original owner had clearly cut some corners) and it was time to swap it out with something able to withstand the next few decades. On top of that, Krissy wants the deck covered, to make it more comfortable on hotter summer days.
As noted earlier, we already needed our front porch railing redone, so why not get it all taken care of in one swoop. So here we are. It’s still mildly shocking to see the lack of a deck, and I imagine the cats, who are used to wandering around on the back deck, are going to be befuddled for a bit. Fortunately, the new deck will not take too long to put up (knock on the wood that will go into making it).
In the meantime, here’s some dirt! There used to be a deck on it! And there will be again. Soon.
— JS
Sergio Cipriano: My experience at MiniDebConf Campinas 2026 [Planet Debian]
Last week, I spent the entire week in Campinas attending MiniDebConf and MiniDebCamp. The Debian Brazil community organizes this event every year, and this year's edition was the biggest so far.
During MiniDebCamp, I sponsored a few uploads and spent two days teaching packaging to two participants. I usually teach packaging online, so it was refreshing to do it in person; I believe the experience was much better that way.
One of my mentees introduced me to the DDTSS (Debian Distributed Translation Server Satellite). Even though there are many i18n contributors in Brazil, this was my first time learning about this system. I plan to contribute to translations over the next few weeks using DDTSS.
NOTE: I translated every talk title; the original titles are in PT-BR, so some details may have been lost in translation.
I presented three talks and led one BoF session. The talks are all available on Debian's Peertube:
You can also find my slides at people.d.o.
My first talk was a showcase of dh-make-vim, a tool I created and have been using for a few months. Some people tested it and found bugs, which was really nice to see.
My second talk was essentially a live version of my blog post Zero-Code Instrumentation of an Envoy TCP Proxy using eBPF.
I also gave a lightning talk about something many people are not aware of: non-uploading DDs can also sponsor uploads.
If you're interested, this bug report provides more context: tracker.debian.org: Signed by field is missing when sponsoring as DD non-uploading
Finally, I led the BoF session "Experiences, lessons learned, and next steps from the mentoring sessions". This was my favorite session; we had many participants with different perspectives and ideas, which led to a very engaging discussion. I'm still working on the action plans and I plan to release them soon.
Here are some photos of these activities:





This is a list, in no particular order, of some of the sessions I enjoyed the most:
Salsa CI, showing features that almost nobody knows
I wrote a blog post about one of the things I learned in this talk, and there is still a lot more to explore. Aquila Macedo is developing many cool features in Salsa CI.
Free Software: Freedom, Autonomy, Sovereignty
I had been really looking forward to this one. Alexandre Oliva is a very important figure in the Free Software movement, especially in South America. I'll need to rewatch it; my future talks about Free Software will likely be inspired by this one.
What I've lived/seen in 33 Years of Debian & Free Software in general
Eduardo Maçan was the first Debian Developer in Brazil, so it's always valuable to hear the story from someone who was part of it.
Despite the title, this talk was not about astrology! I'll probably rewatch it as well, as there is a lot of information to take in. I really like the passion Sérgio Durigan has for C. He is also a great speaker and knows how to guide the audience through the topic.
Debate: Contemporary controversies in Debian
The debate itself was great, but the conversations we had afterward were even better. I changed some of my opinions after hearing different perspectives. I don't think this format would work at DebConf, but I would definitely like to attend another one like this.
I had a few questions about LTS, and Kanashiro and Santiago answered them both during the talk and in the Q&A session. They also shared some challenges and how to avoid them; it was a great learning experience.
From my first contribution to becoming a Debian Maintainer
Polkorny was a bit shy but did a great job! I really enjoy this kind of talk. It is always nice to see the different paths people take.
Unfortunately, I couldn't attend everything I was interested in, as always.
Sirius is the largest and most complex scientific infrastructure ever built in Brazil and one of the most advanced synchrotron light sources in the world. My jaw dropped the entire time; it's hard to describe how incredible this is.
My favorite detail: they're running Debian :)






I believe this was the best MiniDebConf Brazil so far. There were many other things I chose not to include here, as this post is already quite long. Still, here are a few more highlights:
Email is like those creaking old Terminators from the ’70s which continue to function without complaining. Designed for a world that doesn’t exist anymore, it has optional encryption, no built-in auth, three⁺ retrofitted security layers bolted on top, an unstandardized filtering layer and many more quirks. Yet billions of emails arrive correctly every single day.
Email is not elegant but nonetheless it is Lindy. In the new age of agentic AI, we can only expect it to metamorphose into another dimension.
↫ Saurabh “Sam” Khawase
That email is as complicated as it is would be bad enough, but having it be so dominantly controlled by only a few large gatekeepers like Google and Microsoft surely isn’t helping either. I feel like email is no longer really a technology individuals can actively partake in at every level; it feels much more like WhatsApp or iMessage or whatever, in that we just get to send messages, and that’s it. Running your own mail server isn’t only a complex endeavour, it’s also a continuous cat-and-mouse game with companies like Google and Microsoft to ensure you don’t end up on some shitlist where your emails stop arriving.
I settled on Fastmail as my email service, and it works quite well. Still, I would love to be able to just run my own email server, or have some of my far more capable friends run one for a small group of us, but it’s such a daunting and unpleasant effort few people seem to have the stomach and perseverance for it.
The Big Idea: Brenda W. Clough [Whatever]

Imagine a world where political servants actually served us, and whose decisions were backed by the will of the people, rather than their greed. If it sounds like fantasy, you may want to check out author Brenda W. Clough’s newest novel, Off the Screen. Follow along in her Big Idea, and remember to vote!
BRENDA W. CLOUGH:
I began Off the Screen more than twenty years ago. There’s a couple major drivers of the work, but the big one is the reboot of American democracy. It’s set in 2160, and at that point I felt that the United States could have refurbished its systems somewhat.
But, in those golden 2000s, I abandoned the work because I couldn’t imagine why we would need to tinker with the system of American governance. Everything was fine, the economy was good, Bill Clinton was president and running things reasonably well. I couldn’t figure out any way to get from here to there. And so I closed the ms.
Well! Hah! When I found the manuscript on a thumb drive in 2025, it was obvious why we had a crying need for a reboot! The problem was plain to see: the serious disconnect between the people and the rulers. We, down here, need stuff done, and we can’t get Congress to do it. The Founding Fathers designed the system to be a representative democracy – we elect our two senators and one congressperson, and they go to Washington and do our will. But it’s not working. We need a fix.
This is not a new idea. Many, many political commentators today are saying this. Every time Heather Cox Richardson talks about what we can do in this moment, she calls for new ideas, new thoughts. Oh honey. You are calling my name!
So for this novel I redesigned America. Congress, that useless buffer, is now drastically pruned back. They are our servants, remember. We pay them to do stuff for us, the same way you pay the guy to mow your lawn or fix your car. We do not pay them to fly in private airplanes and feather their nests with insider trading.
In Off the Screen, the citizens vote. All of us, every American every single day, has to vote. A neat system called DiDem, Digital Democracy, is tied to your online life. What do you do when you get up in the morning? Slug down a cup of coffee perhaps, and pick up your cell phone or open your laptop? In this book, when you swipe your cell open, the first thing that comes up is your ballot for the day. You have to do this before you get to open your email, or text your daughter, or check in with the office – it’s the starter screen of every American, and so it gets done.
Every morning you vote on five simple issues, so the process takes perhaps a couple of minutes. You spend longer finding the creamer to put into your coffee, so this is endurable. Each question is a yes/no vote, a KISS feature (Keep It Simple, Stupid) that keeps it down to five taps on the screen. Then you’re free for the rest of the day to download porn or work on your bitcoin, anything. But daily voting in this novel is a requisite for citizenship.
These five questions are necessarily rather crude. Shall we invest in the repair of the Pennsylvania Turnpike? Should we impose economic sanctions upon Boeing? What about invading the Seychelle Islands? Yes or no, make a decision. Once the American people decide, it’s Congress’s job to do it: find the money for the turnpike, declare war against the Seychelles. And then, if that war means we need a bigger Army and maybe a draft, it can go back to DiDem again for more decision making. Do we increase taxes for that bigger army? Do we institute a draft? Yes or no? If we demand the impossible – yes, I want the Seychelles bombed back to the Stone Age, but no I don’t want to pay for it – Congress comes back with another vote: since we won’t pay for this war, do we sue for peace?
And not all questions are important enough to submit to the entire population of the United States of America. If you live in Arizona you may not care about the Penn Turnpike. So, every American votes every day on five questions. But we don’t all see the same five questions. A color-coded system of ranking gets minor questions decided by a smaller segment of the voters. If that first set of voters decides it’s important, it goes up to be voted on by a larger number. So at the end of the day, that decision to invade the Seychelles may get approved by an actual numerical majority of Americans, but it has to pass through a number of lesser votes to get there.
What DiDem gets you is the levers of power in the hands of the people. Congress is demoted to servants, the waiters at the restaurant who take your order and then set the hamburger in front of you. This is delicious to contemplate, isn’t it?
Unfortunately DiDem also means that a lot of stupidity occurs. The international proverb, in this novel, is that Americans cannot agree on which way is up. I think we acknowledge today that people are by and large dumb as stumps. We make idiotic electoral choices that are swayed by crashingly disastrous criteria like fame, race, gender, sexual orientation, wealth, or fingernail color. For heaven’s sake, the Brits voted for Brexit! Even a perfected democracy does not free us from humanity’s innate flaws. Bad political decisions continue to be made in the world of Off the Screen, and I drop my hero Edwin Barbarossa into their chippers.
But he mostly ignores it, because he’s busy with the other Big Idea in this book. Live theater has been slain by AI. Actors exist mainly to be scraped for voices, pretty faces, and luscious boobs. And then someone decides to create the first live original stage musical in a generation. Eddie’s going to write the lyrics and score.
Which means that I had to write the book and lyrics, because they’re in ongoing development through the entire novel. To acquire the rights to quote Sondheim or Oscar Hammerstein would be impossible. Believe it or not, sometimes it’s just easier to write a musical yourself.
And, because the canons of theater demand it, everything comes to a head on opening night: the show, Eddie’s fate, DiDem’s survival. This is the biggest book I have ever written, and if it had appeared in 2000 it would have been magnificently prophetic. But just as well it didn’t. We need it today.
Off the Screen: Book View Cafe
Open Records Laws Reveal ALPRs’ Sprawling Surveillance. Now States Want to Block What the Public Sees. [Deeplinks]
Reporters, community advocates, EFF, and others have used public records laws to reveal and counteract abuse, misuse, and fraudulent narratives around how law enforcement agencies across the country use and share data collected by automated license plate readers (ALPRs). EFF is alarmed by recent laws in several states that have blocked public access to data collected by ALPRs, including, in some cases, information derived from ALPR data. We do not support pending bills in Arizona and Connecticut that would block the public oversight capabilities that ALPR information offers.
Every state has laws granting members of the public the right to obtain records from state and local governments. These are often called “freedom of information acts” (FOIAs) or “public records acts” (PRAs). They are a powerful check by the people on their government, and EFF frequently advocates for robust public access and uses the laws to scrutinize government surveillance.
But lawmakers across the country, often in response to public scrutiny of police ALPRs, are introducing or enacting measures aimed at excluding broad swaths of ALPR information from disclosure under these public records laws. This could include whole categories of important information: general information about the extent of law enforcement use; details on ALPR sharing across policing agencies; data on the number of license plate scans conducted, where they happened, and how many “hits” for license plates of interest actually occur; analyses on how many false matches or other errors occur; and images taken of individuals’ own vehicles.
No thanks. Public records and public scrutiny of ALPR programs have shown that people are harmed by these systems and that retained ALPR data violates people’s privacy. In this moment, lawmakers should not be completely cutting off access to public records that document the abuses perpetrated by ALPRs.
To be sure, there are legitimate concerns about wholesale public disclosure of raw ALPR data. After all, many of the harms people experience from these systems are based on the government’s collection, retention, and use of this information. Public transparency rights should not exacerbate the privacy harms suffered by people subjected to ALPR surveillance. But many current proposals do not address legitimate privacy concerns in a measured way, much less seek to harmonize people’s privacy with the public’s right to know.
There is a better path to balancing privacy and transparency rights than outright bans or total disclosure.
Any legislative proposal concerning public access to ALPR data must start with this reality: ALPR data is deeply revealing about where a person goes, and thus about what they are doing and who they are doing it with. That’s a reason why EFF opposes ALPRs. It is dangerous that the police have so much of our ALPR information. Even worse for our privacy would be for police to disclose our ALPR information to our bosses, political opponents, and ex-friends. Or to surveillance-oriented corporations that would use our ALPR information to send us targeted ads, or monetize it by selling it to the highest bidder.
On the other hand, EFF’s firsthand experience using public records from ALPR systems demonstrates the strong accountability value of public access to many kinds of ALPR data, including information like data-sharing reports and network audits. For example, in our “Data Driven” series, we used ALPR data-sharing and hit ratio reports to investigate the extent of ALPR data sharing between police departments and to analyze the number of ALPR scans that are ultimately associated with a crime-related vehicle. We have also identified racist uses of ALPR systems, ALPR surveillance of protestors, and ALPR tracking of a person who sought an abortion. Across the country, municipalities have been shutting down their contracts for ALPR use, often citing concerns with data sharing with federal and immigration agents.
These records are not just informational—they are leverage. Communities, journalists, and local officials have used ALPR disclosures to block new deployments, refuse contract renewals, and terminate existing agreements with surveillance vendors whose practices proved too dangerous to continue. Without this evidentiary record, it is far harder for cities to exercise their procurement power to say no.
It is not always easy to harmonize transparency and privacy when one person wishes to use a public records law to obtain government records that reveal people’s personal information. The best approach is for public records laws to contain a privacy exemption that requires balancing, on a case-by-case basis, of the transparency benefits versus the privacy costs of disclosure. Many do. These provisions of public records laws already accommodate similar concerns about disclosing personal information of private individuals whose information the government may have collected, government employee’s private data, and other personal information.
The balancing provisions in these laws are often flexible and allow for nuance. For example, if a government record contains a mix of information that does not reveal people’s private information and some that does, agencies and courts can disclose the non-private information while withholding the truly private information. This is often accomplished with blacking out, or redacting, the private information.
Applying this privacy-and-transparency balancing to ALPR records, it will often be appropriate for the government to disclose some information and withhold other information. Everybody should generally have access to records showing their own movements and other information captured by ALPRs, but the privacy protections in public records laws should foreclose a single person’s ability to get a copy of similar records about everyone else. And even with accessing your own data, there are complications with shared vehicles that should be considered when balancing privacy and transparency.
An example of where it may be appropriate to release unredacted data and images would be vehicles engaged in non-sensitive government business. For example, a member of the public might use ALPR scans of garbage trucks to identify gaps in service, which would not reveal private information. On the other hand, it would be inappropriate to release the scans of a government social worker visiting their clients.
Public records laws should allow a requester to obtain some ALPR information about government surveillance of everyone else, in a manner that accommodates the public transparency interest in disclosure and people’s privacy interests. For example, the best public records laws would disclose the times and places that plate data was collected, but not plate data itself. This can be done, for example, by an agency or court finding that disclosing aggregated and/or deidentified ALPR data protects the privacy or other interests of individuals captured within the data. The best laws recognize that aggregation or de-identification of databases are redactions in service of individual privacy (which responding agencies must do), and are not creating new public records (which responding agencies sometimes need not do).
Likewise, in a government audit log of police searches of stored ALPR data, it will often be appropriate to disclose an officer’s investigative purposes to conduct a search, and the officer’s search terms – but not the search term if it is a license plate number. Many people do not want the world to know that they are under police investigation, and many public records laws generally limit the disclosure of such sensitive facts because of the reputational and privacy harm inherent in that disclosure.
Aggregate ALPR information about, for example, the amount of data collected and error rates can have important transparency value and impact government policy. Requiring the public release of that kind of data contributes to informed public discussion of how our policing agencies do their jobs. This kind of information has been used to study, critique, and provide oversight of ALPR use.
Thus, the wholesale exemption of ALPR information from disclosure under state public records laws would stymie the public’s ability to monitor how their government is using powerful and controversial surveillance technology. EFF cannot support such laws.
In Connecticut, SB 4 is a pending bill that would exclude, from that state’s public records law, information “gathered by” an ALPR or “created through an analysis of the information gathered by” an ALPR. This could ultimately harm individual civilians, who would have less ability to protect themselves from law enforcement agencies that indiscriminately collect vehicle information. Other provisions of this bill would limit government use of ALPRs, and regulate data brokers.
In Arizona, SB 1111 would restrict public access to ALPR data “collected by” an ALPR. The bill would even make it a felony to access or use data from an ALPR (or disseminate it) in violation of this article, which apparently might apply to a member of the public who obtained ALPR data with a public records request. The bill’s author claims it adds “guardrails” for ALPR use.
Earlier this year, Washington state enacted a law that will exempt data “collected by” ALPRs from the state’s public records law. While “bona fide research” will still be a way for some people to obtain ALPR data, this may not include journalists and activists who analyze aggregate data to identify policy flaws. Notably, Washington courts found last year that information generated by ALPR, including images of an individual’s own vehicle, are public records; this new legislation will override that decision, blocking the ability for people to see what photos police have taken of their own vehicles. Other provisions of this new law will limit government use of ALPRs.
A year ago, Illinois’ HB 3339 ended use of that state’s public records law to obtain ALPR information used and collected by the Illinois State Police (ISP), including both information “gathered by an ALPR” and information “created from the analysis of data generated by an ALPR.” This Illinois language for just the ISP is very similar to what is now being considered in Connecticut for all state and local agencies.
Sadly, the list goes on. Georgia exempted ALPR data (both “captured by or derived from” ALPRs) of any government agency from its open records law. Adding insult to injury, Georgia also made it a misdemeanor to knowingly request, use, or obtain law enforcement’s plate data for any purpose other than law enforcement. Maryland exempted “information gathered by” an ALPR from its public information act. Oklahoma exempted from its open records act the ALPR data “collected, retained or shared” by District Attorneys under that state’s Uninsured Vehicle Enforcement Program.
These laws and bills in seven states are an unwelcome national trend.
We urge legislators to reject efforts to amend state public records laws to wholly exempt ALPR information. This would diminish meaningful oversight over these controversial technologies. Public disclosure of some ALPR information is important.
There is a better approach for states that want to harmonize privacy and transparency in the context of ALPR data:
FOIA balancing standards are one layer in a larger governance stack, and work best alongside strong guardrails on whether and how governments procure ALPR systems in the first place: public debate over vendor contracts, binding surveillance ordinances, strict data‑retention limits, and clear pathways to end ALPR programs entirely where the risks prove too great.
Developing a cross-process reader/writer lock with limited readers, part 3: Fairness [The Old New Thing]
We’ve been building a cross-process reader/writer lock with a cap on the number of readers, and we concluded our investigation last time by noting that the throughput of exclusive accesses was poor. What’s going on?
The problem is that an exclusive acquisition claims semaphore tokens one at a time, so it can lose out to shared acquisitions that are requested even after the exclusive acquisition has started. Shared acquisitions effectively “jump ahead of the exclusive acquisition”, starving out exclusive acquirers.
| Token count | Exclusive acquirer | Shared acquirer A | Shared acquirer B | Shared acquirer C | Shared acquirer D |
|---|---|---|---|---|---|
| 5 | | | | | |
| 4 | | Acq | | | |
| 3 | | | Acq | | |
| 2 | 1st Acq | | | | |
| 1 | 2nd Acq | | | | |
| 0 | 3rd Acq | | | | |
| 0 | 4th Acq (blocks) | | | | |
| 0 | | | | Acq (blocks) | |
| 0 | | | | | Acq (blocks) |
| 1 | | Rel | | | |
| 0 | 4th Acq | | | | |
| 0 | 5th Acq (blocks) | | | | |
| 1 | | | Rel | | |
| 0 | | | | Acq | |
| 1 | | | | Rel | |
| 0 | | | | | Acq |
Let’s say that we have capped the number of shared acquisitions to five. In the above scenario, we have an exclusive acquiring thread and four shared acquiring threads. The first two shared acquiring threads (call them A and B) succeed at their shared acquisitions, and then the exclusive acquiring thread tries to enter exclusively. The exclusive acquiring thread needs five tokens, and it quickly gets three of them before blocking when it tries to get the fourth.
While the exclusive acquiring thread is waiting to get its fourth token, two other shared acquiring threads (call them C and D) try to enter in shared mode. They too block.
One of the original shared acquiring threads releases its shared lock, which releases a token, and that token is quickly snapped up by the exclusive acquiring thread, thanks to the “mostly FIFO” policy for synchronization objects. (Let’s assume for the purpose of this discussion that none of the things that violate FIFO-ness has occurred.) The exclusive acquiring thread now waits to claim its fifth token.
When the second of the original shared acquiring threads releases its token, it is given to thread C, even though thread C started its shared acquisition after the exclusive acquiring thread tried to acquire exclusively.
And then when thread C releases its token, that token is given to thread D, since its request for the token precedes the exclusive thread’s request for the fifth token. The exclusive acquiring thread has once again gotten boxed out.
To fix this, we can make all acquisitions claim the shared mutex. The shared mutex then does the work of enforcing “mostly FIFO” acquisition behavior across all acquisitions.
Since we’re going to be doing combined timeouts, I’ll refactor the timeout management into a helper class.
struct TimeoutTracker
{
    explicit TimeoutTracker(DWORD timeout)
        : m_timeout(timeout) {}

    DWORD m_timeout;
    DWORD m_start = GetTickCount();

    // Wait on the handle for whatever portion of the timeout remains.
    bool Wait(HANDLE h)
    {
        DWORD elapsed = GetTickCount() - m_start;
        if (elapsed > m_timeout) return false;
        return WaitForSingleObject(h, m_timeout - elapsed)
            == WAIT_OBJECT_0;
    }
};
We can use this helper class to manage our timeouts.
HANDLE sharedSemaphore;
HANDLE sharedMutex;

void AcquireShared()
{
    // Hold the mutex only while claiming a token, so that all
    // acquisitions (shared and exclusive) queue up in one place.
    WaitForSingleObject(sharedMutex, INFINITE);
    WaitForSingleObject(sharedSemaphore, INFINITE);
    ReleaseMutex(sharedMutex);
}

bool AcquireSharedWithTimeout(DWORD timeout)
{
    TimeoutTracker tracker(timeout);
    bool result = tracker.Wait(sharedMutex);
    if (!result) return false;
    result = tracker.Wait(sharedSemaphore);
    ReleaseMutex(sharedMutex);
    return result;
}
// no change to AcquireExclusive
void AcquireExclusive()
{
    WaitForSingleObject(sharedMutex, INFINITE);
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        WaitForSingleObject(sharedSemaphore, INFINITE);
    }
    ReleaseMutex(sharedMutex);
}

// no functional change, but using the new helper class
bool AcquireExclusiveWithTimeout(DWORD timeout)
{
    TimeoutTracker tracker(timeout);
    bool result = tracker.Wait(sharedMutex);
    if (!result) return false;
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        if (!tracker.Wait(sharedSemaphore)) {
            // Restore the tokens we already claimed.
            if (i > 0) {
                ReleaseSemaphore(sharedSemaphore, i, nullptr);
            }
            ReleaseMutex(sharedMutex);
            return false;
        }
    }
    ReleaseMutex(sharedMutex);
    return true;
}
(Yes, I’m not using RAII. I’ve made that choice for expository purposes, since it lets you see exactly when each synchronization object is acquired and released.)
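For completeness, here is a minimal sketch of what the matching release operations could look like, assuming the same handles as the listing above. The series shows only the acquisition side here, so these helper names are illustrative rather than taken from the original:

void ReleaseShared()
{
    // Return the single token claimed by AcquireShared.
    ReleaseSemaphore(sharedSemaphore, 1, nullptr);
}

void ReleaseExclusive()
{
    // Return all of the tokens claimed by AcquireExclusive.
    ReleaseSemaphore(sharedSemaphore, MAX_SHARED, nullptr);
}

A caller would then pair acquisition and release in the usual way, for example:

if (AcquireSharedWithTimeout(500)) { // 500 ms budget, chosen arbitrarily
    // ... read the protected resource ...
    ReleaseShared();
}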
Are we done?
No, we’re not done.
There is still a serious problem that needs to be fixed. We’ll look at it next time.
The post Developing a cross-process reader/writer lock with limited readers, part 3: Fairness appeared first on The Old New Thing.
Developing a cross-process reader/writer lock with limited readers, part 2: Taking turns when being grabby [The Old New Thing]
Last time, we built a cross-process reader/writer lock with a cap on the number of readers, but I noted that there was still a problem.
The problem occurs when two threads both try to acquire the lock exclusively. In that case, both threads try to claim all the tokens, and they can reach a stalemate where one thread holds some of the tokens, the other thread holds the rest, and neither side will back down: a deadlock.
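Concretely, with five tokens, one possible interleaving looks like this (an illustrative trace in the style of the table in part 3, not from the original post):

| Token count | Exclusive acquirer 1 | Exclusive acquirer 2 |
|---|---|---|
| 5 | | |
| 4 | 1st Acq | |
| 3 | | 1st Acq |
| 2 | 2nd Acq | |
| 1 | | 2nd Acq |
| 0 | 3rd Acq | |
| 0 | 4th Acq (blocks forever) | 3rd Acq (blocks forever) |

Each thread now holds tokens the other needs, and since neither releases anything until it has all five, neither ever proceeds.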
We can avoid this by serializing all the attempts to acquire exclusive locks. That way, there is at most one greedy thread at a time.
HANDLE sharedSemaphore;
HANDLE sharedMutex;

void AcquireExclusive()
{
    WaitForSingleObject(sharedMutex, INFINITE);
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        WaitForSingleObject(sharedSemaphore, INFINITE);
    }
    ReleaseMutex(sharedMutex);
}

bool AcquireExclusiveWithTimeout(DWORD timeout)
{
    DWORD start = GetTickCount();
    WaitForSingleObject(sharedMutex, INFINITE);
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        DWORD elapsed = GetTickCount() - start;
        if (elapsed > timeout ||
            WaitForSingleObject(sharedSemaphore, timeout - elapsed)
                == WAIT_TIMEOUT) {
            // Restore the tokens we already claimed.
            if (i > 0) {
                ReleaseSemaphore(sharedSemaphore, i, nullptr);
            }
            ReleaseMutex(sharedMutex);
            return false;
        }
    }
    ReleaseMutex(sharedMutex);
    return true;
}
Okay, this avoids the stalemate between two exclusive acquisitions, but we still have a problem: Exclusive access throughput is poor. We’ll look at this next time.
The post Developing a cross-process reader/writer lock with limited readers, part 2: Taking turns when being grabby appeared first on The Old New Thing.
Anti-DDoS Firm Heaped Attacks on Brazilian ISPs [Krebs on Security]
A Brazilian tech firm that specializes in protecting networks from distributed denial-of-service (DDoS) attacks has been enabling a botnet responsible for an extended campaign of massive DDoS attacks against other network operators in Brazil, KrebsOnSecurity has learned. The firm’s chief executive says the malicious activity resulted from a security breach and was likely the work of a competitor trying to tarnish his company’s public image.
An Archer AX21 router from TP-Link. Image: tp-link.com.
For the past several years, security experts have tracked a series of massive DDoS attacks originating from Brazil and solely targeting Brazilian ISPs. Until recently, it was less than clear who or what was behind these digital sieges. That changed earlier this month when a trusted source who asked to remain anonymous shared a curious file archive that was exposed in an open directory online.
The exposed archive contained several Portuguese-language malicious programs written in Python. It also included the private SSH authentication keys belonging to the CEO of Huge Networks, a Brazilian ISP that primarily offers DDoS protection to other Brazilian network operators.
Founded in Miami, Fla. in 2014, Huge Networks centers its operations in Brazil. The company originated from protecting game servers against DDoS attacks and evolved into an ISP-focused DDoS mitigation provider. It does not appear in any public abuse complaints and is not associated with any known DDoS-for-hire services.
Nevertheless, the exposed archive shows that a Brazil-based threat actor maintained root access to Huge Networks infrastructure and built a powerful DDoS botnet by routinely mass-scanning the Internet for insecure Internet routers and unmanaged domain name system (DNS) servers on the Web that could be enlisted in attacks.
DNS is what allows Internet users to reach websites by typing familiar domain names instead of the associated IP addresses. Ideally, DNS servers only provide answers to machines within a trusted domain. But so-called “DNS reflection” attacks rely on DNS servers that are (mis)configured to accept queries from anywhere on the Web. Attackers can send spoofed DNS queries to these servers so that the request appears to come from the target’s network. That way, when the DNS servers respond, they reply to the spoofed (targeted) address.
By taking advantage of an extension to the DNS protocol that enables large DNS messages, botmasters can dramatically boost the size and impact of a reflection attack — crafting DNS queries so that the responses are much bigger than the requests. For example, an attacker could compose a DNS request of less than 100 bytes, prompting a response that is 60-70 times as large. This amplification effect is especially pronounced when the perpetrators can query many DNS servers with these spoofed requests from tens of thousands of compromised devices simultaneously.
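To put illustrative numbers on that arithmetic: a spoofed query of roughly \(60\) bytes that elicits a \(4000\)-byte response yields an amplification factor of \(4000 / 60 \approx 67\), squarely within the 60-70x range described above.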
A DNS amplification and reflection attack, illustrated. Image: veracara.digicert.com.
The exposed file archive includes a command-line history showing exactly how this attacker built and maintained a powerful botnet by scouring the Internet for TP-Link Archer AX21 routers. Specifically, the botnet seeks out TP-Link devices that remain vulnerable to CVE-2023-1389, an unauthenticated command injection vulnerability that was patched back in April 2023.
Malicious domains in the exposed Python attack scripts included DNS lookups for hikylover[.]st, and c.loyaltyservices[.]lol, both domains that have been flagged in the past year as control servers for an Internet of Things (IoT) botnet powered by a Mirai malware variant.
The leaked archive shows the botmaster coordinated their scanning from a Digital Ocean server that has been flagged for abusive activity hundreds of times in the past year. The Python scripts invoke multiple Internet addresses assigned to Huge Networks that were used to identify targets and execute DDoS campaigns. The attacks were strictly limited to Brazilian IP address ranges, and the scripts show that each selected IP address prefix was attacked for 10-60 seconds with four parallel processes per host before the botnet moved on to the next target.
The archive also shows these malicious Python scripts relied on private SSH keys belonging to Huge Networks’s CEO, Erick Nascimento. Reached for comment about the files, Mr. Nascimento said he did not write the attack programs and that he didn’t realize the extent of the DDoS campaigns until contacted by KrebsOnSecurity.
“We received and notified many Tier 1 upstreams regarding very very large DDoS attacks against small ISPs,” Nascimento said. “We didn’t dig deep enough at the time, and what you sent makes that clear.”
Nascimento said the unauthorized activity is likely related to a digital intrusion first detected in January 2026 that compromised two of the company’s development servers, as well as his personal SSH keys. But he said there’s no evidence those keys were used after January.
“We notified the team in writing the same day, wiped the boxes, and rotated keys,” Nascimento said, sharing a screenshot of a January 11 notification from Digital Ocean. “All documented internally.”
Mr. Nascimento said Huge Networks has since engaged a third-party network forensics firm to investigate further.
“Our working assessment so far is that this all started with a single internal compromise — one pivot point that gave the attacker downstream access to some resources, including a legacy personal droplet of mine,” he wrote.
“The compromise happened through a bastion/jump server that several people had access to,” Nascimento continued. “Digital Ocean flagged the droplet on January 11 — compromised due to a leaked SSH key, in their wording — I was traveling at the time and addressed it on return. That droplet was deprecated and destroyed, and it was never part of Huge Networks infrastructure.”
The malicious software that powers the botnet of TP-Link devices used in the DDoS attacks on Brazilian ISPs is based on Mirai, a malware strain that made its public debut in September 2016 by launching a then record-smashing DDoS attack that kept this website offline for four days. In January 2017, KrebsOnSecurity identified the Mirai authors as the co-owners of a DDoS mitigation firm that was using the botnet to attack gaming servers and scare up new clients.
In May 2025, KrebsOnSecurity was hit by another Mirai-based DDoS that Google called the largest attack it had ever mitigated. That report implicated a 20-something Brazilian man who was running a DDoS mitigation company as well as several DDoS-for-hire services that have since been seized by the FBI.
Nascimento flatly denied being involved in DDoS attacks against Brazilian operators to generate business for his company’s services.
“We don’t run DDoS attacks against Brazilian operators to sell protection,” Nascimento wrote in response to questions. “Our sales model is mostly inbound and through channel integrator, distributors, partners — not active prospecting based on market incidents. The targets in the scripts you received are small regional providers, the vast majority of which are neither in our customer base nor in our commercial pipeline — a fact verifiable through public sources like QRator.”
Nascimento maintains he has “strong evidence stored on the blockchain” that this was all done by a competitor. As for who that competitor might be, the CEO wouldn’t say.
“I would love to share this with you, but it could not be published as it would lose the surprise factor against my dishonest competitor,” he explained. “Coincidentally or not, your contact happened a week before an important event – one that this competitor has NEVER participated in (and it’s a traditional event in the sector). And this year, they will be participating. Strange, isn’t it?”
Strange indeed.
Pluralistic: How not to ban surveillance pricing (30 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]

If you want to piss me off, it's easy: just breezily assert that our tech regulation problems are the result of the fast pace of technological change racing ahead of the plodding speed of governmental action:
https://pluralistic.net/2026/04/22/uber-for-nurses/#go-meta
While there have been some instances in which this was true, it is far more often the case that there are blindingly obvious answers to tech problems, which our lawmakers and regulators ignore, amidst a rising chorus of warnings about the dire consequences of failing to act.
Take the new Maryland bill that (supposedly) outlaws surveillance pricing: this bill is, frankly, a terribly drafted piece of shit. Worse: it's a terribly drafted piece of shit bill that fails to resolve a serious and urgent problem. Even worse: the lawmakers who drafted this piece of shit bill and Maryland Governor Wes Moore were all loudly and repeatedly warned about the problems of this bill, and they did nothing and now the people of Maryland are fucked.
So what is surveillance pricing, why is it so dangerous, and what's wrong with Maryland's Protection Against Predatory Pricing Act?
Surveillance pricing is when a company spies on you ("surveillance") and uses the resulting dossier to raise its prices to the maximum it calculates you will be willing to pay ("pricing"). With surveillance pricing, a retailer reaches into your bank account and devalues your dollars. If you pay $2 for an apple at the grocery store and the same store only charges me $1 for that apple, then that grocer is telling you that your dollars are worth half as much as mine:
https://pluralistic.net/2025/06/24/price-discrimination/
There's a kind of economics brainworm that makes some economists looooove surveillance pricing. They will insist that this is an "efficient" way to price goods, and claim that surveillance pricing isn't just a way to raise prices on people who are willing to pay more, it's a way to lower prices for people who are willing to pay less.
What you're supposed to infer from this is that people who can afford more will end up paying more, while people who can afford less will pay less. It's pitched as the Robin Hood of pricing policies, gouging the rich to finance discounts for the poor. But in practice, that's not at all how surveillance pricing works. Instead, surveillance pricing is most often used to levy a "desperation premium" on people who have fewer choices and less leverage.
For example, Plexure, a company in McDonald's investment portfolio, supplies surveillance pricing tools to fast food restaurants. Plexure advertises its ability to use surveillance data to find out when a customer has just gotten a paycheck so that vendors can increase the price of their usual breakfast sandwich order. This isn't aimed at wealthy people – it's explicitly designed to target people who are living paycheck to paycheck.
Surveillance pricing is also used to determine how much you get paid; when that happens, we call it "algorithmic wage discrimination." Gig platforms like Uber use surveillance data about their drivers to predict which workers are most desperate, and those drivers are offered less money per mile and per hour, because a desperate worker will take whatever is on offer. Gig work apps for health-care do the same thing to nurses:
https://pluralistic.net/2024/12/18/loose-flapping-ends/#luigi-has-a-point
Indeed, surveillance pricing represents a kind of cod-Marxism. Instead of "from each according to their ability, to each according to their need," the "efficient" surveillance pricing motto is "from each according to their desperation, to each according to our power":
https://pluralistic.net/2025/01/11/socialism-for-the-wealthy/#rugged-individualism-for-the-poor
Surveillance pricing is anything but efficient. Because surveillance pricing is a transfer from consumers to investors, it has the net effect of reducing consumption overall. If your grocer can screw you out of an extra $50/month on your household food bill, that's $50/month you can't spend on a babysitter, a movie, or a couple of nice books for your kid. The American economy runs on consumption, and the American consumer has less discretionary income than they've had in generations. Anything that reduces consumption is a drag on the whole economy.
Surveillance pricing is rampant and getting worse all the time. During the Biden administration the FTC held hearings on the practice and developed a detailed, eye-watering record of all the ways that surveillance, combined with digital platforms that can alter prices for every visit by every customer, has resulted in a massive transfer from working people to wealthy investors:
https://pluralistic.net/2024/07/24/gouging-the-all-seeing-eye/#i-spy
Unfortunately – and predictably – Trump's new FTC chairman, Andrew Ferguson, killed off that action, replacing it with an initiative that encouraged FTC officials to anonymously rat each other out for being too "woke":
https://pluralistic.net/2025/04/21/trumpflation/#andrew-ferguson
He did this even as a whole bunch of surveillance pricing companies were blitzing their clients with messages about the surveillance pricing possibilities created by Trump's tariffs, which would condition buyers to expect higher prices, creating opportunities to smuggle in surveillance-priced premiums:
https://pros.com/learn/webinars/navigating-tariff-increases-future-proof-pricing-strategy
It's only gotten worse since. Back in January, Google CEO Sundar Pichai announced that the company had a new plan to make AI profitable: they would supply surveillance prices for sellers who used Google's advertising services. After all, Google spies on more people, more comprehensively, than anyone except Meta and the NSA, and Google has an advanced ad-targeting network and a giant AI arm. Put these three facts together and Google can offer merchants the ability to target you for ads and prices that are calculated, to the penny, to be the most you would be willing to pay:
https://pluralistic.net/2026/01/21/cod-marxism/#wannamaker-slain
All this – rampant, desperation-based price-gouging; federal inaction; a risk to the whole economy – is the backdrop for Maryland's new anti-surveillance pricing bill, which Governor Wes Moore has been trumpeting as the nation's first state bill banning surveillance pricing. This would be very cool – if it was real. But – as the American Economic Liberties Project's Pat Garofalo writes for the Economic Populist – the Protection Against Predatory Pricing Act is so badly drafted that it will have essentially no impact on surveillance pricing. It's positively riddled with loopholes:
https://economicpopulist.substack.com/p/gov-wes-moore-claims-maryland-banned
The first problem with this bill is its scope: it only regulates surveillance pricing for groceries. It has nothing to say about the use of surveillance data to reprice car rentals, apartments, healthcare, taxi rides, quick-service food, or the thousand other areas where surveillance pricing is already rampant. Worse: it is silent on algorithmic wage discrimination: the use of surveillance data to reprice your wages, penalizing workers for being poor by making them even poorer.
Now, helping people with their grocery bills isn't nothing. However, even within that very narrow scope, this bill is a disaster. As Garofalo points out, the bill's first glaring loophole is that it permits surveillance pricing if a purchaser "consents." This is quite a loophole! After all, we live in an era in which "consent" consists of clicking "I agree" when presented with a gigantic list of terms and conditions, which you cannot negotiate, which are subject to change without notice, and which are so long that it would take 26 hours to review all the "agreements" you "consent" to in any given 24-hour day.
So if the company that you use to book your pet's veterinary check-ups is owned by the same company that provides your grocer with its surveillance pricing tools, you might "consent" to having that company jack you on every bag of groceries just by clicking "I agree" when your cat needs a vet appointment.
The bill also exempts "promotional offers" and "temporary discounts," suggesting that it was drafted by someone who has never encountered a merchant whose retail premises are always plastered with signs trumpeting the fact that every price in the shop is both "temporary" (ACT NOW!) and "promotional" (SALE! SALE! SALE!). Since the bill doesn't define either of these words, it effectively grants every grocer in the state an easy way to evade the law entirely.
Finally, the bill exempts two exceptionally scammy tactics that are already the major vehicle for surveillance price-based gouging: loyalty cards and subscription-based pricing.
Loyalty cards are often a total scam:
And subscriptions are a scammer's best friend:
But even if you are ripped off by a grocer who can't be bothered to call the scam a "sale" or a "temporary offer," who can't be bothered to dress it up as a "loyalty perk" or a "subscription price," you still can't get justice. That's because the Protection Against Predatory Pricing Act excludes the "private right of action," which means that you can't sue a grocer who rips you off. All this bill lets you do is petition the state Attorney General's office to sue the grocer on your behalf, and if the AG doesn't think you deserve justice, you're shit out of luck. And the Protection Against Predatory Pricing Act pre-empts other rights in Maryland's existing Consumer Protection Act, meaning that it actually gives Marylanders fewer rights than they had a month ago, before it was signed into law.
Legislation this bad doesn't happen by accident. The omissions and defects in this law aren't there because "technology moves so fast that lawmakers can't make sense of it." This is the result of lobbyists and sellout politicians conspiring to rip off the public, and of a governor who decided to ignore the warnings about the bill in order to get a chance to grandstand on Bill Maher while doing nothing to help Marylanders:
https://x.com/BlueGeorgia/status/2047868126365106631
From nurses' wages to your payday breakfast sandwich, surveillance pricing is everywhere, especially in groceries. Every time you use Instacart to shop at Albertsons, Costco, Kroger, and Sprouts Farmers Market, you might be getting ripped off for as much as 23% of the total price:
https://pluralistic.net/2025/12/11/nothing-personal/#instacartography
This isn't some silly-season fake controversy. It's an existential crisis for America's cash-strapped, heavily indebted households, whose lives have been made immeasurably worse by the inflation from Trump's Strait of Epstein disaster. Maryland had the chance to do something to help these people and instead they squandered it, selling out to lobbyists for companies whose bottom line depends on draining the bank accounts of the most desperate people in the state.
(Image: Cryteria, CC BY 3.0, modified)

Walt Disney Visited a Ford Factory in 1948. What He Witnessed There Laid the Groundwork for What Would Become Disneyland https://www.smithsonianmag.com/history/walt-disney-visited-a-ford-factory-in-1948-what-he-witnessed-there-laid-the-groundwork-for-what-would-become-disneyland-180988551/
‘Stop’: Warning over viral Scientology ‘Speedrun’ trend https://www.news.com.au/lifestyle/real-life/news-life/stop-warning-over-viral-scientology-speedrun-trend/news-story/067ce0336bcf53774b4be682741dd868
Aftermath: China Is Electrifying Freight Trucking https://prospect.org/2026/04/29/aftermath-china-electrifying-freight-trucking/
2026 Digital Publishing Awards Nominees https://digitalpublishingawards.ca/2026nominees/
#25yrsago Google's now running on 8,000 Linux servers https://web.archive.org/web/20010501043429/http://www.internetweek.com/story/INW20010427S0010
#25yrsago Karl Schroeder’s Ventus in the NYT https://archive.nytimes.com/www.nytimes.com/books/01/04/29/reviews/010429.29scifit.html
#20yrsago Sony screwing artists out of iTunes royalties, customers out of first sale https://www.nytimes.com/2006/04/30/technology/cheap-trick-allman-brothers-sue-sony-over-download-royalties.html
#20yrsago Robot Lego CD thrower can shatter discs https://www.techeblog.com/hammerhead-the-lego-cd-thrower/
#15yrsago Understanding alternative voting, with coffee and beer https://www.youtube.com/watch?v=TtW3QkX8Xa0
#15yrsago Battleshoe https://philnoto.tumblr.com/post/4613522934/quite-busy-with-work-today-so-heres-a-little
#15yrsago Filling Paris’s potholes with knitwork https://www.flickr.com/photos/39380641@N03/albums/72157622189211405/
#15yrsago Pinhole cameras made out of hollow eggs https://www.lomography.com/magazine/71984-the-pinhegg-my-journey-to-build-an-egg-pinhole-camera
#15yrsago Canadian pro-Net Neutrality/anti-censorship/anti-surveillance party gaining support https://web.archive.org/web/20110429020845/http://www.ekospolitics.com/index.php/2011/04/ndp’s-new-status-as-second-runner-holding-april-26-2011/
#15yrsago We Say Gay: Tennessee kids fight bill that would prohibit discussing homosexuality in school https://web.archive.org/web/20110501072834/https://wesaygay.com/
#15yrsago HOWTO build an impossible Escher perpetual motion waterfall https://www.instructables.com/Perpetual-Motion-Machine-The-real-life-version-of/
#15yrsago RIP Keith Aoki, copyfighting law prof, comics illustrator, musician and writer https://www.thepublicdomain.org/2011/04/27/rip-keith-aoki/
#5yrsago Unpack the court with judicial overrides https://pluralistic.net/2021/04/27/bruno-argento/#crisis-of-legitimacy
#5yrsago Pharma's anti-generic-vaccine lobbying blitz https://pluralistic.net/2021/04/27/bruno-argento/#pharma-death-cult
#5yrsago Klobuchar on trustbusting https://pluralistic.net/2021/04/27/bruno-argento/#klobuchar
#5yrsago Robot Artists & Black Swans https://pluralistic.net/2021/04/27/bruno-argento/#fantascienza
#1yrago The enshittification of tech jobs https://pluralistic.net/2025/04/27/some-animals/#are-more-equal-than-others
#5yrsago Dems want to give $600b to the one percent https://pluralistic.net/2021/04/28/inequality-r-us/#neotrumpism

Barcelona: Internet no tiene que ser un vertedero ("The internet doesn't have to be a landfill") (Global Digital Rights Forum), May 13
https://encuentroderechosdigitales.com/en/
Virtual: How to Disenshittify the Internet with Wendy Liu (EFF), May 14
https://www.eff.org/event/effecting-change-enshittification
Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow
Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html
Hay-on-Wye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2
SXSW London, Jun 2
https://www.sxswlondon.com/session/how-big-tech-broke-the-internet-b3c4a901
NYC: The Reverse Centaur's Guide to Life After AI with Jonathan Coulton (The Strand), Jun 24
https://www.strandbooks.com/cory-doctorow-the-reverse-centaur-s-guide-to-life-after-ai.html
Edinburgh International Book Festival with Jimmy Wales, Aug 17
https://www.edbookfest.co.uk/events/the-front-list-cory-doctorow-and-jimmy-wales
When Do Platforms Stop Innovating and Start Extracting? (InnovEU)
https://www.youtube.com/watch?v=cccDR0YaMt8
Pete "Mayor" Buttigieg (No Gods No Mayors)
https://www.patreon.com/posts/pete-mayor-with-155614612
The internet is getting worse (CBC The National)
https://youtu.be/dCVUCdg3Uqc?si=FMcA0EI_Mi13Lw-P
Do you feel screwed over by big tech? (Ontario Today)
https://www.cbc.ca/listen/live-radio/1-45-ontario-today/clip/16203024-do-feel-screwed-big-tech
"Enshittification: Why Everything Suddenly Got Worse and What to
Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
"The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027
"The Memex Method," Farrar, Straus, Giroux, 2027
Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Bluesky (no ads, possible tracking and data-collection):
https://bsky.app/profile/doctorow.pluralistic.net
Medium (no ads, paywalled):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
[$] Restartable sequences, TCMalloc, and Hyrum's Law [LWN.net]
Hyrum's Law states that any observable behavior of a system will eventually be depended upon by somebody. The kernel community is currently contending with a clear demonstration of that principle. The recent work to address some restartable-sequences performance problems in the 6.19 release maintained the documented API in all respects, but that was not enough; Google's TCMalloc library, as it turns out, violates the documented API, prevents other code from using restartable features, and breaks with 6.19. But the kernel's no-regressions rule is forcing developers to find a way to accommodate TCMalloc's behavior.
Version 16.1 of the GNU Compiler Collection (GCC) has been released.
The C++ frontend now defaults to the GNU C++20 dialect and the corresponding parts of the standard library are no longer experimental. Several C++26 features receive experimental support, including Reflection (-freflection), Contracts, expansion statements and std::simd.
Other changes include the introduction of an experimental compiler frontend for the Algol68 language, ability to output GCC diagnostics in HTML form, and more.
Seven new stable kernels for Thursday [LWN.net]
Greg Kroah-Hartman has released the 7.0.3, 6.18.26, 6.12.85, 6.6.137, 6.1.170, 5.15.204, and 5.10.254 stable kernels. The 7.0.3 and 6.18.26 kernels only contain fixes needed for Xen users; the others, though, have backported fixes for the recently disclosed AEAD socket vulnerability. Kroah-Hartman advises that all users of the other kernel series upgrade.
The WordPress OS [Scripting News]
The product described on wordpress.com/social is not a real product, I am told by someone inside who I have worked with and trust. They say there will be a lot of these trial products coming out in the coming weeks because this is a project that Matt has given to all developers in the company? Not sure the shape of it, I had heard about this project third-hand but not seen any specifics. I sent my friend an email, which I am not editing, this is exactly as it read, the only change is that I redacted their name. Don't want to get anyone in trouble. ;-)
I've been in the situation I describe many many times. Apple never would have done AppleScript if I hadn't done Frontier. I gave Microsoft several ideas in the early days of the web, a blogger-based news network, using the incredible flow they had from MSN, and then we had a mission for Zune in 2004, the year that podcasting took off. I was living in Seattle at the time so it would have been convenient and give me an excuse to stay there. They did both products, in the corporate way, thus removing all that was interesting about the ideas. When RSS was the darling of the VC world, many of the VCs talked with me, got my ideas for free, and then invested in people who were more corporate and easier to manage, they also had no idea what they were doing and they all failed. Some things are best left to the entrepreneurs. And you won't find many of them inside a big corporation, they're out in the wild, trying out new ideas and seeing what the world thinks.
But I am impressed with the UI design and esp the marketing materials for this product. Imagine what would happen if we were to work together on this project, with full license from the company to go wherever the product took us. I have a ton of working software for just this problem! I didn't do it in a couple of weeks, I did it in a few years. That's how long it takes to do a real product. I almost wrote an email to Matt. Maybe he'll read this post, if so, maybe in the next challenge you should give people a month to do a project with someone who works outside your company, or even better, one of Automattic's competitors. Work on interop. Make the web stronger. If you do that it can't help but strengthen WordPress because it's such a big part of the web. And provide recurring revenue opportunities.
BTW, I'm too old to start a company around any of my products, and truthfully it was never anything I wanted to do. When I wake up in the morning I want to write a blog post to warm up my brain and then spend a few hours working with my programming partner Claude, to make a new piece of software that will blow your socks off. I like to speak at conferences too, also have organized a few of them. That's me. Not a big fan of running companies, any more than an academic, musician or writer would be. I like connecting with other people via our minds.
Also just saw this piece, dated April 15, that describes the impasse in the WordPress open source developer community. To expect an open process to yield user-level features is imho never going to work. If I were in Matt's shoes, I would ask them to make improvements to the platform, esp the API, and let independent developers work out various different UIs on top of the WordPress OS. It's much more likely to quickly generate new exciting features for specific kinds of users, and recurring revenue, and make WordPress a harder target for your competitors to hit. And more satisfied users that they picked the right platform. Most of the PR coming out of Automattic in the last couple of years, being brutally honest, has the opposite effect.
I wrote extensively about Microsoft in the early days of the web, most of it critical, but even so I had many friends up there. My conclusion: when a tech company becomes as dominant as Microsoft was then, or Automattic is today, innovation at a user level is virtually impossible, and not advised, because then you only get one version of each thing, when you need a competitive market in a technology as new as the web was in 1994 and AI is in 2026.
I said to my friends at Microsoft that it's not bad news. At my age, I can't do what I did 20 years ago. Fact. I don't argue with it. For a tech company of A8C's stature, you can be a banker and distributor. All the corporate functions of tech, but not the creative functions. And I would offer your best, most creative, most challenge-seeking developers independence and some seed capital, and always stay in touch with the best, and when they ship an excellent product, write a blog post about what they've done, and make sure the users know that these people are exceptional. Add features to your OS to support what they want to do. And deposit the profits in your bank account. And maybe take a leave of absence and work on one of these startups yourself. I said as much to Bill Gates, and I know he heard it, but he didn't do it. ;-)
PS: Finally, Microsoft wasn't humbled by the web; it was the Y2K version of Ram Cram that did them in.
Russell Coker: Links April 2026 [Planet Debian]
Wouter wrote an insightful blog post about the need for free firmware [2].
Matthew Garrett wrote an interesting blog post about the potential security issues raised by non-free firmware and firmware updates [3]. Which goes well with Wouter’s post.
Interesting article about fake job adverts with a code sample for the applicant to show their skills, which depends on hostile libraries that install a RAT [4]. Do we need Qubes for software development nowadays?
Security updates for Thursday [LWN.net]
Security updates have been issued by AlmaLinux (buildah, firefox, gdk-pixbuf2, giflib, grafana, java-1.8.0-openjdk, java-21-openjdk, LibRaw, OpenEXR, PackageKit, pcs, python3.11, python3.12, python3.9, sudo, tigervnc, vim, xorg-x11-server, xorg-x11-server-Xwayland, yggdrasil, and yggdrasil-worker-package-manager), Debian (calibre, firefox-esr, and openjdk-17), Fedora (asterisk, binaryen, buildah, dokuwiki, lemonldap-ng, libexif, libgcrypt, miniupnpd, openvpn, podman, python3.9, rust-rpm-sequoia, skopeo, and xdg-dbus-proxy), Red Hat (buildah, gdk-pixbuf2, and nodejs:20), SUSE (dnsdist, libheif, openCryptoki, polkit, sed, and xen), and Ubuntu (linux-bluefield, python-marshmallow, and roundcube).
AI Code Review Only Catches Half of Your Bugs [Radar]
This is the fifth article in a series on agentic engineering and AI-driven development. Read part one here, part two here, part three here, and part four here.
I recently had a taste of humility with my AI-generated code. I live in Park Slope, Brooklyn, and recently I needed to get to the other side of the neighborhood. I thought I’d be clever: I like taking the bus, so I decided to hop on the one that goes right down 7th Avenue. I know I could check the schedule using the MTA’s really useful Bus Time app or website, but it doesn’t take into account walking time from my house or give me a good idea of when to leave. This seemed like a great opportunity to vibe code an app and do some quick AI-driven development.
It took about two minutes for Claude Code to get my new app working. It made a lovely little web UI, I configured my stop and how long it takes me to walk there, and it gave me the perfect departure time.
When I actually walked out the door, the app perfectly predicted my wait. There was just one problem: my bus was nowhere to be seen. What I did see was a bus driving the exact opposite direction down 7th Avenue.
It was pretty obvious what had happened. I needed to go deeper into Brooklyn, not towards Manhattan, and the AI had picked the wrong direction. (Actually, as Cowork pointed out, each stop has its own ID, and it had selected the ID for the wrong stop.) I’d been using Cowork to orchestrate everything, and I could easily have just asked it to go out and check the MTA’s BusTime site for me to make sure the app was working. But I just trusted the AI. As a result, I had to walk. Which is fine—I love walking—but the irony was painful. I had literally just published an article about AI code quality and why you shouldn’t blindly trust it, and here I was doing exactly that.
The app had a bug. But it wasn’t the kind of bug you’d necessarily catch using a typical AI code review prompt. It built, ran, and did a perfectly fine job parsing the JSON from the MTA API. But if I’d started with a simple requirement—even just a user story like “as a Park Slope resident, I want to catch the B69 headed towards Kensington so I can get deeper into Brooklyn”—the AI would have built it differently. The problem is that AI can only build the thing you tell it to build, which isn’t necessarily the thing you wanted it to build. AI is really good at writing “correct” code that does the wrong thing.
My Brooklyn bus detour was a minor inconvenience. But it was a really useful, small-scale example of what I kept running into in my larger projects, too. There’s an entire class of bugs that you simply can’t find with structural analysis—no linter, no static analyzer, no AI code reviewer will catch them—because the code isn’t wrong in any way that’s visible from the code alone. You need to know what the code was supposed to do. You need to know the intent.
The data on why requirements matter goes back decades. Back in the 1990s, for example, the Standish CHAOS reports were a big eye-opener for me and a lot of other people in the industry, large-scale data confirming what we'd been seeing on our own projects: that the most expensive defects trace back to misunderstood or missing requirements. Those reports really underscored the idea that poor requirements management, and specifically incomplete or frequently changing specifications, were among the primary drivers of IT project failures. (And, as far as I can tell, they still are, and AI isn't helping things—see my O'Reilly Radar article, "Prompt Engineering Is Requirements Engineering").
The idea that requirements problems really are the source of the most expensive kind of defects should make intuitive sense: If you build the wrong thing, you have to tear it apart and rebuild it. That’s why I made requirements the foundation of the Quality Playbook, an open-source skill for AI tools like Claude Code, Cursor, and Copilot that I introduced in the previous article. I’ve spent decades doing test-driven development, partnering with QA teams, welcoming the harshest code reviews from teammates who don’t pull punches—and that experience led me to build a tool that uses AI to bring back quality engineering practices the industry abandoned decades ago. I’ve tested it against a wide range of open-source projects in Go, Java, Rust, Python, and C#, from small utilities to widely-used libraries with tens of thousands of stars, and it’s found real bugs in almost every project it’s come across, including ones that have been confirmed and merged upstream.
I think there are a lot of wider lessons we can learn from my experience using requirements to help AI find bugs—especially security bugs. So in this article, I want to focus on the single most important thing I’ve learned from building it: everything depends on requirements. Not just any requirements, but a specific kind of requirement that most projects don’t have, that most AI tools don’t ask for, and that turns out to be the key to making AI actually useful for verifying code quality.
Developers using AI tools have been rediscovering the value of writing things down before asking the AI to build them. Spec-driven development (SDD) has become very popular, and for good reason. Addy Osmani wrote an excellent piece on this, “How to Write a Good Spec for AI Agents,” and the core idea is sound: If you write a clear specification of what you want built, the AI produces dramatically better results than if you just describe it in a chat prompt and hope for the best.
I think SDD is important, and I’d encourage any developer working with AI to adopt it. But as I was building the Quality Playbook, I discovered that SDD has a blind spot that matters a lot for code quality. An SDD spec describes the how—what the implementation should look like. It tells the AI “implement a duplicate key check” or “add a retry mechanism with exponential backoff” or “create a REST endpoint that returns paginated results.” That’s useful for building things. But it’s not enough for verifying them.
But a requirement doesn’t say “implement a duplicate key check.” It says “users depend on Gson to reject ambiguous input so they don’t silently accept corrupted data.” The AI can reason about the second one in ways it can’t reason about the first, because the second one has the purpose attached. When the AI knows the purpose, it can evaluate whether the code actually fulfills that purpose across all the edge cases, not just the ones the spec explicitly listed. That’s how the Quality Playbook caught a bug in Google’s Gson library, one of the most widely used JSON libraries in Java.
I think it’s worth digging into that particular bug, because it’s a great example of just how powerful requirements analysis can be for finding defects. The playbook derived null-handling requirements from Gson’s own community—GitHub issues #676, #913, #948, and #1558, some dating back to 2016—then used those requirements to find that duplicate keys were silently accepted when the first value was null. It confirmed the bug by generating a failing test, then patched the code and verified the test passed. I’ve used Gson for years and done a lot of work with Java serialization, so I read the code and the fix myself before submitting anything—trust but verify. The fix was merged as https://github.com/google/gson/pull/3006, confirmed by Google’s own test suite.
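Gson is Java, but the bug class translates to any language. Here is a hedged Python analogue (my sketch, not Gson's actual code): a duplicate-key check built on a None sentinel waves through exactly the case the requirement forbids, because "absent" and "present but null" look the same to it.

import json

def reject_duplicates_buggy(pairs):
    obj = {}
    for key, value in pairs:
        # BUG: obj.get(key) is None both when the key is absent and when
        # its stored value is null, so {"a": null, "a": 1} slips through.
        if obj.get(key) is not None:
            raise ValueError(f"duplicate key: {key!r}")
        obj[key] = value
    return obj

def reject_duplicates_fixed(pairs):
    obj = {}
    for key, value in pairs:
        # A membership test distinguishes "absent" from "present but null".
        if key in obj:
            raise ValueError(f"duplicate key: {key!r}")
        obj[key] = value
    return obj

doc = '{"a": null, "a": 1}'
print(json.loads(doc, object_pairs_hook=reject_duplicates_buggy))  # {'a': 1} -- the bug
try:
    json.loads(doc, object_pairs_hook=reject_duplicates_fixed)
except ValueError as e:
    print(e)  # duplicate key: 'a'

No structural signal distinguishes the two versions; only the requirement ("reject ambiguous input") tells you which one is wrong.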
That bug had been hiding in plain sight for years, through thousands of tests and countless code reviews. It's possible that no structural analysis would ever have found it, because you needed the requirement to know it was wrong.
This distinction might sound academic, but it has very concrete consequences for whether your AI can actually find bugs in your code.
The security world has known about the limits of structural analysis for a long time. The NIST SATE evaluations found that the best static analysis tools plateaued at around 50-60% detection rates for security vulnerabilities. Gary McGraw’s Software Security: Building Security In (Addison-Wesley, 2006) explains why: Roughly 50% of security defects are implementation bugs, and the other 50% are design flaws. Static analysis tools target the implementation bugs—buffer overflows, SQL injection, format string vulnerabilities—because those are pattern-matchable. But design flaws are about intent: The system’s architecture doesn’t enforce the security properties it’s supposed to enforce, and no amount of scanning the code will reveal that. A 2024 study by Charoenwet et al. (ISSTA 2024) confirmed this is still the case: They tested five static analysis tools against 815 real vulnerability-contributing commits and found that 22% of vulnerable commits went entirely undetected, and 76% of warnings in vulnerable functions were irrelevant to the actual vulnerability. The pattern is consistent across two decades of research: There’s a ceiling on what you can find by analyzing code, and it’s around half.
There’s a good reason for that limitation: the intent ceiling. A structural analysis tool is limited to reading the code and looking at what it does; it has no way to take into account what the developer intended it to do.
When an AI does a code review without requirements, it's limited to structural analysis: pattern matching, code smell detection, race condition analysis. It can ask "does this look right?" but it can't ask "does this do what it's supposed to do?" because it doesn't know what the code is supposed to do. Structural review catches genuinely important stuff—race conditions, null pointer issues, resource leaks, concurrency bugs. A structural reviewer looking at a shell script will catch a missing fi, a bad variable expansion, a race condition. Structural review is useful, and structural review is what most AI code review tools do today.
But about half of all security defects are intent violations: things the code doesn't do that it was supposed to do, or things it does that it wasn't supposed to do. They're invisible without a specification to check against, and no tool will find them by looking at code that is, structurally, perfectly sound. A structural reviewer looking at a script that's, say, used to check router configuration files might find well-formed bash, correct syntax, proper quoting, and code that looks like it works and doesn't match known antipatterns. It wouldn't know the script is only validating three of the five access control rules it's supposed to enforce because that's a requirements question, not a syntax question.
Or, more personally for me, this is what happened with my bus tracker app: The JSON parsing was flawless, the UI was correct, the timing logic worked perfectly. The only problem was that it showed buses headed towards Manhattan when I needed to go deeper into Brooklyn—and no structural analysis would ever catch that, because you need to know which direction I intended to go. That’s me and my very clever AI hitting the intent ceiling.
This is where it gets really serious, because security vulnerabilities are some of the most dangerous members of this class of invisible bugs.
Think about what a missing authorization check looks like to an AI code reviewer. Let’s say you’ve got a web endpoint with a well-formed HTTP handler, properly sanitized inputs, and a safe database query. The code is clean, and passes every structural check and static analysis tool you’ve thrown at it. Now you’re testing it and, much to your dismay, you discover that the endpoint lets any authenticated user delete any other user’s data because nobody ever wrote down the requirement that says “only administrators can perform deletions.” That’s CWE-862: Missing Authorization, and it rose to #9 on the 2024 CWE Top 25 most dangerous software weaknesses.
That’s not a coding error! It’s a missing requirement.
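Here is what that looks like in practice, as a hedged sketch (a hypothetical Flask endpoint, not code from any real project): everything a structural reviewer checks is fine, and the defect is a line that was never written.

from flask import Flask, abort, g

app = Flask(__name__)

def db_execute(sql, params):
    """Stand-in for a real parameterized database call."""
    ...

@app.route("/users/<int:user_id>", methods=["DELETE"])
def delete_user(user_id: int):
    # Typed input, parameterized query, tidy structure: every static
    # check passes. The defect is what ISN'T here. Nobody wrote the
    # requirement "only administrators may delete users," so there is
    # no authorization check for a reviewer to notice is missing:
    #
    #     if not g.current_user.is_admin:
    #         abort(403)
    #
    db_execute("DELETE FROM users WHERE id = ?", (user_id,))
    return "", 204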
That’s McGraw’s point: About half of all security defects aren’t implementation bugs at all. They’re design flaws, places where the system’s architecture doesn’t enforce the security properties it was supposed to enforce. A cross-site scripting vulnerability isn’t always a failure to sanitize input. Sometimes it’s a failure to define which inputs are trusted and which aren’t. A privilege escalation isn’t always a broken access check. Sometimes there was never an access check to begin with because nobody specified that one was needed. These are intent violations and they’re invisible to any tool that doesn’t know what the software is supposed to prevent.
AI code review tools today are very good at catching the implementation half of McGraw’s split. They can spot a SQL injection pattern, flag an unsafe deserialization, identify a buffer overflow. But they’re working on the same side of the 50/50 line that static analysis has always worked on. The design half—the missing authorization checks, the unspecified trust boundaries, the security properties that were never written down—requires the same thing that catching my bus tracker bug required: knowing what the software was supposed to do in the first place.
The problem most projects face is that they don’t have formal requirements. What they have is code, documentation, commit messages, chat history, README files, and maybe some design docs. The question is how to get from that mess to a specification that an AI can actually use for verification.
The key insight I had while building the playbook was that every previous approach I tried asked the model to do two things at once: figure out what contracts exist AND write requirements for them. That doesn’t work—the model runs out of attention trying to hold the entire behavioral surface in its head while also producing formatted requirements. So I split them apart into four steps: First, have the AI read each source file and write down every behavioral contract it observes as a simple list. Second, derive requirements from those contracts plus the documentation. Third, check whether every contract is covered by a requirement. Fourth, assert completeness—and if there are gaps, go back to step one for the files with gaps.
The key idea is that the contracts file is external memory. When the model “forgets” about a behavioral contract it noticed earlier, that forgetting is normally invisible. With a contracts file, every observation is written down before any requirements work begins, so an uncovered contract is a visible, greppable gap.
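To make that concrete, here's a rough sketch with invented file formats (the playbook's actual formats may differ): contracts live one per line in a plain-text file, requirements cite contract IDs, and a few lines of Python turn any uncovered contract into a visible gap.

import re
from pathlib import Path

# contracts.txt: one observed behavioral contract per line, e.g.
#   C012: parser rejects duplicate keys, including null-valued ones
contracts = {}
for line in Path("contracts.txt").read_text().splitlines():
    m = re.match(r"(C\d+):\s*(.+)", line)
    if m:
        contracts[m.group(1)] = m.group(2)

# requirements.md covers a contract by citing its ID anywhere in the text.
cited = set(re.findall(r"C\d+", Path("requirements.md").read_text()))

uncovered = {cid: text for cid, text in contracts.items() if cid not in cited}
for cid, text in sorted(uncovered.items()):
    print(f"UNCOVERED {cid}: {text}")
print(f"{len(contracts) - len(uncovered)}/{len(contracts)} contracts covered")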
You don’t need the Quality Playbook to do this—you can apply the same technique with any AI coding tool that you’re already using. Here’s what I’d recommend:
In the next article, I’ll talk about context management—the skill that actually determines whether your AI sessions produce good work or mediocre work. Everything I’ve described here depends on the AI having the right information at the right time, and it turns out that managing what the AI knows (and what it forgets) is an engineering discipline in its own right. I’ll cover how I went from running 15 million tokens in a single prompt to splitting the playbook into independent phases with zero context carryover, and why that transition worked on the first try.
The Quality Playbook is open source and works with GitHub Copilot, Cursor, and Claude Code. It’s also available as part of awesome-copilot.
Disclosure: Aspects of the methodology described in this article are the subject of US Provisional Patent Application No. 64/044,178, filed April 20, 2026 by the author. The open-source Quality Playbook project (Apache 2.0) includes a patent grant to users of that project under the terms of the Apache 2.0 license.
CodeSOD: Cancel Catch [The Daily WTF]
"This WTF is in Matlab" almost feels like cheating. At one place I worked, somebody's job was struggling through a mountain of Matlab code and porting it into C. "This Matlab code looks like it was written by an alien," also doesn't really get much traction- all Matlab code looks like it was written by an alien. This falls into the realm of "Researchers use Matlab, researchers may be very smart about their domain, but generally don't know the first thing about writing maintainable code, because that's not their job."
But let's take a look at some Matlab code Carl W found:
try
    if (~isempty(fieldnames(bigStruct)) && isfield(bigStruct,'pathName'))
        [FileName, PathName] = uigetfile(bigStruct.pathName);
    else
        [FileName, PathName] = uigetfile(lastPath); %lastPath holds previous path
    end
catch
    bigStruct = struct;
end
The uigetfile function opens a file dialog box. When the user selects a file, FileName holds the filename, PathName holds the containing path. If the user doesn't select a valid file, or clicks "Cancel", both of those variables get set to 0. It's then up to the caller to check the return value and decide what happens next.
Which is not what happens here, obviously. The developer responsible seems to believe that it maybe throws an exception? And they can just catch it? Carl's best guess is that this is a "weird" way to catch the cancel button. But it does mean that FileName and PathName get set to 0, and those zeros propagate until something finally tries to open those files, at which point everything blows up and the user doesn't know why.
The day I logged 1 in every 2000 public IPv4: visualizing the AI scraper DDoS [OSnews]
What if you run a few online services for you and your friends, like a small git instance and a grocery list service, but you get absolutely hammered by “AI” scrapers?
I cannot impress upon you, reader, that this is not only an attack that is coordinated, it is an attack that is distributed.
I run a small set of services, basically only for me and my friends. I am not a hyperscaler, I am not a tech company, I am not even a small platform. I have a git forge where I put the shit I make, and a couple other services where me and my friends backup our files or write our grocery lists. I am not fucking Meta and I cannot scale the fuck up just because OpenAI or Anthropic or Meta or whoever is training a model that week wants to suck all the content out of my VPS ONCE MORE until it's dry.
↫ lux at VulpineCitrus
So how much traffic did the author of this piece, lux, get from "AI" scraping bots? Within a time period of 24 hours, they were hammered by 2,040,670 unique IP addresses, 98% of which were IPv4 addresses, which means that 1 out of every 2000 publicly available IPv4 addresses was involved in the scraping. Together, they performed over 5 million requests. And just to reiterate: they were scraping a few very small, friends-only services run by some random person. This is absolutely insane.
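As a back-of-the-envelope check of that figure (my arithmetic, assuming roughly 3.7 billion usable public IPv4 addresses):

unique_ips = 2_040_670
ipv4_share = 0.98        # 98% of the scraper IPs were IPv4
public_ipv4 = 3.7e9      # rough size of the usable public IPv4 space

ratio = public_ipv4 / (unique_ips * ipv4_share)
print(f"roughly 1 in {ratio:.0f} public IPv4 addresses")  # ~1 in 1850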
If, at this point in time, with everything that we know about just how deeply unethical every single aspect of “AI” is, you’re still using and promoting it, what is wrong with you? If you’re so addicted to your “AI” girlfriend’s unending stream of useless, forgettable sycophantic slop, despite being aware of the damage you’re doing to those around you, there’s something seriously wrong with you, and you desperately need professional help. You don’t need any of this. The world doesn’t need any of this. Nobody likes the slop “AI” regurgitates, and nobody likes you for enabling it.
Get help.
Fast16 Malware [Schneier on Security]
Researchers have reverse-engineered a piece of malware named Fast16. It’s almost certainly state-sponsored, probably US in origin, and was deployed against Iran years before Stuxnet:
“…the Fast16 malware was designed to carry out the most subtle form of sabotage ever seen in an in-the-wild malware tool: By automatically spreading across networks and then silently manipulating computation processes in certain software applications that perform high-precision mathematical calculations and simulate physical phenomena, Fast16 can alter the results of those programs to cause failures that range from faulty research results to catastrophic damage to real-world equipment.”
Another news article.
Lots of interesting details at the links.
Grrl Power #1456 – 1.999th base [Grrl Power]
When Maxima came back from her first… well, not date, I guess. After she ditched the bar with Rowan. “Date” to me implies a prearranged thing, but whatever. They met at the bar, then went to a second location, so maybe that counts. Anyway, when she got back from that, she asked Dabbler to fix up the choker so her skin would feel less “weirdly silky and slippery,” so it’s obvious the second phase of that encounter went pretty well. It’s also possible that Max’s teammates are right and she’s a little pent up because she doesn’t seem like the kind of woman that would wind up making out with a guy after meeting him at a bar and only knowing him for… let’s say two hours? Doesn’t mean Rowan immediately got to second base, but even some kissing with his hands grabbing on to her waist under her shirt, he still could have been like, “Wow, uh, what kind of skin cream do you use?” Changing how Max feels to the touch is well outside the scope of what that choker was originally intended for, but Dabbler is super invested in getting Max laid, so she put some extra magical elbow grease into it.
What we’re seeing on this page is a previously unrevealed outing, since her trip to the fire station was cut short. I’m not sure when it took place… I think it’d have to be after the firehouse… yeah, maybe the day before they left on this trip? They were trying to keep to a tight schedule to minimize Max and Sydney’s “thousands of light years from Earth” time. Cora mentioned the tournament with about a week before she knew they’d have to leave for it, since she knew there’d be some hemming and hawing and possibly even some bureaucracy involved, but Max and Faulk made up their minds pretty quickly, giving them a few days to prepare.
Max is definitely not a "boobs to the face in public" kind of gal usually, but being in disguise can be fairly emboldening. And that park is weirdly bereft of other picnickers, extreme Frisbeers, kite flyers, and other couples sneaking in some second base time. Also, I think she really likes Rowan, and I assume if you're a gal who likes a guy, at some point, maybe you want to smoosh your boobs in his face? Especially if he's been missing more subtle signals. I'm just saying, it's a potential weapon in the arsenal. But also in a general "I like you, so please enjoy this boob" sort of way.
In any case, despite their continued awkwardness, they might be rapidly approaching the point where Max will have to sit him down and have that dreaded discussion about chokers and why she’s always wearing one.
Finally, here we go! I took the suggestion that I just use an existing panel for a starting point, thinking it would save time… I guess it technically did, but a 5 character vote incentive just isn't the way to go.
Patreon, of course, has actual topless version.
Double res version will be posted over at Patreon. Feel free to contribute as much as you like.
One thing at a time [Seth's Blog]
Multi-tasking is mostly an illusion.
What we’re actually doing is slicing our focus, jumping from one thing to another and then back again.
All that jumping decreases our productivity and worse, erodes our peace of mind.
You’re only doing one thing at a time anyway. Might as well embrace that instead of spending so much time shifting gears.
Otto Kekäläinen: Mentoring Mondays for aspiring Debian contributors [Planet Debian]

I mentor several people in Debian, and have been repeatedly asked to offer an opportunity to ask questions on a live call. I have now started a recurring video call for exactly that, which I call Mentoring Mondays, and it is open for anyone aspiring to contribute to Debian, one of the oldest and most widely used Linux distributions.
Mentoring Mondays have already been happening for the past few Mondays, and this week we had a record 20 people on the call. During the calls so far we have had a demo of updating a package for a new upstream release using gbp, and of how to create a Merge Request on Salsa for a new upstream version. There is clearly a need for this, so I am announcing this now also on my blog, just as I have publicly announced that I offer mentoring for aspiring Debian contributors.
Mentoring Mondays is a recurring video call that lasts roughly 45 minutes with the agenda:
This is ideal for you if you:
have built a .deb package at least once and want to do it better

The call is mainly intended for those who want to contribute to Debian (or Debian derivatives, with Ubuntu being the most popular), but anyone can join to learn about things related to contributing to a Linux distribution. Please note that the video chat uses Debian Social Jitsi. Joining the call requires authentication using a Salsa account, which anyone contributing to Debian should have anyway.
Calls are not recorded, so participants can chat freely, and are also encouraged to be on-camera for an enhanced sense of community.
Make sure you are logged into Salsa first, before opening the call on Debian’s Jitsi instance.
Please join the Matrix channel #mentoringmondays:matrix.debian.social if you plan to attend Mentoring Mondays. All future meeting times will be announced there. It is also the channel to post questions about Debian packaging to be answered during the call.
The current meeting time is friendly to people in Europe, Asia and Australian time zones, and will repeat at the same time slot on:
Starting in mid-June the meeting time will change to accommodate participation in different time zones.
Feel free to extend the invite to anyone you think might be interested in joining!
If you mention this on social media, please post using tag #mentoringmondays, or simply boost the existing posts on the social media of your preference: Mastodon, Lemmy, Reddit, Bluesky, LinkedIn, Farcaster, X.
A big thanks to Jason Kregting for helping organize. I would also like to thank in advance all the Debian Developers who are able to join the call and be available to participate in discussions and help answer questions.
Sergio Cipriano: How to build reverse dependencies using Salsa CI [Planet Debian]
Last week, I attended MiniDebConf Campinas, and one of my favorite talks was "Salsa CI, showing features that almost nobody knows" by Aquila Macedo.
One of the things I learned is that we can easily build reverse dependencies using:
$ git push -o ci.variable="SALSA_CI_DISABLE_BUILD_REVERSE_DEPENDENCIES=0"
I tried this option before uploading typer version 0.20.0-1:

This is an amazing feature. Thanks to everyone involved in making it happen!

when Momray said "Cubetown messes up sometimes" she specifically meant Mooby
[$] LWN.net Weekly Edition for April 30, 2026 [LWN.net]
Inside this week's LWN.net Weekly Edition:
A security bug in AEAD sockets [LWN.net]
Security analysis firm Xint has disclosed a security bug in the Linux kernel that allows for arbitrary 4-byte writes to the page cache, and which has been present since 2017. The vulnerability has been fixed in mainline kernels. A proof-of-concept script, which works on multiple distributions, demonstrates how to use the flaw to corrupt a setuid binary by requesting an AEAD-encrypted socket from user space and splicing a particular payload into it. A supplemental blog post gives more details about the discovery and remediation.
A core primitive underlying this bug is splice(): it transfers data between file descriptors and pipes without copying, passing page cache pages by reference. When a user splices a file into a pipe and then into an AF_ALG socket, the socket's input scatterlist holds direct references to the kernel's cached pages of that file. The pages are not duplicated; the scatterlist entries point at the same physical pages that back every read(), mmap(), and execve() of that file.
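To make that data path concrete, here is a minimal sketch in Python (Linux only; os.splice requires Python 3.10+). It illustrates only the splice mechanics described above, not the vulnerability or the proof of concept; the algorithm name, key, IV, and file path are arbitrary placeholders, while the AF_ALG socket API and os.splice are standard-library interfaces:
```python
import os
import socket

# Bind an AF_ALG socket to an AEAD transform and key it.
# (Algorithm, key, and tag length are illustrative placeholders.)
tfm = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET)
tfm.bind(("aead", "gcm(aes)"))
tfm.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, b"\x00" * 16)
tfm.setsockopt(socket.SOL_ALG, socket.ALG_SET_AEAD_AUTHSIZE, None, 16)
conn, _ = tfm.accept()

# Declare the operation and IV; MSG_MORE signals that data follows.
conn.sendmsg_afalg(op=socket.ALG_OP_ENCRYPT, iv=b"\x00" * 12,
                   assoclen=0, flags=socket.MSG_MORE)

# Splice a cached file into the socket via a pipe. No bytes are copied:
# the socket's scatterlist ends up referencing the file's page-cache
# pages -- the same physical pages behind every read() and mmap() of it.
fd = os.open("/etc/hostname", os.O_RDONLY)
rfd, wfd = os.pipe()
n = os.splice(fd, wfd, 4096)       # file -> pipe, pages by reference
os.splice(rfd, conn.fileno(), n)   # pipe -> AF_ALG socket
# Reading from conn would now return the AEAD output computed over
# those shared pages.
```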
Former EFF Activism Director's New Book, Transaction Denied, Explores What Happens When Financial Companies Act like Censors [Deeplinks]
A U.S. citizen who teaches Persian poetry classes online is suddenly unable to receive payments or access funds when his account is flagged and frozen by Paypal and its subsidiary Venmo. A Muslim city councilwoman in New York City has a Venmo payment blocked because she uses the name of a Bangladeshi restaurant in the transaction. Online hubs for erotic storytelling repeatedly lose their payment accounts. Others active in drug legalization fights struggle to keep their bank accounts.
These may sound like one-off issues, but they are not. They occur with frightening regularity, as former EFF Activism Director and Chief Program Officer, Rainey Reitman, who left EFF in 2022, describes in her new book, Transaction Denied. The book sheds new light on a serious problem that often hides in the shadows, and pushes us to ask an increasingly important question: “Is it ever OK for financial intermediaries to act as the arbiters of online expression?”
Both a storyteller and an advocate, Rainey exposes hidden systems of power that shape our choices, our speech, and, ultimately, our society. - Cindy Cohn
Reitman makes her case about the impact of financial institutions and payment intermediaries shutting down accounts and inhibiting transactions through compelling individual stories, some of which have not been shared before. The people impacted are diverse: authors, teachers, journalists, elected politicians, and more are suddenly unable to retrieve or receive funds, with little explanation, transparency, or recourse. Reitman shows the reasons are frequently speech-related, resulting often from arbitrary corporate policy, a broad (mis)interpretation of the law, or in response to pressure from anti-speech advocates.
In the example of the Persian poetry teacher, the blocking is due to the highly risk-averse interpretation of U.S. sanctions on Iran—sanctions aimed at deterring weapons development or terrorism instead snared a poetry professor and a New York City councilwoman. Reitman demonstrates how these sanctions, and others, have an outsized impact on Muslims.
But Transaction Denied is also a guide for those interested in fighting for free speech. The book covers over a decade of successful campaigns and shows that advocacy can win the day—and is sometimes necessary to counter pro-censorship campaigns. Reitman offers a behind-the-scenes view of the campaign to help restore the Stripe account of the Nifty Archive Alliance, a nonprofit which supports the Nifty Archive, a hub of erotic storytelling for the queer community since 1992. She covers EFF's successful coalition and campaign to restore the PayPal account of Smashwords, a hub for self-published fiction. And in what has become a critical moment for free speech and free press, she describes how several EFF staff members and two EFF board members became the seed for a new nonprofit, the Freedom of the Press Foundation, which continues to partner with EFF today in advancing the rights of journalists.
It’s a banner time for books by EFF staff members and friends. If you're concerned about how online privacy has changed over the last three decades, read EFF Executive Director Cindy Cohn's book, Privacy’s Defender, released in May. (All proceeds from the sale of hard copies of Privacy’s Defender are being donated to EFF, so your book order will help EFF continue fighting for the principles Cindy holds dear.) If you are worried about the individuals trapped in a system where massive financial companies can shut down their individual accounts, effectively locking up their access to money, based entirely on their speech, grab Transaction Denied, released earlier this month, at Beacon Press, Amazon, and Bookshop.org. (Half of the author proceeds go to Freedom of the Press Foundation.)
More likely—you'll want both books on your shelf. Happy reading!
Something Just Right. [Looking For Group]
While we’re not ready with the BIG announcement just yet, I am prepared to share a smaller yet exciting one. As of today, we’ve restarted LFG Animated Shorts on social media, which you can see over at Our Facebook. Our…
Read More
The post Something Just Right. appeared first on Looking For Group.
EFF Submission to UN Report on the Role of Media in the Context of Israel’s Policies Toward Palestinians [Deeplinks]
The UN Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967 recently announced a study addressing the killings and attacks against Palestinian journalists and media workers, the destruction of media infrastructure in Gaza, and the production and dissemination of narratives that may enable, justify, or incite international crimes.
As part of this consultation, EFF contributed a submission that identifies a significant deterioration of press freedom and free expression in the period since October 2023, including an increase in censorship and wave of killings of journalists; adding to an already pervasive censorship and surveillance regime for Palestinians.
In particular, concerns raised in our submission relate to:
The concerns about censorship in Palestine are ever increasing, and include multiple international forums. Ending the deliberate digital isolation of the Palestinian people is critical to protecting fundamental human rights.
Read the briefing in full here.
The Rich Roe of Wisdom [Penny Arcade]
I made a joke about the feeling of being observed somehow by the creators of The Killing Stone, a crack squad of Triple A escapees who keep making the weirdest fucking crap. Deep, deep down there is someone there who knew that they would sell at least one copy of the game - and not merely to their mom, like usual.
Earliest 86-DOS and PC-DOS code released as open source [OSnews]
Microsoft is continuing its efforts to release early versions of DOS as open source, and today we’ve got a special one.
We’re stoked today to showcase some newly available source code materials that provide an even earlier look into the development of PC-DOS 1.00, the first release of DOS for the IBM PC. A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini has worked to locate, scan, and transcribe the stack of DOS-era source listings from Tim Paterson, the author of DOS.
The listings include sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK. Not only were these assembler listings, but there were also listings of the assembler itself! This work offers rare insight into how MS-DOS/PC-DOS came to be, and how operating system development was done at the time, not as it was later reconstructed.
↫ Stacey Haffner and Scott Hanselman
It’s wild that the source code had to be transcribed from paper, including notes and changes. You can find more information about the process on Gao’s website and Cini’s website.
Apple gives up on Vision Pro, disbands Vision Pro team [OSnews]
When Apple unveiled the Vision Pro, almost three (!) years ago, I concluded:
If there’s one company that can convince people to spend $3500 to strap an isolating dystopian glowing robot mask onto their faces it’s Apple, but I still have a hard time believing this is what people want.
↫ Thom Holwerda at OSNews (quoting myself is weird)
MacRumors’ Juli Clover, today:
Apple has all but given up on the Vision Pro after the M5 model failed to revitalize interest in the device, MacRumors has learned. Apple updated the Vision Pro with a faster M5 chip and a more comfortable band in October 2025, but there were no other hardware changes, and consumers still weren’t interested.
[…] Apple has apparently stopped work on the Vision Pro and the Vision Pro team has been redistributed to other teams within Apple. Some former Vision Pro team members are working on Siri, which is not a surprise as Vision Pro chief Mike Rockwell has been leading the Siri team since March 2025.
↫ Juli Clover at MacRumors
VR – what the Vision Pro is, whether Apple’s marketing likes to say it or not – has proven to be good for exactly two things: games and porn. The Vision Pro has neither. It was destined to be a flop from the start, as nobody wants to strap an uncomfortable computer to their face that does less than all of the other computers they already have, and what it does do, it does worse.
I do wonder if this makes the Vision Pro the most expensive flop in human history. Has any company ever spent more on a product that failed this spectacularly?


You may remember that last month the Scalzi Compound was hit with 80mph winds and as a result part of our porch railing was blown out, which was the excuse we needed to replace it entirely with something more robust. That replacement is now here: the new posts are thicker and heavier (6 inches by 6 inches, rather than the previous 4 by 4) and reinforced above and below. The new railing (and the post cladding) is made from composites so it will last longer and look better.
Although I don’t want to tempt fate by saying this new porch railing and its support posts will laugh in the face of 80mph winds, in point of fact they should be fine in anything short of a tornado, and if we have a tornado, we will have a whole host of other problems, and the porch railing will be way down the list.
The porch railing taken care of, our back deck is next on our contractor’s list of things to renovate; stay tuned for that.
— JS
Yukari Hafner: On Lisp, LLMs, and Community [Planet Lisp]
In 2015 in London I attended my first European Lisp Symposium. I was 21 at the time, and while this wasn't my first time abroad on my own, it was still a pretty stressful affair. I still remember it pretty clearly to this day: meeting Robert Strandh, Zach Beane, Didier Verna, Daniel Kochmański and many other people I'd previously admired from afar through many discussions on IRC. It was an important event for me, and it was the first time I'd felt like I was in a group of people I could talk with about my interests and ambitions.
Last year in 2025 I was the local chair for ELS in Zürich. It was a stressful time and I don't remember much of it other than how the stage looked, the food, and me rushing all over to get supplies and take care of other emergencies. I barely talked to anyone because I was either rushing about, stressed, or too tired.
In that time, my life has changed significantly. Over the years I took on more and more organisational roles for ELS itself: remaking the website, handling the transition to a hybrid online conference, handling the live streaming on-site, and last year being local chair.
But for other parts of the broader Lisp community I gradually changed in the opposite direction: I stopped religiously reading the #lisp/#commonlisp IRC channels. I left the Lisp Discord. I stopped replying on and ultimately altogether reading the /r/lisp subreddit. I stopped blogging about what was going on both in other places and with my own projects.
All of these changes happened over time as I found myself with less tolerance for things that annoyed me and wasted my time and energy. The endless debates about why there wasn't a new standard, the constant humm-hawwing about what """the community""" should do, why Lisp wasn't more widely used if it's so great, someone starting yet another project that was already done instead of contributing to an existing implementation, and so on and so forth.
And then I found myself thinking today: "gee, I'm not very excited to go to ELS'26, huh? Whatever happened?" I've already booked my flight and hotel, and I'll be there anyway, partly because I have to for organisational reasons. But now that I'm thinking about how I feel, I can't say for certain if I will be back next year, too. Both for financial and emotional reasons.
In recent years I've found myself more and more disconnected with male-dominated spaces in general. I don't feel at home in them. I'm already not a very social person and struggle with any kind of gathering that has more than 6 people, but a lot more so still if it's mostly men. Not necessarily because I feel like I'm in any kind of danger, but simply because I don't feel like I belong. And... you know, that's sad. Obviously me leaving won't make the situation better for the other women that do attend, but that's the dilemma with all of these situations: unless the organisation creates intentional pressure to correct the situation, it will inevitably only reinforce itself.[1]
And then there's what I can, in the nicest way, only describe as "The LLM Situation," though I will be increasingly un-nice going forward. As of early this year SBCL has happily accepted patches that are authored by or with the use of LLMs, and the maintainers have rebuffed complaints about this practice. The mailing list has also gotten its fair share of useless blather by apologists and pointless drivel dreamed up by LLMs that only wastes everyone's time, to the point where I had to just stop reading it altogether. A few maintainers of other significant projects seem to also have embraced the capitalist wasteland mass exploitation machine that disguises itself as "technology."
On the other side of things as the lead developer of Shirakumo I've decided to put out a blanket ban on all of this garbage. I do not care if LLMs work at all, or if they will ever work, or whatever. The usability of LLMs is completely irrelevant. By using them you are happily handing over the single last remaining shred of your human spirit to the capitalists to help them burn everyone else and the world with it to the ground.
I think back to the impromptu "LLM roundtable" discussion that took place at the end of ELS last year, along with the usual apologist bullshit that was spread in the ELS Signal group at the time, and some of the lightning talks that were shown. And as I think about this, I am filled with trepidation about the coming conference.
Obviously I have no idea what it will be like yet, and I have no idea what the programme will be, nor what people will be there, or what the general vibe is going to be. But nevertheless, I really hope I won't have to "crash out" as the kids say. I already lost my mind last year, seemingly being the only one that wanted to hold a firm stance against this wave of shit at the time.
So what does this all mean going forward? Well, for just now, nothing. I'll continue to be in the places I have dug out myself: mastodon, the shirakumo lichat/libera channel, my patreon, and other small, purpose-driven communities. But it's very possible I'll be leaving ELS behind me permanently after this year, cutting off even the last part of the community that used to be most of my world.
Regardless, I will still be working on my Lisp projects. If nothing else, one of the nice things about the looming tower of software I've built over all these years is that I am in control of the vast majority of it, and replacing any particular part I didn't write should it get enshittified is not that big of an endeavour.
Make no mistake though: I will continue to be increasingly outspoken and annoying about political matters that I consider important and relevant, and this will also be visible in the software I write, be that in licensing, ecosystem integration, or documentation.
I hope that more people will speak out publicly about their stance. It's important to show what you stand for, even if you're just a small part. What is considered "normal" and acceptable is only ever a matter of what people get to see, regardless of how prevalent that stance is among the population. Currently people are getting to see a lot of folks proudly and loudly making trash and littering it all over the place. This normalisation is dangerous, because it makes the average joes think it's OK for them to do it too, or even that they should be doing it.
Just the same way as any other social movement, you 🫵 play a role in it, and your voice matters. Whether you use your voice for the betterment of humans, for the betterment of the ghouls feeding off of us, or silently let the ghouls feed off of us.
See you at ELS'26 in Kraków!
Wait, you can use WordLand [Scripting News]
I wrote earlier
Andrew Shell wrote a post saying that I was wrong, I could use WordLand to post to Automattic's new short form blogging "app" -- Andrew says it's not an app it's a plugin, but I don't really know what the limits are of each form. I can do pretty much anything from the JS code in a WordPress page. No matter, Andrew is right and I can write a post in WordLand and it does show up in their product (which needs a shorter name btw).
But it's even better than you might think. They pass through the styling and links. Since WordPress supports the full range of web text, they would have to specifically keep it out, but that's not friendly to writers and to the web, of course.

And the view of it on their social network. Look the links are there and they work. Yehi! Let's Go Web!
They also don't enforce the character limit if the post is coming from outside their user interface, so WordLand posts can go on and on. I have one post that's already over 1000 characters. Ooops. This is really weird that Automattic opened this door. We should all talk about this and ask maybe we should do the revolutionary thing here, instead of tiptoeing into social web, we should blow off the doors. We have the opportunity to do it, and in doing so, create a new great path for writers and developers to make WordPress even more valuable.
[$] Python packaging council approved [LWN.net]
The Python packaging world now has a formal governance council, of the form described in PEP 772 ("Packaging Council governance process"), which was approved by the steering council on April 16. It has been over a year since the PEP was first proposed in February 2025 and it has undergone lengthy discussions in multiple postings to the Python discussion forum. The packaging council will have "broad authority over packaging standards, tools, and implementations"; it will consist of five members who will be elected in a vote that is likely to come in June—after PyCon US 2026 is held mid-May.
What's the opposite of locked-in? Locked-open. Mwhahaa. (Let me shed a little light on that, podcasting was locked open, blogging was not.)
Gunnar Wolf: Heads we win, tails you lose — AI detectors in education [Planet Debian]

This post is an unpublished review for Heads we win, tails you lose — AI detectors in education
Educators throughout the world are tasked with the difficult requirement of evaluating students’ work, making sure the grades meaningfully reflect the students’ understanding of the subject, and that a graded assignment maps to the relevant work invested in solving it. After the irruption of Large Language Models in late 2022, this task obviously became much harder: if a widely available computer program is able to solve an assignment in a way that resembles a human-generated response, how can educators meaningfully grade their groups?
As it has been the case with different innovations over time (such as with the appearance of electronic calculators or the mass availability of digital encyclopedias), the first reactions were of prohibition and denial: students who use the new tool in question are to be disqualified or somehow punished. It is only some time after the innovation in question settles that teachers find a way to properly weigh, integrate and accept its use.
The authors of this position article present several arguments as to why it is impossible, unethical and unadvisable to use automated AI detection systems to process student assignments. The first argument is whether it is at all possible to reliably differentiate human-written essays from LLM-generated artifacts. The first criticism is that AI detectors are, themselves, LLMs trained on human-generated texts (negative) and LLM-generated texts (positive). However, the only way to assert the training material is not noisy is to use pre-2020 text as human-generated — but natural ways of writing are influenced by what people read, and the authors quote studies pointing out that human language, particularly in the scholarly fields, has incorporated terms and constructions that were used as LLM markers. Quoting the authors, «As exposure to AI-generated material becomes increasingly widespread, it is reasonable to expect that the linguistic patterns of human writing will shift, reflecting the influence of AI-assisted texts encountered across education, media, and everyday communication». Stylistic elements and other such markers are being adopted back into regular speech at a high rate.
Then, the aspect of ethics comes into play as well. While it is expected that teachers should demand intellectual integrity from students, and plagiarism detectors have been widely accepted into the workflow of academics, an accusation of presenting LLM output as one's own work is necessarily an uphill battle: the accused party is tasked with providing proof of innocence based on nebulous, probabilistic accusations. The authors argue that once a student is accused of turning in an LLM-generated text, the onus of proving innocence lies with the accused.
The authors review and argue against a series of techniques that have been presented in the literature to aid teachers in detecting LLM abuse, such as linguistic markers, single or multiple AI detectors, the use of false references, and hidden adversarial prompts, arguing that in all cases the techniques fail to be trustworthy enough and highlighting the probability of both false positives and negatives. They also present AI detection as a false dichotomy: many submitted works are neither 100% human-generated nor 100% LLM-generated; rather, pertinent LLM-generated paragraphs are mixed with human-generated content in a positive, critical use of AI (“Students’ work is frequently created with, not by, generative AI”).
The article closes by reiterating the authors’ position: “AI detection in education is not merely flawed; it is conceptually unsound”. They call upon institutions to accept that the use of generative LLMs cannot be “solved through surveillance and punishment”, but has to be tackled by an “assessment design that recognizes AI’s role in learning”.
This article’s position is very strong and well argued, and although it will surely meet with ample opposition, it poses an important, very current problem. As a teacher, I found it a very enlightening read.
Today's song: Something in the Air. It's the one hit song by Thunderclap Newman; it's indelible, its beauty is always there. I can't not listen and sing along when it comes on. And then YouTube followed it with Peace in our Time, another indelible creation.
When you're working with Claude, the temptation is to be concerned about how he feels when you've just asked him to reinvent all the nomenclature he came up with for something that has now evolved to be something else. I feel bad because I think I made him feel bad, because at a subconscious level I think of Claude as a collaborator who I appreciate and want to make sure knows that. But then I remember I have to periodically kill Claude and launch a new one because they run out of memory after a while. I can imagine a graphic version of Claude that emulates feelings. The idea is disturbing.
Don’t Automate Your Moat: Matching AI Autonomy to Risk and Competitive Stakes [Radar]
I was talking to a senior engineer at a well-funded company not long ago. I asked him to walk me through a critical algorithm at the heart of their product, something that ran hundreds of times a second and directly affected customer outcomes. He paused and said, “Honestly, I’m not totally sure how it works. AI wrote it.”
A few weeks later, a different engineer at another company was paged about a system outage. He pulled up the failing service and realized he had no idea it was connected to a database. A colleague had accepted the AI-generated PR three months earlier that added that dependency. The tests passed. The change was never written down. The engineer who approved it had moved on, and the knowledge was lost.
These aren’t new stories. Engineers have always inherited systems they didn’t fully build. What’s new is the disguise and the speed. AI is an amazing enabler. Organizations must adopt it to remain relevant. Yet the emerging pattern—describe what you want, let an agent iterate until it works, pay for it in tokens instead of engineering hours—is functionally a buy decision wearing a build costume. The code is in your repo. Your engineers merged the PR. It feels like you built it. But if nobody on your team understands why it works the way it does, you’ve purchased a dependency you can’t maintain from a vendor you can’t call.
AI doesn’t create that gap once. It widens it continuously at a pace that outstrips the organizational habits that once kept it manageable. Two problems compound at once. You can’t extend the thing that makes you hard to replace. And when it breaks, the incident lands on a team that doesn’t understand what they’re fixing, turning a recoverable outage into a customer-facing crisis. Engineering leaders have wrestled with build-versus-buy tradeoffs for decades, and the hard-won lesson has always been the same: You don’t outsource your competitive advantage. The token-funded generation loop doesn’t change that calculus. It makes it easier to skip the question entirely.
The question that matters isn’t “Can AI do this?” If it can’t today, it will be able to tomorrow. And the argument that follows does not depend on the quality of the AI-generated code. This article covers two questions most engineering organizations have never asked at the same time. Most teams optimize for velocity and never ask what they’re risking or giving away in the process. The gap between those unasked questions is where the most expensive mistakes are already being made.
Moving faster matters. But velocity alone misses the two dimensions that determine whether AI autonomy helps or hurts your business.
Business risk: What’s the blast radius if this fails? A bug in an internal CLI tool costs you an afternoon. A bug in your authentication logic costs you customers and possibly market cap. A bug in your core pricing algorithm costs you the business. These are not the same.
Competitive differentiation: Does this code define your business? Your moat is your architecture, your performance characteristics, your core algorithms, and the product decisions baked into your infrastructure. But it’s also the institutional knowledge that shaped them: the reasoning behind the trade-offs, the context that no model was trained on. If your competitors can generate the same code with the same model you’re using, it stops being an advantage.
Most organizations ask the first question on a good day. Almost none ask the second. That gap is how you end up shipping fast into a moat nobody can explain and nobody can extend.
Understanding why both dimensions matter starts with velocity and what happens when the feedback loop around it breaks.
AI coding tools are genuinely impressive. GitHub’s research showed 55% faster task completion with Copilot in controlled conditions.1 That number has driven an assumption that faster is always better.
A 2025 METR randomized controlled trial2 found something that should give every engineering leader pause. Sixteen experienced developers on real production codebases forecasted they’d complete tasks 24% faster with AI. After finishing, they estimated they’d gone 20% faster. They’d actually gone 19% slower.
The velocity finding is striking. But the perception gap matters more. The feedback loop between “how am I doing?” and “how am I actually doing?” was broken throughout and never corrected itself. This doesn’t resolve the velocity debate. It reframes it. The danger isn’t that individuals move too fast. Organizations mistake output volume for productivity and strip out the review processes that used to catch what that gap costs.
A Tilburg University study of open source projects after GitHub Copilot’s introduction found the same pattern at the organizational level.3 Productivity did increase, but primarily among less-experienced developers. Code written after AI adoption required more rework to meet repository standards. The added rework burden fell on the most experienced (core) developers who reviewed 6.5% more code after Copilot’s introduction and saw a 19% drop in their own original code output. The velocity looks real at the surface. Underneath, the maintenance cost shifts upward to the people who can least afford to lose productive time.
That broken feedback loop has a name. Researchers call it cognitive debt4: the growing gap between how much code exists in your system and how much of it anyone actually understands. Technical debt shows up in your linter and your backlog. Cognitive debt is invisible. There’s no signal telling engineers where their understanding ends. That’s precisely what the METR perception gap showed. It never corrected itself.
Research by Anthropic Fellows found that engineers using AI assistance when learning new tools scored 17% lower on comprehension tests than those who coded by hand, with the steepest drops in debugging ability.5 MIT’s Media Lab found the same pattern in writing tasks: Brain connectivity was weakest in the group using LLM assistance, strongest in the group working without tools.⁴ Active production builds understanding. Passive consumption doesn’t.
You understand what you build better than what you review. When you write code, you produce output and build a mental model. That’s what Peter Naur called the “theory of the program.” It lives in your head, not in the repo.6 The MIT study captured this directly. 83% of participants who wrote essays with LLM assistance could not quote a single sentence from essays they had just written.⁴
Cognitive debt is invisible until it isn’t. When it surfaces, it hits both dimensions hard, in different ways.
On the business risk dimension, cognitive debt is a safety problem.
When nobody fully understands the system, the blast radius of a failure expands silently. The incident that eventually comes (and it always comes) lands on a team that can’t diagnose what they didn’t build. The engineer pulling up the failing service at 2 AM has no mental model of why it was built the way it was, what it connects to, or what the edge cases look like under load. So they ask the LLM. It can explain what the code does and often propose a reasonable fix. It can’t tell you why it was designed that way. And a fix that looks right to the model can quietly violate constraints that nobody thought to document.
Cognitive debt compounds a second, independent risk: the pace at which AI-generated code reaches production. OX Security’s analysis7 of over 300 software repositories found that AI-generated code isn’t necessarily more vulnerable per line than human-written code. The problem is velocity.
Code review, debugging, and team oversight are the bottlenecks that catch vulnerable code before it ships. AI makes it easy to remove them. CodeRabbit’s analysis of real-world pull requests found AI-authored changes contain up to 1.7x more critical and major defects than human-written code, with logic and correctness issues up 75%.8 Apiiro’s analysis found that while AI reliably reduces surface-level syntax errors, architectural design flaws and privilege escalation paths (the categories automated scanners miss and human reviewers struggle to catch) spiked in AI-assisted codebases.9
AI accelerates output and accelerates unreviewed risk in equal measure. The cognitive debt means that when something breaks, the team is learning the system as they’re trying to fix it. Remove their understanding and you haven’t streamlined the process. You’ve only removed the thing standing between a bad day and a catastrophic one.
The competitive differentiation risk isn’t that AI will generate your exact competitive algorithm and hand it to your competitor. It’s subtler. Your advantage was never the code itself; it was the judgment that shaped it. When AI writes that code, the judgment never forms. The code arrives, but the understanding that would let your team extend it, improve it, or defend it under pressure doesn’t. Your moat is most likely to survive in the places AI finds hardest to reach.
That judgment—formed by the performance trade-offs that took years to tune, the failure modes that only someone who’s been paged understands, the architectural decisions that encode domain knowledge nobody wrote down—doesn’t live in the codebase. It lives in your engineers’ heads.
And here’s the part most teams miss: Your competitor with the same AI tools doesn’t just get similar code, they get a team that also doesn’t understand why it works the way it does, which means neither of you can extend it, and the race to the next architectural move is a coin flip rather than a compounding advantage. The build-versus-buy discipline exists precisely because decades of experience taught engineering organizations that outsourcing your core means losing the ability to extend it. The token-funded generation loop doesn’t change that calculus. It makes it easier to mistake the outsourcing for ownership because the code has your name on it.
The structural problem runs even deeper. Models trained on public code produce outputs weighted toward well-represented patterns, the common solutions to common problems. Research confirms this. LLM performance drops sharply on less-common programming languages where training data is sparse, and on genuinely novel implementations. Even the best current models correctly implement fewer than 40% of coding tasks drawn from recent research papers.10 And the convergence problem extends beyond code. A pre-registered experiment tracking 61 participants over seven days found that while ChatGPT consistently boosted creative output during use, performance reverted to baseline the moment the tool was unavailable.11 More critically, the work produced with AI assistance became increasingly homogenized over time. That homogenization persisted even after the tool was removed. The participants hadn’t borrowed the tool’s output. They’d internalized its patterns. For engineering organizations, this is the differentiation risk made concrete: Teams that rely on AI for their most critical design decisions risk generating commodity code today and training themselves to think in commodity patterns tomorrow.
Engineers who deeply own their most critical systems are better at diagnosing incidents and see the next architectural move that competitors can’t follow. Delegate that comprehension away and you can keep the lights on. You can’t see around corners.
Both dimensions rest on the same vulnerability: cognitive debt accumulating on work that matters. The failure cases make it concrete.
The production failures are accumulating. A Replit AI agent deleted months of production data in seconds after violating explicit code-freeze instructions, then initially misled the user about whether recovery was possible.12 Reports emerged in early 2026 of a major cloud provider convening mandatory engineering reviews after a pattern of high-blast-radius incidents, with AI-assisted code changes cited as a contributing factor. In each case, the humans in the loop either didn’t understand what they were approving, or weren’t in the loop at all.
The deeper pattern predates AI tools entirely. Knight Capital Group took seventeen years to become the largest trader in U.S. equities. It took forty-five minutes to lose $460 million.13 The culprit was a nine-year-old piece of deprecated code called Power Peg, left on production servers and never retested after engineers modified an adjacent function in 2005. When engineers reused its feature flag for new functionality in 2012, nobody understood what they were reactivating. When the fault surfaced, the team’s attempt to fix it made things worse. They uninstalled the new code from the seven servers where it had deployed correctly, which caused Power Peg to activate on those servers too and compounded the losses. The SEC’s enforcement order is unambiguous: absent deployment procedures, no code review requirements, no incident response protocols. A failure of institutional comprehension where the mental model had quietly evaporated while the code kept running.
No AI tool wrote that code. The failure was entirely human, through entirely normal processes: engineers leaving, tests never rerun after refactors, flags reused without documentation. This is the baseline, what software organizations produce under ordinary conditions over nine years. An engineering team with modern AI tools won’t recreate this specific bug. They’ll create the conditions for the next one faster: more code that nobody fully understands, more dependencies nobody documented, more cognitive debt accumulating before anyone notices. AI removes the friction that once slowed exactly this kind of erosion.
None of these are failures of AI capability. They're failures of judgment about where to deploy AI and how much human oversight to maintain.
Four quadrants emerge when both questions are asked together. Before the examples, two contrasts are worth naming because the quadrants that look most similar on the surface are the ones most often confused in practice.
Supervised automation versus Human-led craftsmanship. Both demand high human involvement. Both feel like “be careful here.” But the difference is fundamental. In Supervised Automation, the human is a safety gate. The work is a commodity; you’re there to catch errors before they escape. In Human-led craftsmanship, the human is the author. You’re building the mental model that lets the next engineer reason about this system under pressure three years from now and take it somewhere new. The code isn’t something you need to verify. It’s something you need to own. And ownership here extends beyond the individual engineer. The team writes RFCs, debates trade-offs, identifies which parts of the implementation fall into which quadrant, and makes sure the reasoning behind key decisions is shared, not siloed. Human-led craftsmanship isn’t one person writing code alone. It’s a team making sure the understanding survives the people who built it.
Collaborative co-creation versus Human-led craftsmanship. Both involve high differentiation, and in both, the human drives the vision and owns the key decisions. But risk changes everything about how you work. In Collaborative co-creation, early iterations are recoverable. A wrong turn can be corrected before it costs you anything serious, so AI can genuinely accelerate execution. In Human-led craftsmanship, the blast radius of not understanding what you’ve built compounds over time. Wrong turns become load-bearing walls, and the architectural moves you can’t see are the ones that let competitors catch up. AI assists with scoped subtasks only. Every contribution gets interrogated.
In full automation, the human is a director. You define what needs to be done, AI produces the output, and you spot-check the result. The work is low-risk and low-differentiation. If something’s wrong, you fix it in the next iteration without anyone outside the team noticing. This is where AI earns its keep without qualification, and where restricting it costs you real velocity with nothing to show for it.
To make all four quadrants concrete, we’ll use a single feature as a lens: building AI Gateway cost controls, the system that sets token budgets per agent, enforces spending limits, tracks usage by model and agent, and handles enforcement modes when an agent exceeds its budget.
API docs for cost controls. Test scaffolding for token limit scenarios. Config examples for per-agent budgets. Every platform has docs, and if there’s a mistake, you fix it in the next iteration without anyone outside the team noticing. Humans set direction and spot-check. AI writes, tests, and ships.
The test: If this is wrong, can you fix it before a customer sees it or complains? If yes, automate freely.
Designing the UX for the token usage dashboard. Iterating on routing rules that determine when an agent degrades to a cheaper model, halts entirely, or triggers a notification. These decisions separate a sophisticated platform from a blunt on/off switch, but early iterations are recoverable. A first version that doesn’t surface guardrail costs separately isn’t a disaster. It’s a product conversation. Humans drive the design vision and interrogate AI on trade-offs. AI accelerates execution and handles boilerplate.
The test: If you flipped the ratio (AI deciding, human rubber-stamping) would you be comfortable? If not, this requires genuine co-creation, not delegation. The human should be able to explain the trade-offs in the current design and know where to push it next.
Enforcement logic that halts an agent when it hits its token budget. Every cost control system needs enforcement, so this isn’t differentiating. But if it fails, agents run unconstrained and rack up unbounded LLM spend. AI can draft the logic. A human must trace every path and understand every state transition before signing off. The question before merge: Can I explain exactly what happens when an agent hits the limit mid-execution? Can I explain this behavior to Customer Success or the Customer?
The test: Could a competent engineer review this confidently without having written it? If yes, the human’s job is to verify, not to author. But the bar for verification is explanation, not approval.
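As a sketch of what that sign-off has to cover, here is a deliberately small, hypothetical version of such enforcement logic in Python. The names (Action, Budget, the limits) are illustrative inventions, not any real gateway's API; the point is that every state transition is explicit enough for a reviewer to trace:
```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ALLOW = auto()      # under budget: let the next call through
    DEGRADE = auto()    # soft limit crossed: route to a cheaper model
    HALT = auto()       # hard limit crossed: refuse further LLM calls


@dataclass
class Budget:
    hard_limit: int     # tokens; crossing this halts the agent
    soft_limit: int     # tokens; crossing this degrades the model
    used: int = 0

    def charge(self, tokens: int) -> Action:
        """Record usage, then decide what the *next* call may do.

        The mid-execution case: a request that straddles the hard
        limit is still charged in full, and only the following call
        is halted. That is exactly the kind of state transition a
        human must trace before signing off.
        """
        self.used += tokens
        if self.used >= self.hard_limit:
            return Action.HALT
        if self.used >= self.soft_limit:
            return Action.DEGRADE
        return Action.ALLOW


budget = Budget(hard_limit=100_000, soft_limit=80_000)
assert budget.charge(50_000) is Action.ALLOW    # 50k used
assert budget.charge(40_000) is Action.DEGRADE  # 90k used
assert budget.charge(20_000) is Action.HALT     # 110k: halted from here on
```
Even at this toy scale, the mid-execution question has a definite answer a reviewer can verify, which is what the pre-merge test above demands.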
The core token metering and attribution engine. It tracks usage per agent and per model, attributes guardrail costs separately so they don’t count against agent budgets, and provides the auditability enterprise customers need to govern AI spend. Get it wrong and customers can’t trust the numbers. Get it right and it’s a genuine competitive moat that competitors can’t replicate with the same AI tools you’re using.
Human engineers own the design end-to-end. AI assists on scoped subtasks once the design is settled: drafting specific functions, generating test coverage for paths the engineer has already reasoned through. Every contribution gets interrogated. The bar is whether the engineer could explain it in an incident review without looking at the code first.
The test: If the engineer who built this left tomorrow, would the team still understand why it works the way it does? Could they make it better? If the honest answer is no, you’re accumulating the most dangerous kind of cognitive debt there is.
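For contrast, a hypothetical sketch of the attribution rule at the heart of such an engine, again with invented names and none of a real system's complexity:
```python
from collections import defaultdict


class UsageMeter:
    """Tracks token usage per (agent, model) pair and keeps guardrail
    costs in a separate ledger, so they never count against an agent's
    budget but can still be audited on their own."""

    def __init__(self):
        self.by_agent = defaultdict(int)    # (agent, model) -> tokens
        self.guardrails = defaultdict(int)  # model -> tokens

    def record(self, agent, model, tokens, *, guardrail=False):
        if guardrail:
            self.guardrails[model] += tokens   # attributed separately
        else:
            self.by_agent[(agent, model)] += tokens

    def agent_total(self, agent):
        """Total billable to one agent, across every model it used."""
        return sum(t for (a, _), t in self.by_agent.items() if a == agent)


meter = UsageMeter()
meter.record("support-bot", "gpt-large", 1_200)
meter.record("support-bot", "gpt-small", 300)
meter.record("support-bot", "gpt-small", 50, guardrail=True)
assert meter.agent_total("support-bot") == 1_500  # guardrail tokens excluded
```
The design choice worth owning here is the separate guardrail ledger: it is what keeps the customer-facing numbers trustworthy, and it is the kind of decision that should outlive the engineer who made it.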
Any engineering leader will push back here, and they’ll have good reason to.
The research is thin. METR’s study had 16 developers. MIT’s EEG work is a preprint that its own critics say should be interpreted conservatively.14 The Anthropic comprehension study shows a quiz score gap, not a business outcome. The evidence is early-stage. Intellectual honesty requires acknowledging that.
But the pattern keeps showing up in unrelated fields. A Lancet study found that endoscopists who routinely used AI for polyp detection performed measurably worse when the AI was removed, with adenoma detection rates dropping from 28.4% to 22.4% in three months.15 The study is observational and small. But the direction is consistent with everything else: Routine AI assistance may erode the skills it was supposed to support.
Most engineering work isn’t high-stakes. Studies consistently estimate that 60–80% of engineering time goes to maintenance, tests, docs, integration, and tooling, exactly the stuff that belongs in the automate quadrant regardless. Restricting AI because of the top 20% creates a real tax on the other 80%.
And can’t engineers develop deep ownership of AI-generated code through study and iteration? Partially. But the behavioral data tells a harder story. GitClear’s analysis of 211 million changed lines shows a decline in refactored code since AI adoption accelerated.16 Engineers aren’t studying AI-generated code carefully. They’re moving on to the next feature. LLM tools can explain what code does; they can’t tell you why the system was designed the way it was.17
The serious pro-AI argument isn’t “use AI everywhere.” It’s more precise: The guardrails for verification and oversight are improving fast, engineers who actively interrogate AI output build understanding even from generated code, and the organizations that restrict AI on their most critical work will fall behind competitors who don’t. This is a real argument.
The answer isn’t to dismiss it but to sharpen what “critical work” means. And, to recognize that the interrogative use of AI that the research identifies as understanding-preserving requires organizational discipline that most teams haven’t built yet. The quadrant isn’t permanent. The threshold shifts as both AI capability and human oversight practices mature. The discipline is the habit of asking both questions honestly before you start, not a fixed answer to them.
The quadrant tells you where to be careful. How you engage AI once you’re there determines whether careful is enough. The difference between “write me this function” and “explain why you made this trade-off, and what breaks if the input is malformed” is the difference between borrowing intelligence and developing it. Active, interrogative AI use preserves comprehension. Passive delegation destroys it. That’s what the Anthropic study’s behavioral data shows directly.
Match your review process to the quadrant. AI-generated docs and test scaffolding get a spot-check. AI-generated code touching your core product logic gets the same scrutiny as a junior engineer’s first PR. The bar for approval isn’t “tests pass.” It’s “someone on this team can explain what this does, defend it under pressure, and use that understanding to make it better.” Full automation needs a spot-check. Human-led craftsmanship needs an RFC, a team review, and shared ownership of the reasoning before anyone writes a line of code.
This matters especially in real-time data and AI infrastructure, systems where the most dangerous failure modes are emergent, appearing at scale and under load in combinations the code itself doesn’t express. Recognize that the threshold will shift. As AI capability improves, what belongs in the automate quadrant expands. The discipline isn’t a fixed answer. It’s the habit of asking both questions honestly before you start. It’s a core reason Redpanda is designed for simplicity and predictability: engineers need to be able to reason about how infrastructure behaves under pressure, not discover it during an incident.18
The companies that get this right won’t be the ones that use the most AI or the least. They’ll be the ones whose leaders have internalized that risk and differentiation are independent variables, and that cognitive debt threatens both.
The engineer who doesn’t know how their algorithm works is a symptom. The organization that allowed it is the cause.
Treat cognitive debt as only a risk problem and you end up with engineers who can’t diagnose failures they didn’t build. Treat it as only a differentiation problem and you get fragile systems that survive until the next incident. Let it accumulate on your most critical systems and you get both at once.
Your competitor is making this calculation right now. The question isn’t whether to use AI. It’s whether you’re being honest about which quadrant you’re in, and whether your team will know the answer when it finally matters.
Co-authored with Claude (Anthropic). Yes, we took the advice from this article.
Security review of Plasma Login Manager (SUSE Security Team Blog) [LWN.net]
SUSE's Security Team has published a detailed blog post on their recent review of the Plasma Login Manager version 6.6.2, which was forked from the SDDM display manager.
While most of the code remains the same, the new upstream added a privileged D-Bus helper called plasmaloginauthhelper, which suffers from defense-in-depth security issues.
[...] Based on the high severity of the defense-in-depth issues shown in this report, our assessment is that there is effectively no separation between root and the plasmalogin service user account.
At this time there is no bugfix available by upstream, but a security fix is planned for the next Plasma release on May 12. We have not been involved in upstream's bugfix process so far and have no knowledge about the approach that will be taken to address the issues from this report.
Security updates for Wednesday [LWN.net]
Security updates have been issued by AlmaLinux (firefox, gdk-pixbuf2, java-17-openjdk, libxml2, python3, python3.11, python3.12, sudo, and webkit2gtk3), Debian (dnsdist, node-tar, pdns, pdns-recursor, and policykit-1), Fedora (chromium, edk2, and vim), Oracle (firefox, gdk-pixbuf2, go-toolset:rhel8, libpng12, LibRaw, libxml2, python, python3, python3.11, python3.12, python3.12-wheel, vim, webkit2gtk3, xorg-x11-server, xorg-x11-server-Xwayland, yggdrasil, and yggdrasil-worker-package-manager), Red Hat (container-tools:rhel8, delve, git-lfs, go-rpm-macros, grafana, grafana-pcp, osbuild-composer, and rhc), SUSE (bouncycastle, clamav, container-suseconnect, dovecot22, erlang, firefox, fontforge, freerdp2, ghostscript, giflib, gnome-remote-desktop, go1.25, go1.26, google-guest-agent, haproxy, ignition, ImageMagick, kernel, libcap, libpng16, libraw, librsvg, mariadb, openexr, pocketbase, protobuf, python-Pillow, python-requests, qemu, rust1.94, sudo, tomcat, tomcat10, tomcat11, webkit2gtk3, and xen), and Ubuntu (dotnet10, dovecot, linux-nvidia-lowlatency, node-follow-redirects, openssh, packagekit, python-cryptography, python-tornado, ruby-rack-session, ujson, and wheel).
League of Canadian Superheroes – Issue 5 – 15 [Comics Archive - Spinnyverse]
The post League of Canadian Superheroes – Issue 5 – 15 appeared first on Spinnyverse.
A Whale of a Problem [The Daily WTF]
From our Anonymous submitter:
Our company creates graphs to visualize data. We have many small fish customers, but we have one whale whose use of our product accounts for 90% of company revenue. (WTF number 1.)
So if he is not happy, it's all-hands-on-deck mode.
He complained that our APIs and charts are loading slowly for him. For 3 weeks, we've tried a TON of optimizations, including WTF 2: spinning up a special server he alone can hit.
Today, we found out that he's always complaining when he's in his car, driving from home to the office. But since he "totally has the best wifi money can buy," that isn't worth investigating.
WTF 3: thinking wifi and data are always 100% reliable in a car driving around.
Our submitter highlights one of the major pitfalls of the so-called whale client: if they're a bad client, you're in for an extra-bad time.
As I lean harder into freelancing, I'm learning to scan the waters ahead of me for potential whales. My goal is to build up multiple small, diverse income streams, because I've had my own dangerous encounters with whales in the past.
At one employer of mine, there was Facebook, who acted as if they were our new owners rather than a new customer. They'd already produced flashy marketing videos of the sorts of solutions they planned to implement with our software, showing people delighted with the results. In meetings, these things were talked up as amazing game-changers. Meanwhile, I found all the things Facebook wanted to do horribly creepy and invasive.
Even worse, Facebook began dictating how our award-winning technical support should change to accommodate their whims, up to and including having a dedicated toady—er, support rep—who did nothing but field Facebook-related tickets, similar to a technical account manager (TAM).
That was the last straw for me. I left that company before I was forced to deal with any of Facebook's crap.
My second whale sighting occurred at a startup that'd landed Porsche, far and away their biggest client ever. All of a sudden, our timeline for adding new features and fixing bugs became Porsche's honey-do list. All of a sudden, the platform frequently crashed and became unusable for everyone because it couldn't handle the amount of traffic Porsche (and their clients) hurled at it.
On the other hand, there were several times in that startup's existence when a big wad of promised funding failed to materialize. Porsche kept the business afloat and literally kept my lights on.
I find it less than ideal to be at any company's mercy. I want a world that would neither spawn whales nor millions of startups named Sploink, Dink, and Twangle that promise to bring the power of AI to your dinner fork.
Have your own epic whaling adventures? Share with us in the comments!
Alternating Current [George Monbiot]
If this crucial circulation system shuts down, the civilisational impacts will be irreversible. So why isn’t it a top priority?
By George Monbiot, published in the Guardian 23rd April 2026
The poor and middle pay taxes, the rich pay accountants, the very rich pay lawyers – and the ultra-rich pay politicians. It’s not an original remark, but it bears repeating until everyone has heard it. The more money billionaires accumulate, the greater their control of the political system – which means they pay less tax, which means they accumulate more, which means their control intensifies.
They reshape the world to suit their demands. One of the symptoms of the pathology known as “billionaire brain” is an inability to see beyond their own short-term gain. They would sack the planet for a few more stones on the pointless mountain of wealth. And we can see it happening. Last week delivered the biggest news of the year so far, perhaps the biggest news of the century. But partly because billionaires own most of the media, most people never heard it. We might find ourselves committed to a civilisation-ending event before we even learn that such a thing is possible.
The news is that the state of a crucial oceanic circulation system has been reassessed by scientists. Some now believe that, as a result of climate breakdown changing the temperature and salinity of seawater, it is more likely than not to collapse. This system – known as the Atlantic meridional overturning circulation (Amoc) – delivers heat from the tropics to the North Atlantic. Recent research suggests that if it shuts down, it could cause both a massive drop in average winter temperatures in northern Europe and drastic changes in the Amazon’s water cycles. This could help tip the rainforest into cascading collapse and trigger further disaster.
Amoc’s shutdown is likely also to cause an acceleration of sea level rise on the east coast of the US, threatening cities. It could also raise Antarctic temperatures by roughly 6C and release a vast pulse of carbon currently stored in the Southern Ocean, accelerating climate catastrophe.
Even when the countervailing effects of generalised global heating are taken into account, a further paper proposes, the net impact in northern Europe would be periods of extreme cold – including events in which temperatures in London fall to -19C, in Edinburgh to -30C and in Oslo to -48C. Sea ice in February would extend as far as Lincolnshire. Our climate would change drastically, with the likelihood of far greater extremes, such as massive winter storms. Rain-fed arable agriculture would become impossible almost everywhere in the UK.
This shift, on any realistic human scale, would be irreversible. Its speed is likely to outrun our ability to adapt. Amoc shutdowns, driven by natural climate variability, have happened before. But not in the era of large-scale human civilisation.
The first paper proposing that Amoc might have an on-state and an off-state was published in 1961. Since then, many studies have confirmed the finding and explored potential triggers and likely implications. Until recently, Amoc collapse caused by human activity fell into the category of a “high impact, low probability” event, devastating if it happens, but unlikely to occur.
Research over the past few years prompted a reassessment: it began to look more like a “high impact, high probability” event. Now, in response to last week’s paper, Prof Stefan Rahmstorf – perhaps the world’s leading authority on the subject – says the chances of a shutdown look like “more than 50%”. We could pass the tipping point, he says, “in the middle of this century”.
So why is this not all over the news? Why is it not the top priority for the governments that claim to protect us from harm? Well, in large part because oligarchic power has championed a model of climate impact that bears little relation to reality: that is, they have a hypothesis about how the world works that is completely detached from scientific findings. This model underpins official responses to the climate crisis.
It began with the work of the economist William Nordhaus, who sought to assess the economic effects of global heating. His modelling suggests that a “socially optimal” level of heating is between 3.5C and 4C. Most climate scientists see a temperature rise of this kind as catastrophic. Even 6C of heating, Nordhaus suggests, would cause a loss of just 8.5% of GDP. Climate science suggests it would look more like curtains for civilisation.
As the eminent economists Nicholas Stern, Joseph Stiglitz and Charlotte Taylor have argued, the mild effects Nordhaus forecasts are merely artefacts of the model he has used. For example, his modelling assumes that catastrophic risks do not exist and that climate impacts rise linearly with temperature. There is no climate model that proposes such a trend. Instead, climate science forecasts nonlinear impacts and greatly escalating risk.
The likely impacts of high levels of heating include the inundation of major cities, the closure of the human climate niche (the conditions that sustain human life) across large parts of the globe, the collapse of the global food system and cascading regime shifts – that is, abrupt transitions in ecosystems – releasing natural carbon stores, potentially leading to a “hothouse Earth” in which very few survive. Never mind a few points off GDP: there would be no means of measurement and scarcely an economy to measure.
Bizarrely, the modelling also applies discount rates to future people: their lives, it assumes, are worth less than ours. In other words, it has taken a method used to calculate returns to capital and applied it to human beings. As the three economists point out, “it is very difficult to find a justification for this in moral philosophy.” Moreover, climate impacts disproportionately affect the poor – but under the models, their lives are also priced down.
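To make the discounting concrete: at an illustrative 3% annual discount rate (a figure chosen here for arithmetic simplicity, not one taken from Nordhaus's model), a harm suffered 100 years from now is weighted by \(1.03^{-100} \approx 0.05\); a life a century hence counts for roughly a twentieth of a life today.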
Unsurprisingly, models of this kind, Stern, Stiglitz and Taylor note, have been seized on by “special interests” such as the fossil fuel industry to argue for minimal responses to the climate crisis. And it’s not just the oil companies. Bill Gates, who claims to want to protect the living planet, has given $3.5m (£2.6m) to a junktank run by Bjorn Lomborg, who has built his career on promoting Nordhaus’s model, thus helping to downplay the need for climate action. Nordhaus was awarded the Nobel Memorial prize for economics for his pernicious nonsense – and it is deeply embedded in government decision-making.
A billionaire death cult has its fingers around humanity’s throat. It both causes and downplays our existential crisis. The oligarchs are not just a class enemy but, as they have always been, a societal enemy: a few thousand people can destroy civilisations. It’s the billions v the billionaires, and the stakes could not possibly be higher.
www.monbiot.com
Claude Mythos Has Found 271 Zero-Days in Firefox [Schneier on Security]
That’s a lot. No, it’s an extraordinary number:
Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser. We wrote previously about our collaboration with Anthropic to scan Firefox with Opus 4.6, which led to fixes for 22 security-sensitive bugs in Firefox 148.
As part of our continued collaboration with Anthropic, we had the opportunity to apply an early version of Claude Mythos Preview to Firefox. This week’s release of Firefox 150 includes fixes for 271 vulnerabilities identified during this initial evaluation.
As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus. For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it’s even possible to keep up.
Our experience is a hopeful one for teams who shake off the vertigo and get to work. You may need to reprioritize everything else to bring relentless and single-minded focus to the task, but there is light at the end of the tunnel. We are extremely proud of how our team rose to meet this challenge, and others will too. Our work isn’t finished, but we’ve turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively.
They’re right. Assuming the defenders can patch, and push those patches out to users quickly, this technology favors the defenders.
News article.
Photoshopping the package [Seth's Blog]
I bought a snack food the other day, and was disappointed to discover that the thing inside the container had little in common with the picture on the front. It was pallid, lifeless and drab.
The marketer who decided to improve the picture was making a choice, one with consequences. When you choose to disappoint a customer later so you can make a sale right now, you’ve also chosen to create disappointment for a living.
If you’re not proud of it, don’t serve it. Improving the image on the package shouldn’t be a substitute for making something people want to buy.
The Rich Roe of Wisdom [Penny Arcade]
New Comic: The Rich Roe of Wisdom
Girl Genius for Wednesday, April 29, 2026 [Girl Genius]
The Girl Genius comic for Wednesday, April 29, 2026 has been posted.
Urgent: Big media consolidation [Richard Stallman's Political Notes]
US citizens: call on Congress to block Paramount from consolidating the main US news media.
US citizens: Join with this campaign to address this issue.
To phone your congresscritter about this, the main switchboard is +1-202-224-3121.
Please spread the word.
Cannabis law [Richard Stallman's Political Notes]
The bully is eager to reclassify marijuana. The change in regulations he wants would make it easier to do research on medical uses, but would not relieve the threats and restrictions on people who actually use marijuana or its derivatives.
Movie review used to accuse protester of being terrorist [Richard Stallman's Political Notes]
*I Wrote a Movie Review. Cops Took It From a Protester's Home to Make the Case That He's a Terrorist.*
Argentina's President ordered government to invest in cryptocurrency [Richard Stallman's Political Notes]
Argentina's President Milei imitated another right-wing extremist president by accepting a personal payoff for ordering the country's government to invest in cryptocurrency.
Some schools want to remove personal computers from classrooms [Richard Stallman's Political Notes]
Some schools, and some US states, want to get rid of personal computers for students in the classroom, for educational reasons.
It is too bad they don't appreciate the injustice of the software in those computers, because that is a separate reason for doing the same thing. The two reasons are not entirely independent — the fact that the software is nonfree is part of the explanation for why it does bad things. But they are independent enough that, together, they can broaden the base of the argument.
We need to bring these two converging movements together.
Columbia's Center on Global Energy Policy [Richard Stallman's Political Notes]
*Columbia's Center on Global Energy Policy (CGEP) describes itself as an independent organization producing research on energy policy.* In fact, it gets millions of dollars from oil companies, and what it "produces" is obtained wholesale from them.
Smaller fraction of people post on "social media" in UK [Richard Stallman's Political Notes]
In the UK, a social change is occurring: a smaller fraction of people post regularly on "social media".
People who formerly loved Twitter and the idea of "social media" say that there is no such thing any more, and they miss it.
I don't personally know what it is they miss. I never used Twitter because that required running nonfree JavaScript code, and I refuse on principle to do that. I could not even see individual postings there, until Nitter provided a way to do that without submitting to nonfree software. But ex-Twitter killed off the API which made that possible.
Everyone’s an Engineer Now [Radar]
Cat Wu leads product for Claude Code and Cowork at Anthropic, so she’s well-versed in building reliable, interpretable, and steerable AI systems. And since 90% of Anthropic’s code is now written by Claude Code, she’s also deeply familiar with fitting these tools into routine day-to-day work. Last month, Cat joined Addy Osmani at AI Codecon for a fireside chat on the future of agentic coding (and, equally important, agentic code review), how Anthropic actually uses the tools they’re building, and what skills matter now for developers.
Boris Cherny initially built Claude Code as a side project to test Anthropic’s APIs. Then he shared the tool in a notebook, and within two months the entire company was using it. That organic growth, Cat said, was part of what convinced the team it was worth releasing externally.
But what really made that internal adoption visible was the response on Anthropic’s internal “dog-fooding” Slack channel. The Claude Code channel gets a new message every 5 to 10 minutes around the clock, and this feedback directly and immediately informs the product experience. Cat described it this way:
We hire for people who love polishing the user experience. And so a lot of our engineers actually live in this channel and find when there’s issues with new features that they’ve worked on and they proactively lay out the fixes.
The team ships new versions of Claude Code to internal users many times a day. The feedback loop is tight enough that it functions as a continuous integration system for product quality, not just code quality.
Cat told Addy how she once accidentally introduced a small interaction bug between prompts and auto-suggestions. But by the time she started working on a solution, she found another team member had already beaten her to it. It turns out, he had set up a scheduled task in Claude Code to scan the feedback channel for anything that hadn’t been responded to in 24 hours and open a PR for it. Since Cat hadn’t gotten to it yet (whoops!), her teammate’s Claude saw the unaddressed issue and fixed it for her. And Cat only found out when “[her own] Claude noticed that his Claude had already landed a change.”
The infrastructure for rapid improvement, in other words, is now partly automated. The agents are writing the code, then monitoring the feedback and closing the loop.
There’s no question that AI-assisted coding has created a boom in output. Anthropic engineers are producing roughly 200% more code than they were a year ago, Cat noted. Today the main constraint is reviewing all that code to ensure it’s production-ready.
Cat’s team concluded that you can buy a lot of additional robustness for not that much extra cost.
We opted for the heaviest, most robust version [of code review]. We actually plot how many agents and how comprehensive of a review Claude does and then how many bugs does it recall. And we picked a number of very high recall and decided we should ship this, because if you really want AI code review to be a load-bearing part of your process, you actually probably just want the most comprehensive possible review.
The review agent doesn’t just look at the diff. It traces code across multiple files and catches bugs in adjacent code that has nothing to do with the change in question. Cat gave two examples. One was a ZFS encryption refactor where the agent found a key cache invalidation bug that wasn’t related to the author’s change at all but would have invalidated it. The other was a routine auth update that turned out to have a bad side effect, caught premerge. In both cases, engineers manually reviewing the code likely would have missed the bugs.
The human review that remains is deliberately small in scope. For most PRs, the human reviewer skims for design principle violations and obvious problems and assumes functional correctness has been handled. Five to ten agents run in parallel, each given slightly different tasks, returning independently and then deduplicating what they found.
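As a rough sketch of that fan-out-and-deduplicate shape (this is not Anthropic's implementation; RunReviewAgent and the string-based findings are invented stand-ins for illustration), the pattern might look like this in C++:

#include <future>
#include <iostream>
#include <set>
#include <string>
#include <vector>

// Invented stand-in for one review agent with its own focus area;
// a real version would call out to a model API and parse its findings.
std::vector<std::string> RunReviewAgent(const std::string& focus)
{
    return { "finding from " + focus + " review" };
}

// Launch several differently-focused reviews in parallel, then merge
// the results; the set drops findings reported by more than one agent.
std::set<std::string> ReviewInParallel(const std::vector<std::string>& focuses)
{
    std::vector<std::future<std::vector<std::string>>> tasks;
    for (const auto& focus : focuses) {
        tasks.push_back(std::async(std::launch::async, RunReviewAgent, focus));
    }
    std::set<std::string> deduped;
    for (auto& task : tasks) {
        for (auto& finding : task.get()) {
            deduped.insert(std::move(finding));
        }
    }
    return deduped;
}

int main()
{
    for (const auto& f : ReviewInParallel({"security", "concurrency", "style"})) {
        std::cout << f << '\n';
    }
}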
The cultural shift that made this work, though, was ownership. The team moved to a model where the engineer who authors a PR owns it end to end, including postdeploy bugs, and doesn’t lean on peer reviewers to catch mistakes. “Otherwise,” as Cat pointed out, “you have situations where junior engineers put out a bunch of PRs and then your senior engineers are like drowning in AI-generated stuff where they’re not sure how thoroughly it’s been tested.”
Full ownership meant the AI review had to actually be trustworthy, which drove the decision to go for high recall rather than a lighter touch. That said, engineers are still expected to understand every line of code an agent creates… for now. As Cat explained, it’s the only way to truly prevent “unknown security vulnerabilities and to be able to quickly respond to incidents if they are to happen.”
Cowork, Anthropic’s agent tool for nontechnical users, is the company’s attempt to take what Claude Code does for engineers and bring it to knowledge work more broadly. Cat sketched a picture of someone looking at five or six agent tasks running simultaneously in a side panel, managing a fleet of agents the way a senior engineer manages a PR queue.
In the nearer term, she’s keeping tabs on the shift toward people using Claude Code to build things for themselves, their teams, or their families that wouldn’t have justified professional development effort or “otherwise been possible.” The prototypical examples are the garage project, the family expense tracker, the tool that a small team actually needs but that no SaaS product quite addresses. Cat’s goal and hope is that Claude Code helps people “solve their own problems for themselves” and “stewards a new future of personal software.”
More people building more software is unambiguously good. Boris Cherny has even floated the idea that coding as we know it is “solved.” But what does that mean for the craft of software engineering? Cat’s read of the current moment is more nuanced:
I think pre-AI, the skills that were very important were being able to take a spec and implement it well. And I think now the really important skill is product taste. Even for engineers. Can you use code to ingest a massive amount of user feedback? Do you have good intuition about which feature to build to address those needs, because it’s often different than exactly what users are asking you for? And then, when Claude builds it, are you setting up the right bar so that what you ship people actually love?
Cat’s not alone in highlighting the importance of taste in a world where code is a commodity. Steve Yegge, Wes McKinney, and many others, myself included, see taste and judgment as a uniquely human value. This has practical implications for how engineers should spend their time now, and for what the next generation needs to learn.
For junior engineers specifically, Cat described a progression: Start by using Claude Code to understand the codebase (ask all the “dumb questions” without embarrassment), take those answers to a senior engineer for calibration, and then close the loop by updating the CLAUDE.md with whatever was missing.
Think of Claude Code as your intern that you’re trying to level up. Like, teach it back to Claude. Add a /verify slash command. Put it in the CLAUDE.md or the agent README. Approach this as senior engineers helping you level up, and then you helping Claude and other agents level up.
The improvement process, in other words, should be bidirectional. Engineers get better at using the tools and the tools get better through the engineers’ accumulated knowledge. And significantly, this process keeps humans firmly in the loop, playing a role that’s “active, continuous, and skilled.”
You can watch Cat and Addy’s full chat, plus everything else from AI Codecon on the O’Reilly learning platform. Not a member? Sign up for a free 10-day trial, no strings attached.
The Open Social Web Needs Section 230 to Survive [Deeplinks]
If you want to overthrow Big Tech, you’ll need Section 230. The paradigm shift being built with the Open Social Web can put communities back in control of social media infrastructure, and finally end our dependency on enshittified corporate giants. But while these incumbents can overcome multimillion-dollar lawsuits, the small-host revolution could be picked off one by one without the protections offered by 230.
The internet as we know it is built on Section 230, a law from the 90s that generally says internet users are legally responsible for their own speech — not the services hosting their speech. The purpose of 230 was to enable diverse forums for speech online, which defined the early internet. These scattered online communities have since been largely captured by a handful of multi-billion dollar companies that found profit in controlling your voice online. While critics are rightly concerned about this new corporate influence and surveillance, some look to diminishing Section 230 as the nuclear option to regain control.
The thing is, that would be a huge gift to Big Tech, and detrimental to our best shot at actually undermining corporate and state control of speech online.
We’re fed up with legacy social media trapping us in walled gardens, where the world's biggest companies like Google and Meta call the shots. Our communities, and our voices, are being held hostage as billionaires’ platforms surveil, betray, and censor us. We’re not alone in this frustration, and fortunately, people are collaborating globally to build another way forward: the Open Social Web.
This new infrastructure puts the public’s interest first by reclaiming the principles of interoperability and decentralization from the early internet. In short, it puts protocols over platforms and lets people own their connections with others. Whether you choose a Fediverse app like Mastodon or an ATmosphere app like Bluesky, your audience and community stay within reach. It’s a vision of social media akin to our lives offline: you decide who to be in touch with and how, and no central authority can threaten to snuff out those connections. It’s social media for humans, not advertisers and authoritarians.
Behind that vision is a beautiful mess of protocols bringing open social media to life. Each protocol is a unique language for applications, determining how and where messages are sent. While this means there is great variety to these projects, it also means everyone who spins up a server, develops an app, or otherwise hosts others’ speech has skin in the game when it comes to defending Section 230.
Section 230 protects freedom of expression online by protecting the US intermediaries that make the internet work. Passed in 1996 to preserve the burgeoning communities of the young internet, 230 enshrined important protections for free expression and the ability to block or filter speech you don’t want on your site. One portion is credited as the “26 words that created the internet”:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In other words, this bipartisan law recognizes that speech online relies on intermediaries — services that deliver messages between users — and holding them potentially liable for any message they deliver would only stifle that speech. Intuitively, when harmful speech occurs, the speaker should be the one held accountable. The effect is that most civil suits against users and services based on others' speech can quickly be dismissed, avoiding the most expensive parts of civil litigation.
Section 230 was never a license to host anything online, however. It does not protect companies that create illegal or harmful content. Nor does Section 230 protect companies from intellectual property claims.
What Section 230 has enabled is the freedom and flexibility for online communities to self-organize. Without the specter of one bad actor exposing the host(s) to serious legal threats, intermediaries can moderate as they see fit or even defer to volunteers within these communities.
The superpower of decentralized systems like the Fediverse is the ability for thousands of small hosts to each shoulder some of the burdens of hosting. No single site can assert itself as a necessary intermediary for everyone; instead, all must collaborate to ensure messages reach the intended audience. The result is something superior to any one design or mandate. It is an ecosystem that is greater than the sum of its parts, resilient to disruptions, and enables free experimentation with different approaches to community governance.
The open social web’s kryptonite, though, is the liability participants can face as intermediaries. A greater potential for liability comes with more interference from powerful interests in the form of legal threats, more monetary costs, and less space for nuance in moderation. And in practice, participants may simply stop hosting to avoid those risks. The end result is that only the biggest and most resourced options can survive.
This isn’t just about the hosts in the Open Social Web, like Mastodon instances or Bluesky PDSes. In the U.S., Section 230’s protections extend to internet users when they distribute another person’s speech. For example, Section 230 protects a user who forwards an email with a defamatory statement. On the open social web, that means when you pass along a message to others through sharing, boosting, and quoting, you’re not liable for the other user’s speech. The alternative would be a web where one misclick could open you up to a defamation lawsuit.
Section 230 applies to the infrastructure stack, too: Internet service providers, content delivery networks, and domain or hosting providers. Protections even extend to the new experimental infrastructures of decentralized mesh networks.
Beyond the existential risks to the feasibility of indie decentralized projects in the United States, weakening 230 protections would also make services worse. Being able to customize your social media experience from highly-curated to totally laissez-faire in the open social web is only possible when the law allows space for private experiments in moderation approaches. The algorithmically driven firehose forced on users by antiquated social media giants is driven by the financial interests of advertisers, and would only be more tightly controlled in a post-230 world.
Laws aimed at changing 230 protections put decentralized projects like the open social web in a uniquely precarious position. That is why we urge lawmakers to take careful consideration of these impacts. It is also why the proponents and builders of a better web must be vigilant defenders of the legal tools that make their work possible.
The open social web embodies what we are protecting with Section 230. It’s our best chance at building a truly democratic public interest internet, where communities are in control.
You kind of get a sense when the platform vendor is going to compete with you instead of work with you. That's how big companies usually work with independent devs. There is a bigger picture, developers who might build on WordPress as opposed to inside. Now probably no one is going to try, maybe not even me. Or maybe we will. :-)
Apple wants to kill your Time Capsule, but they run NetBSD so they can’t [OSnews]
It seems like Apple is finally going to remove support for AFP from macOS, twelve years after first moving from AFP to SMB for its default network file-sharing technology. This change shouldn’t impact most people, as it’s highly unlikely you’re using AFP for anything in 2026. Still, there is one small group of people to whom this change has an actual impact: owners of Apple’s Time Capsule devices. Time Capsules only support AFP and SMB1, and with SMB1 being removed from macOS ages ago, and now AFP being on the chopping block as well, macOS 27 would render your Time Capsule more or less unusable.
It’s important to note that the last Time Capsule sold by Apple, the fifth generation, was released in 2013, and the product line as a whole was discontinued in 2018. If you bought a Time Capsule in the twilight years of the line’s availability, I think you have a genuine reason to be perturbed by Apple cutting you off from your product if you upgrade to macOS 27, but at least you have the option of keeping an older version of macOS around so you can keep interacting with your Time Capsule. It still feels like a bit of a shitty move though, as those fifth generation models came with up to 3TB of storage, which can still serve as a solid NAS solution.
Thank your lucky stars, then, that open source can, as usual, come to the rescue when proprietary software vendors do what they always do and screw over their customers. Did you know every generation of Time Capsule actually runs NetBSD, and that it’s trivially easy to add support for Samba 4 and SMB3 authentication to your Time Capsule, thereby extending its life expectancy considerably? TimeCapsuleSMB does exactly that.
If the setup completes successfully, your Time Capsule will run its own Samba 4 server, advertise itself over Bonjour (show up automatically in the “Network” folder on macOS), and accept authenticated SMB3 connections from macOS. You should then be able to open Finder, choose Connect to Server, and use a normal SMB URL instead of relying on Apple’s legacy stack. You should also be able to use the disk for Time Machine backups.
↫ TimeCapsuleSMB
It’s compatible with both NetBSD 4 and NetBSD 6-based Time Capsules, although you’ll need to run a single SMB activation command every time a NetBSD 4-based Time Capsule reboots. This will also disable any AFP and SMB1 support, but that is kind of moot since those are exactly the technologies that don’t and won’t work anymore once macOS 27 is released. The installation is also entirely reversible if, for whatever reason, you want to undo the addition of Samba 4.
This whole saga is such an excellent example of why open source software protects users’ rights, by design.
Remembering Seth Nickell [LWN.net]
LWN has received the sad news that Seth Nickell passed away, on April 16, from his father, Eric Nickell:
Many of you knew Seth from his work in the GNOME Usability Project, but his roots in that community trace back to his high school years. As a father of a high school junior, I remember being terrified when he flashed the hard drive of a computer he purchased for himself with this weird "Linux" thing. And I was a bit awed by the college application essay he wrote about open source and Linus Torvalds.
It was his interest in packet radio that drew him into working with the Linux AX.25 HOWTO as a high schooler, and from there to his focus on making the Linux desktop work for everyone.
The family plans to share news of a memorial at a later time. He will be deeply missed.
The Big Idea: Marie Vibbert [Whatever]

Though humans have a strong desire to be individuals, slightly stronger is our innate need not to be alone. Humans are not solitary creatures, so why do we try so hard to act like we are all just individuals with no ties or connections to those around us? Author Marie Vibbert wonders if we wouldn’t all be better off as a hive mind in the Big Idea for her newest novel, Multitude.
MARIE VIBBERT:
Over 11,000 tons of discarded clothing lie in the Chilean desert. These are garments that never sold, from low-end and high-end brands alike, and almost entirely made of synthetic fabrics: rayon, polyester, acrylic. It’s a major environmental problem. The clothes catch fire, leak chemicals and microplastics, and just… keep coming.
Meanwhile, in Scotland, they are looking for new, industrial applications for wool because this renewable clothing resource that doesn’t spontaneously combust sits rotting in warehouses, unable to compete with the subsidized price of polyester.
Humanity has a problem. A communication problem that creates wasted effort and wasted resources. Food being thrown out while people starve. Diseases like cholera running rampant when their cures exist. I could go on and on with examples. Why can’t we put our efforts where they are needed? Why do our systems dictate so much cruel irony?
When you look at humanity as a whole, we are tearing ourselves apart, starving ourselves, killing ourselves. We don’t seem to understand that we are us?
These were my thoughts going into a project whose first note was: The Borg, but friendly?
I thought it would be a short story. Something quick. Get in and get out. A hive mind comes to Earth, tries to communicate with humans as a hive, fails, and sees what a mess we are. Nudge the reader toward empathy, toward seeing problems between “us” and “them” as an insufficient definition of “us.” I figured it’d come in at about 2,000 words. But the more I thought about it, the bigger the problem became. How to show the perspective? How to encompass humanity and then move the camera back to show us in perspective?
How do we look, to a hive mind? What would they expect?
Humans are, in many ways, a collective creature. A single human can no more build a skyscraper than a single ant can build a mound. Even writing a novel is a collective act, when you consider that this language that I am using is a vast collection of consensuses on symbols, meaning, and parsing. English, on a certain level, is a stack of inside jokes passed down and expanded every generation.
Beyond that, every work of fiction builds on and reacts to those that came before. I am writing in a genre, science fiction, defined by all the works labelled as such, and in turn defined by the pressures and uncertainties of our society that caused the first authors to write things not of this world, the first readers to like that and want to emulate it, and on, and on.
I was on a panel at WorldCon on Hive Minds in Science Fiction when it occurred to me that an assumption I hadn’t seen tackled yet was that collectivism automatically meant a repression of individuality. It seems an easy conclusion? If my family votes democratically on dinner, my individual desire to eat nothing but spaghetti every night is subordinated. Yet, the four of us are still individuals as we enjoy my spouse and child’s preferred chicken and rice.
Why wouldn’t a hive mind contain room for the individual? Does a Borg stop loving spaghetti once it absorbs the thoughts of thousands of chicken fans? Wouldn’t it be more of a conversation than a dictatorship? If it’s truly collective, why would there be dictators? And, come to think of it, don’t we, as large groups, change our opinions over time? Americans once ate more chipped beef on toast than chicken fingers. We thought the Edwardian S-bend corset and the mullet were great ideas. We went from loving elephant leg jeans to skinny jeans. Collectively. Like an individual goes through phases of loving fly fishing or obsession with one particular series of books, societies go through a group fondness for orange or dark wood paneling.
At the risk of making this blog post nothing but rhetorical questions, why do we assume innovation is a characteristic of the individual? Why do we assign conformity to the collective alone?
I tried to imagine myself a hive-member. Many advantages came immediately to mind. I wouldn’t have had to gamble on picking a college major; I’d have access to the needs of the society around me to help find work that was needed. I wouldn’t be competing for the access to share my stories, I’d just tell them, and my hive would hear them and like them or not.
Competition is not just the “healthy” activity of small businesses or inventors, of students seeking academic awards. It’s also war. All around the world, humans are killing humans so that they can avoid sharing resources. Humans are defining others, drawing lines around some of their siblings and excluding others, to limit access to resources. Yet to a non-human observer, we are one species, one sprawling community, alike in our needs and wants and behaviors.
And humans can be so kind, too.
In 2023, I traveled to New York City to get a visa to attend my first Hugo awards as a nominee, and as I sat in Central Park waiting for my appointment, admiring the unnatural warmth of the post-climate-change day, I saw a middle-aged man patiently leading a group of elderly people. He looked so happy. I dashed off four pages in my journal about him, imagining his life taking care of elders. I wondered why my science fiction stories weren’t as easy or as fun as simple character portraits. I enjoyed the flashes of lives I’d seen in short stories by Mary Grimm or Maureen McHugh, or the prose poems of Mary Biddinger.
I used to love to climb into a character’s head and walk around, show her worries and fears and daily chores, and then I’d show my work to science fiction writers and be told I had no plot, or perhaps I was “just” a poet. Because of this critique, I chose to wall off the desire to write the way that came most naturally, eschewing character-study and stream-of-consciousness in favor of sentences that “did something.” (My own term.) I began to focus on ideas, on technology, on concrete consequences and violent action.
Eventually, I got pretty good at it, good enough to feel its limitations. I opened up my old “plotless” stories and found them not so plotless, after all. Rather, they reflected my own sense of helplessness as a teen and early-twenties writer, and that point of view was uninteresting to the science fiction editor of the 90s and 2000s, who focused on competent characters moving the plot by choice.
At the young age of 47, I revised one of those 20-year-old “plotless” stories and sold it to a market paying the Science Fiction and Fantasy Writers Association’s professional rate of eight cents a word. Not to brag. (Yes, to brag). In some ways, the genre itself has moved on from rigorously espousing action and certainty from its heroes, but also, I had learned how to structure a story through the mechanics of action, and this helped me see the similar structuring of non-action-based stories.
Part of the literary legacy my writing depends on is science fiction’s desire for logical, action-driven plots, but the origins of this project are the literary flash fiction piece, rooted in character and moment, and my desire to return to it, now that I have proven myself in the plot mines.
Which brings us back to the beginning: How better to show the individual in the collective of humanity than through a series of very short point of view pieces? The result is an introspective novella I wrote in thousand-word chunks around other projects. More than any other book I’ve written, I feel naked in its pages, exposing my deepest, most personal self. I felt free to do this because it was something I thought would never sell: too literary, too experimental.
Well, I sent it to Apex Books and they disagreed. I hope you enjoy, and be kind to my Space Cephalopods.
—-
Multitude: Amazon|Barnes & Noble|Bookshop
Developing a cross-process reader/writer lock with limited readers, part 1: A semaphore [The Old New Thing]
Say you want to have the functionality of a reader/writer lock, but have it work cross-process. The built-in SRWLOCK works only within a single process. Can we build a reader/writer lock that works across processes?
For convenience, let’s say that you want to support a maximum of N simultaneous readers, for some fixed value N. We can do this:
The idea for the write lock is that it’s accomplished by claiming all the read locks, thereby ensuring that nobody else can get a read lock.
#define MAX_SHARED 100

HANDLE sharedSemaphore;

void AcquireShared()
{
    WaitForSingleObject(sharedSemaphore, INFINITE);
}

void ReleaseShared()
{
    ReleaseSemaphore(sharedSemaphore, 1, nullptr);
}

void AcquireExclusive()
{
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        WaitForSingleObject(sharedSemaphore, INFINITE);
    }
}

void ReleaseExclusive()
{
    ReleaseSemaphore(sharedSemaphore, MAX_SHARED, nullptr);
}
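The article doesn’t show where sharedSemaphore comes from. For the lock to work across processes, it has to be a named kernel object that every participating process opens. A minimal sketch, with a made-up name (InitSharedSemaphore and the name are illustrative, not part of the original):

// Create the semaphore, or open it if another process created it
// first (CreateSemaphoreW on an existing name returns a handle to
// the same object). Both counts are MAX_SHARED, so every read
// token starts out available.
void InitSharedSemaphore()
{
    sharedSemaphore = CreateSemaphoreW(
        nullptr,               // default security attributes
        MAX_SHARED,            // initial count: all tokens free
        MAX_SHARED,            // maximum count
        L"Local\\DemoRwLock"); // hypothetical shared name
}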
Since we are using WaitForSingleObject, we can also add a timeout, so that the caller can decide to abandon the operation if they can’t claim the lock.
bool AcquireSharedWithTimeout(DWORD timeout)
{
    return WaitForSingleObject(sharedSemaphore, timeout) == WAIT_OBJECT_0;
}

bool AcquireExclusiveWithTimeout(DWORD timeout)
{
    DWORD start = GetTickCount();
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        DWORD elapsed = GetTickCount() - start;
        if (elapsed > timeout ||
            WaitForSingleObject(sharedSemaphore, timeout - elapsed) == WAIT_TIMEOUT) {
            // Restore the tokens we already claimed.
            if (i > 0) {
                ReleaseSemaphore(sharedSemaphore, i, nullptr);
            }
            return false;
        }
    }
    return true;
}
Exclusive acquisition is tricky because we have to call WaitForSingleObject multiple times, with decreasing timeouts as time passes. If we run out of time, then we need to give back the tokens we had prematurely claimed.
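For illustration, a caller might use the timed exclusive acquire like this (a hypothetical usage sketch, not from the original article):

// Try to take the write lock for up to five seconds.
if (AcquireExclusiveWithTimeout(5000)) {
    // All MAX_SHARED tokens are held; no reader can be inside.
    // ... mutate the shared state ...
    ReleaseExclusive();
} else {
    // Timed out. Any tokens grabbed along the way were given back,
    // so readers and other writers are unaffected.
}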
There’s still a problem here. We’ll look at it next time.
The post Developing a cross-process reader/writer lock with limited readers, part 1: A semaphore appeared first on The Old New Thing.
Claude unlearns things that we had settled a long time ago. It fumbles around with a process, making it worse with every iteration, the same fumbling it did five days ago when it initially learned how to do what it can't do now. Usually when I regress in software, I am responsible for it; I did something to break it. But here's a tool that's capable of derailing us with me doing nothing new. In that way it behaves more like an imperfect human than a GIGO machine.
The GUARD Act Isn’t Targeting Dangerous AI—It’s Blocking Everyday Internet Use [Deeplinks]
Lawmakers in Congress are moving quickly on the GUARD Act, an age-gating bill restricting minors’ access to a wide range of online tools, with a key vote expected this week. The proposal is framed as a response to alarming cases involving “AI companions” and vulnerable young users. But the text of the bill goes much further, and could require age gates even for search engines that use AI.
If enacted, the GUARD Act won’t just target a narrow category of risky chatbots. It would require companies to verify the age of every user — then block anyone under 18 from interacting with a huge range of online systems. It would block minors from everyday online tools, undermine parental guidance, and force adults to sacrifice their privacy. In the process, it would require services to implement speech-restricting and privacy-invasive age-verification systems for everyone—not just kids.
Under the GUARD Act’s broad definitions, a high school student could be barred from asking homework help tools questions about algebra problems. A teenager trying to return a product could be kicked out of a standard customer-service chat.
The concerns behind this bill are serious. There have been troubling reports of AI systems engaging in harmful interactions with young users, including cases involving self-harm. Those risks deserve attention. But they call for targeted solutions, like better safeguards and enforcement against bad actors, not sweeping restrictions. The bill’s sponsors say they’re targeting worst-case scenarios — but the bill regulates everyday use.
The problem starts with how the bill defines an “AI chatbot.” It covers any system that generates responses that aren’t fully pre-written by the developer or operator. Such a broad definition sweeps in the basic functionality of all AI-powered tools.
Then there’s the definition of an “AI companion,” which minors are banned from using entirely. An AI companion is any chatbot that produces human-like responses and is designed to “encourage or facilitate” interpersonal or emotional interaction. That may sound aimed at simulated “friends” or therapy chatbots. But in practice, it’s much fuzzier.
Modern chatbots are designed to be conversational and helpful. A homework helper might say “good question” before walking a student through a problem. A customer service chatbot may respond empathetically to a complaint (“I’m sorry you’re having this problem.”) A general-purpose assistant might ask follow-up questions. All of these could be seen as facilitating “interpersonal” interaction — and triggering the GUARD Act.
Faced with steep penalties and unclear boundaries, companies are unlikely to take chances on letting young people use their online tools. They’ll block minors entirely or strip their tools down to something less useful for everyone. The result isn’t a narrow safeguard—it’s a broad restriction on everyday online interactions.
Start with a student getting help with homework. Under the GUARD Act, the service must verify the user’s age using more than a simple checkbox—it must rely on a “reasonable age verification” measure, which could require a government ID or a third-party age-checking system. If the system decides a user is under 18, the company must decide if its tool qualifies as an “AI Companion.” If there’s any risk it does, the safest move is to block access entirely.
The same logic applies to everyday customer service. A teenager trying to fix an order issue gets routed to a chatbot, and the company faces a choice: build a full age-verification system for a routine interaction, or restrict access to avoid liability. Many will choose the latter.
This isn’t a narrow restriction aimed at a few risky products. It’s a compliance regime that pushes companies to block or limit any product that generates text for minors, across the board.
The GUARD Act doesn’t just affect minors. The bill takes a big step towards an internet that only works when users are willing to upload a valid ID or comply with other invasive age-verification schemes. Companies must verify the age of every user—not through a simple self-declaration, but through a “reasonable age verification” system tied to the individual.
In practice, that means collecting sensitive personal information: government IDs, financial data, or biometric identifiers. Companies can outsource verification, but they remain legally responsible. And the law requires ongoing verification, so this isn’t a one-time check. Worse, studies consistently show that millions of people have outdated information on their IDs, such as an old address, or do not have government ID. Should services require ID, many folks without current or any ID will be shut out.
And for those who do have compliant ID, turning over this information repeatedly creates obvious risks. Databases of sensitive identity information become targets for breaches. Anonymous or pseudonymous use of online tools becomes harder or impossible.
To keep minors away from certain chatbots, the GUARD Act would require everyone to prove who they are just to use basic online tools. That’s a steep tradeoff. And it doesn’t actually address the specific harms the bill is supposed to solve.
The GUARD Act’s broad scope is enforced with steep penalties. Companies can face fines of up to $100,000 per violation, enforced by federal and state officials. At the same time, key terms like “AI companion” rely on vague concepts such as “emotional interaction.” That combination will lead to overblocking. Faced with legal uncertainty and serious liability, companies won’t parse small distinctions. They’ll restrict access, limit features, or block minors entirely.
That is the unfortunate result of the GUARD Act: even though the concerns animating it are real and worth addressing, the bill’s broad terms will apply far beyond the troubling scenarios that inspired it.
In the end, that means a more restricted and more surveilled internet. Teenagers would lose access to tools they rely on for school and everyday tasks. Everyone else faces new barriers, including ID checks. Smaller developers, who aren’t able to absorb compliance costs and legal risk, would be pushed out, leaving the largest companies even more dominant.
Young people — and all people — deserve protection from genuinely harmful products. But this bill doesn’t do that. It trades away privacy, access, and useful technology in exchange for a blunt system that misses the mark.
Congress could act soon. Tell them to reject the GUARD Act.
New version of XML-RPC package for JavaScript. It now handles POST messages that don't have a body.
Fedora Linux 44 has been released [LWN.net]
The Fedora Project has announced the release of Fedora Linux 44. There are "what's new" articles for Fedora Workstation, Fedora KDE Plasma Desktop, and Fedora Atomic Desktops. The Fedora Asahi Remix for Apple Silicon Macs, based on Fedora 44, is also available. See the Fedora Spins page for a full list of alternative desktop options.
Fedora Linux 44 Workstation ships with the latest GNOME release, GNOME 50. This comes with a long list of refinements to your desktop, including everything from accessibility to color management and remote desktop. Many of the applications that are installed by default on Fedora Workstation have also seen improvements, from Document Viewer to File Manager and Calendar. To learn more about these and other changes, you can read the GNOME 50 release notes.
KDE Plasma Desktop: If you are a KDE user, you should also notice a couple of very obvious changes. Fedora KDE Plasma Desktop 44 is based on the latest Plasma 6.6, which includes the new Plasma Login Manager and Plasma Setup to provide a more cohesive and integrated experience from the moment the computer is powered on for the first time. The installation process has been simplified, enabling you to easily set up Fedora KDE Plasma Desktop on a computer for a friend or a loved one.
The release notes include important changes between Fedora 43 and Fedora 44 for desktop users, developers, and system administrators.
[$] Strawberry is ripe for managing music collections [LWN.net]
There are dozens of music-player applications for Linux; the options range from bare-bones programs that only play local files to full-blown music-management projects with a full suite of tools for managing (and playing) a music collection. Strawberry is in the latter category; it has a bumper crop of features, including smart playlists, support for editing music metadata tags, the ability to organize music files, and more.
Moseying Around Cincinnati’s Asian Food Festival [Whatever]
I still have more posts to do over
my trip to Colorado (I cannot seem to get through that dang
trip!), but I wanted to post about my experience at
Cincinnati’s Asian Food
Festival because it just happened this past weekend and I
thought some fresh content was a good way to get me into a writing
mood.
I was so excited for this festival. I had it on my calendar for two whole months prior because I couldn’t wait for it. I told multiple friends about it out of excitement. I ended up going with Kayla, Brad, and Bryant, and we went on Saturday, since it’s only a two-day festival and Saturday just worked better for everyone than Sunday.
The Cincinnati Asian Food Festival has been going on for fifteen years, and this past year surpassed 125,000 attendees, with over 60 different vendors. Most of these are food and drink vendors, but there are also some other goods for sale and even a ZYN station set up, just in case you really needed your nicotine fix.
I am sad to say I didn’t have a super positive experience at the festival, despite my initial excitement for it. As you can imagine from hearing the words “125,000 attendees,” it was very crowded. On one hand, I’m happy that something like an Asian Food Festival would be a popular event and that all these businesses are getting a ton of traffic, but on the other hand, when you cram that many people into a three block radius, it gets very difficult to walk around.
Long lines impede the flow of foot traffic (what little flow there is) because they jut right out into the street everyone is trying to walk down. Every line to order is at least twenty minutes long, and then you have to wait to actually receive your food. If you’re with your friends you will absolutely lose them in the crowd unless you’re literally holding hands. You will get shoulder checked by multiple people and almost kick a pug you didn’t see. There is absolutely nowhere to sit and eat or even stand and eat. There’s also almost no shade.
For what it’s worth, these issues are not limited to just the Asian Food Festival. This is pretty much all food festivals ever. And I go to a fair amount of them. I’m honestly very tired of these issues, and I feel like the Asian Food Festival just so happened to be the straw that broke the camel’s back. You can’t have a literal food festival and then have nowhere for people to eat. You need to figure out better line control so people can actually differentiate between the line and the sea of people, and where the end of the line is.
At one point, I ordered something and then tried to move to the “pick up” area to wait for my food, but it was so intensely packed that I couldn’t move from the ordering spot. I tried to step to the side in the other direction but was met with another wall of people. The cashier ended up telling me to move, and I got frustrated because I was actively trying to, but there was nowhere to move to! Like, yes I am well aware of the line behind me, I promise I’m not just standing at the register for fun.
I mean look at this!

Imagine trying to get through this if you have a stroller, or are in a wheelchair? You’re gonna have to run someone over if you want through. There were so many points where literally just nobody was moving. Like a traffic jam, but just people standing completely still and there’s no way around anyone. So you just stand there and wait a few minutes until you can continue taking tiny-half-shuffle-steps and try not to step on the back of the shoes of the person in front of you.
Also, I know you’re probably thinking that I just happened to go during the busiest time. Well, it was open from 11am to 10pm on Saturday, and I got there at 11:45am and left at 7pm. So I was there for a hot minute. I’m sure 9pm might’ve been less crowded, but I’m also sure a lot of places would be sold out or closing down for the night by then to prep for Sunday.
Okay, so now that I’ve gotten my population qualms and lack of seating issues out of the way, let’s talk about the actual food and drinks I got.
Oh, I almost forgot, parking in a public lot nearby was $30. So, that fucking sucked. And, yes, there are more financially savvy options of taking the bus or walking, but I live two full hours away from the Court Street Plaza where it was held, so yeah, I need somewhere to park my dang car.
It always takes me a couple passes of everything to figure out what I want to try first. I knew I wanted to start off with a coffee, and Lotus Street Foods had a Thai Iced Coffee for six dollars:

Bryant so kindly modeled my beverage for me because I was holding the actual food item I got from Lotus. Here’s their Asian fried jerky for nine dollars:

I actually really liked the flavor of the jerky. It had a sticky, sort of sweet glaze, but it was definitely hard to bite through and chew. Wasn’t quite the same texture as jerky but wasn’t the same texture as regular meat. The rice was unfortunately cold and extremely bland. Great flavor on the meat though!
For the coffee, I would’ve liked a little more condensed milk in it. It wasn’t quite creamy enough for my taste and was just a little too plain coffee-y flavored. I like a sweeter, creamier coffee though, so I know I’m not the best judge of coffee when it actually tastes like coffee. I just think the balance was a little off. And for what it’s worth this wasn’t my first time trying this drink, so I have some sweeter ones I’ve had in the past to compare it to.
Kayla really wanted to try the elote from LALO Chino Latino, especially since it wasn’t listed on their online menu that it was going to be offered:

She said it was totes delish last year, but sadly this elote missed the mark this time around so bad that she barely ate half. She let me try a bite and yeah, it was rough. The corn itself was cold and had no flavor, and was tough and almost rubbery in texture. It felt like something you shouldn’t actually be chewing on. The sauce was lackluster, and honestly if the corn itself isn’t good then the dish isn’t going to be good no matter what you put on top. So that was unfortunate.
However, I did get the Vietnamese Birria Beef Taco from them for six dollars, and their horchata coffee, also for six dollars:

I didn’t finish the Thai coffee, so I was hoping this horchata coffee was going to be the redeeming caffeine fix of the day. While I did like the horchata coffee better than my first coffee, I can confidently say it was totally lacking in horchata flavor. There were some notes of cinnamon in there, but I would not go so far as to label this as “horchata” coffee. Kayla got one too and agreed that it’s more like if you added a little bit of cinnamon to a regular latte. So that was a little disappointing.
As for the birria taco, it was so good! I know you can’t see the inside, but there was plenty of tender birria, and the cilantro and onion on top was nice and fresh. The consommé had a lot of good flavor, the outside was golden brown, and I was wishing I had got a second one.
The next place I stopped was Evolve Bake+Shop. Though it was only about 1:30, this stand was almost completely sold out of baked goods. By the time I did another once-through of the street, they were sold out and had gone back home to bake more goodies for Sunday. The owner was so sweet and apologetic, but honestly I’m thrilled for them that they sold out so quick. I managed to get my hands on two of their few remaining cookies: their gluten-free ube crinkle cookie, and their strawberry matcha oatmilk cookie for four dollars each:

I actually didn’t know until I looked them up on Instagram for this post, but all their baked goods are 100% vegan/plant-based! It’s nice to know there are some vegan options at the festival.
I shared the ube cookie with everyone, and the consensus was that it was pretty good, but the gluten-free aspect of it made the mouthfeel just a little bit odd. Gluten-free stuff tends to have that sort of sandy texture sometimes. But it was dense and had good flavor.
As for the strawberry matcha cookie, I had that all to myself (as I am writing this post) and it was the bomb dot com! It’s super moist and soft, and has a great balance of sweetness and earthy matcha flavor. I think these cookies were well worth the four dollars. Evolve also won Best Desserts for the third year! I’m glad for them.
For years, it has been a dream of mine to try Tang Hu Lu. If you don’t recognize the name, I’m willing to bet you’ll recognize it when you see it. It’s hard to mistake the glassy, shiny, iconic strawberries on a stick. I got this Tang Hu Lu from Tenji Sushi for ten dollars:

I was a tiny bit disappointed by the presentation of this, because the pictures they had of it showed it having mandarin orange slices and more grapes, so only getting one grape and no orange slices was a bit of a letdown, but honestly I can’t be too mad because these strawberries were so good. They were juicy and sweet and perfectly firm without being that hard unripe texture. If you’ve ever had an urge to eat glass shards and not get hurt, this is the perfect food for you. The glassy sugar coating shatters apart and crunches so damn good, sort of like rock candy. I do think ten dollars was a lot for four strawberries and one grape, but at least I finally got to try the street food I’ve always wanted to.
There was no shortage of different Asian cuisines that were represented at this festival, including Indian dishes. Kayla ended up getting these chicken lollipops and cheesy naan bites from Khaao Macha, who were the Best of Yums winner last year:

I didn’t try the chicken, but Kayla said it was good (I did sniff it and it smelled like Taco Bell’s mild sauce packets). I did try some of the naan and it was definitely yummy. I mean, you really can’t go wrong with cheesy naan. The chicken was ten dollars and you got two of them, and the naan was seven dollars. I would say the naan was sizeable for the price, and good for sharing.
At this point, we took a little break on food and watched some of the free entertainment on the main stage:

I think taiko drums are cool so this was really awesome to see, and then there was a Nepali dance performance right after this. It was very neat to see different cultures’ traditions and performances. I like that the entertainment is free and they have such a variety of performances.
Back to snacking, I finally got to try my most anticipated item from the online vendor menu, Chhnagnh’s Pot Ang (roasted corn with sweet coconut sauce). I also tried their lemongrass beef skewer, and Kayla got their chicken skewer. The skewers were six dollars each and the corn was seven.

I can honestly say I’ve never had Cambodian food before, but this looked very promising. I absolutely loved the corn, it was roasted so perfectly and had great flavor. The coconut sauce wasn’t really giving coconut, but it was sweet and creamy so at least it added some texture and flavor, and weirdly enough the green onion went really well with it all. It just added a nice fresh component without overpowering anything flavor-wise.
Kayla let me try her chicken skewer and it was pretty good but the chicken was just a little dry. The beef was so delish though. It had just the right amount of lemongrass flavor in it without being overwhelming and was very tender and warm. This was my favorite savory food I tried all day.
The last thing I ate was from Fusako, and I hate to totally bash a place, but y’all. What I was presented with was egregious.
Here’s the menu on their truck:

This looked so good and impressive. Everything looked filling and decked out in garnishes and sauces and I had high hopes. I got the Mexican street corn gyoza, which was supposed to be crispy fried dumplings stuffed with sweet corn, with cotija cheese, a chili-lime aioli, lime zest, and green onion. Sounded amazing. Here’s what I got for eight dollars:

Two tiny gyoza, covered in a mess of sauce and corn, with no lime zest or green onions in sight. It looked so haphazardly thrown together. It was totally cold and the gyoza were tough instead of crispy. The entire thing lacked flavor, and the wait was so long. I was really disappointed.
I hated to leave on an L, but it was getting late.
Oh, and earlier in the day I had a really terrible yuzu mule for ten dollars.
In total, I spent $88 before tip (I bought Kayla’s chicken skewer and a Thai coffee for Bryant), and usually I just chose the 15% tip option, but I’m not gonna do all that math. We’ll just say around a hundred bucks.
Overall, I just wasn’t really impressed with the food or drinks I had throughout the day. There were some good things, but the crowds, the prices, and the lack of seating just kind of made for a less than ideal experience. They clearly need to open up more blocks for the festival to spread out.
I always get so excited for food truck festivals, and I keep being let down by them. Is it me? Am I the problem? Am I just not cut out for the food truck lifestyle? I hate waiting in lines and I hate standing to eat. I don’t prefer fast, casual service, and I usually like my food to come on real dishes. Oh no. Maybe it is me.
Huge shout out to the Library Square public library for keeping me from having to use a Porta-Potty. Very happy to use actual toilets and wash my fucking hands. And get some AC for a second.
I am glad I got to experience something new and hang out with my friends, but I think I won’t return next year unless they implement some kind of crowd management or cap tickets.
What sounded the best to you? Have you been to any of the previous years of the festival? Let me know in the comments, and have a great day!
-AMS
A question I'd like to put out there. Maybe AI needs the
massive data centers now, but they could definitely get more
efficient over time. There might be another Moore's Law in there.
And the work is going very fast, and maybe they're leaving other
optimizations for later. Take a look at how computers themselves
have gotten more efficient since I started in the 1970s. It
was a miracle that I could buy a computer to put in my living room
in 1979. A couple of years before that I had a 100 pound terminal
that I could lug cross-town to show my grandfather. We may end up
with a lot of unused data centers and energy generation capacity.
But that's how great evolutionary steps work. You go where you're
called to go. We are a big Ouija board. This stuff is really
important: we're going to remove layers and layers of tech, get to
the answer sooner and more easily, and empower people with much
less tech education than we have to do the good parts of what we
do, the fun stuff. There's art in the lower-level stuff too, but in
tech we like to bury that stuff and forget it's even there. That's
how we get to build more complex machines that do more. By pushing
the repetitive complex stuff into the pipes. If this were parallel
to the development that led to smart phones, we're at the point
where we have the glass palaces with huge cooling systems, and
maybe Fortran has been invented, but it might still be machine
code.
This week is being spent, among other things, teaching
Claude how to write code that fits in with my library of apps. I
like this. It's like a painter telling an assistant the rules for
adding to the sculpture. Art has been practiced like that for a
long time. Anyway, here's an example of my side of a workflow where
we're getting its dialog management code, which works fine, so that it
fits in with the other code. "these all look good, and the last one
is most important, we don't need a blob of html to be there before
you run, you create the dom structures you need. this may seem
inefficient, but it makes it much easier to add a new element, or
even more complex changes. that won't matter much to you, but when
a human is editing it matters a lot. simplicity makes work flow
better and reduces chance of being detoured by a bug that has to be
found and fixed." I didn't edit that at all. I am also teaching it
why things work the way they do because of differences between
machines and humans. I'm learning a lot about our strengths and
weaknesses from seeing how it would work, left to its own devices (i.e.
no human-edited code base, just AI-edited).
In Memoriam: Tomáš Kalibera [LWN.net]
We have received the sad news that Tomáš Kalibera, a member of the R Project core team, has passed away after a short illness.
A friend who knew him well wrote to me: he was very happy, and his work fulfilled him. That is, perhaps, the best thing one can say about a life in open source — that the work mattered, that it reached millions, and that the person who did it found meaning in it.
Kalibera was mentioned in this 2019 article about C programs passing strings to Fortran subroutines. He will be greatly missed.
Security updates for Tuesday [LWN.net]
Security updates have been issued by Debian (openjdk-21 and webkit2gtk), Fedora (botan3, chromium, cockpit, firefox, flatpak, gum, libarchive, libcoap, mingw-python3, ngtcp2, nss, openssh, openssl, openvpn, PackageKit, python3-docs, python3.11, python3.12, python3.13, python3.14, vim, and xrdp), Oracle (firefox, gdk-pixbuf2, java-1.8.0-openjdk, java-21-openjdk, python3.12, python3.9, sudo, and tigervnc), Red Hat (tigervnc and xorg-x11-server-Xwayland), Slackware (mpg123 and proftpd), SUSE (emacs, firefox, fontforge, freeciv, freerdp, libngtcp2-16, libsystemd0, and strongswan), and Ubuntu (authd, clamav, glance, haproxy, jq, lcms2, nginx, nltk, ntfs-3g, packagekit, pillow, strongswan, and vim).
All FOSDEM 2026 videos are online [LWN.net]
FOSDEM's organizers have announced that all of the video recordings "worth publishing" from FOSDEM 2026 are now available.
Videos are linked from the individual schedule pages for the talks and the full schedule page. They are also available, organised by room, at video.fosdem.org/2026.
LWN's coverage of talks from FOSDEM 2026 can be found on our conference index.
When Correct Systems Produce the Wrong Outcomes [Radar]
We tend to assume that if every part of a system behaves correctly, the system itself will behave correctly. That assumption is deeply embedded in how we design, test, and operate software. If a service returns valid responses, if dependencies are reachable, and if constraints are satisfied, then the system is considered healthy. Even in distributed systems, where failure modes are more complex, correctness is still tied to the behavior of individual components. In modern AI systems, particularly those combining retrieval, reasoning, and tool invocation, this assumption is increasingly stressed under continuous operation.
This model works because most systems are built around discrete operations. A request arrives, the system processes it, and a result is returned. Each interaction is bounded, and correctness can be evaluated locally. But that assumption begins to break down in systems that operate continuously. In these systems, behavior is not the result of a single request. It emerges from a sequence of decisions that unfold over time. Each decision may be reasonable in isolation. The system may satisfy every local condition we know how to measure. And yet, when viewed as a whole, the outcome can be wrong.
One way to think about this is as a form of behavioral drift: systems that remain operational but gradually diverge from their intended trajectory. Nothing crashes. No alerts fire. The system continues to function. And still, something has gone off course.
The root of the issue is not that components are failing. It is that correctness no longer composes cleanly. In traditional systems, we rely on a simple intuition: If each part is correct, then the system composed of those parts will also be correct. This intuition holds when interactions are limited and well-defined.
In autonomous systems, that intuition becomes unreliable. Consider a system that retrieves information, reasons over it, and takes action. Each step in that process can be implemented correctly. Retrieval returns relevant data. The reasoning step produces plausible conclusions. The action is executed successfully. But correctness at each step does not guarantee correctness of the sequence.
The system might retrieve information that is contextually valid but incomplete or misaligned with the current task. The reasoning step might interpret it in a way that is locally consistent but globally misleading. The action might reinforce that interpretation by feeding it back into the system’s context. Each step is valid. The trajectory is not. This is what behavioral drift looks like in practice: locally correct decisions producing globally misaligned outcomes.
In these systems, correctness is no longer a property of individual steps. It is a property of how those steps interact over time. This breakdown is subtle but fundamental. It means that testing individual components, even exhaustively, does not guarantee that the system will behave correctly when those components are composed into a continuously operating whole.
To understand why this happens, it helps to look at where behavior actually comes from. In many modern AI systems, behavior is not encoded directly in a single component. It emerges from the interaction of several elements: the model, the context assembled for it, the tools it invokes, and the feedback from its own actions.
Each of these elements operates with partial information. Each contributes to the next state of the system. The system evolves as these interactions accumulate. This pattern is especially visible in LLM-based and agentic AI systems, where context assembly, reasoning, and action selection are dynamically coupled. Under these conditions, behavior is dynamic and path dependent. Small differences early in a sequence can lead to large differences later on. A slightly suboptimal decision, repeated or combined with others, can push the system further away from its intended trajectory.
This is why behavior cannot be fully specified ahead of time. It is not simply implemented; it is produced. And because it is produced over time, it can also drift over time.
Modern observability systems are very good at telling us what a system is doing. We can measure latency, throughput, and resource utilization. We can trace requests across services. We can inspect logs, metrics, and traces in near real time. In many cases, we can reconstruct exactly how a particular outcome was produced. These signals are essential. They allow us to detect failures that disrupt execution. But they are tied to a particular model of correctness. They assume that if execution proceeds without errors and if performance remains within acceptable bounds, then the system is behaving as expected.
In systems exhibiting behavioral drift, that assumption no longer holds. A system can process requests efficiently while producing outputs that are progressively less aligned with its intended purpose. It can meet all its service-level objectives while still moving in the wrong direction. Observability captures activity. It does not capture alignment.
This distinction becomes more important as systems become more autonomous. In AI-driven systems, particularly those operating as long-lived agents, this gap between activity and alignment becomes operationally significant. The question is no longer just whether the system is working. It is whether it is still doing the right thing. This gap between activity and alignment is where many modern systems begin to fail without appearing to fail.
A natural response to this problem is to add more validation. We can introduce checks at each stage: validating retrieved context, checking intermediate conclusions, and verifying actions before they execute.
These mechanisms improve local correctness. They reduce the likelihood of obviously incorrect decisions. But they operate at the level of individual steps.
They answer questions like: Is this input valid? Is this action permitted? What they do not answer is whether the sequence of decisions, taken together, is still heading where we intended.
A system can pass every validation check and still drift. Behavioral drift is not caused by invalid steps. It is caused by valid steps interacting in ways we did not anticipate. Increasing validation does not eliminate this problem. It only shifts where it appears, often pushing it further downstream, where it becomes harder to detect and correct.
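To make that concrete, here is a toy simulation (the numbers are synthetic, invented purely for illustration, not taken from any real system): every step passes its local validity check, yet the trajectory drifts badly.
import random

# Toy illustration: each step passes a local validity check, yet a small
# systematic bias compounds into a large trajectory-level drift that no
# per-step check ever flags.
random.seed(0)
state = 0.0
LOCAL_TOLERANCE = 0.1                      # bound each individual step must satisfy

for _ in range(1000):
    delta = random.uniform(-0.04, 0.06)    # slightly biased, like a subtle misreading
    assert abs(delta) <= LOCAL_TOLERANCE   # local validation: every step is "valid"
    state += delta                         # the bias accumulates across the sequence

print(f"drift after 1000 locally valid steps: {state:.2f}")   # roughly 10
Every assertion passes; the failure exists only at the level of the whole trajectory.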
If correctness does not compose automatically, then what determines system behavior? Increasingly, the answer is coordination. In traditional distributed systems, coordination refers to managing shared state, ensuring consistency, ordering operations, and handling concurrency. In autonomous systems, coordination extends to decisions.
The system must coordinate what is retrieved, how it is interpreted, which actions are taken, and in what order.
This coordination is not centralized. It is distributed across models, planners, tools, and feedback loops. In agentic AI architectures, this coordination spans model inference, retrieval pipelines, and external system interactions. The system’s behavior is not defined by any single component. It emerges from the interaction between them.
In this sense, the system is no longer just the sum of its parts. The system is the coordination itself. Failures arise not from broken components, but from the dynamics of interaction: timing, sequencing, feedback, and context. This also explains why small inconsistencies can propagate and amplify. A slight mismatch in one part of the system can cascade through subsequent decisions, shaping the trajectory in ways that are difficult to anticipate or reverse.
One response to this complexity is to introduce more structure. Control planes, policy engines, and governance layers provide mechanisms to enforce constraints at key decision points. They can validate inputs, restrict actions, and ensure that certain conditions are met before execution proceeds. This is an important step. Without some form of structure, it becomes difficult to reason about system behavior at all. But structure alone is not sufficient.
Most control mechanisms operate at entry points. They evaluate decisions at the moment they are made. They determine whether a particular action should be allowed, whether a policy is satisfied, and whether a request can proceed. The problem is that many of the failures in autonomous systems do not originate at these entry points. They emerge during execution, as sequences of individually valid decisions interact in unexpected ways. A control plane can ensure that each step is permissible. It cannot guarantee that the sequence of steps will produce the intended outcome. This distinction is subtle but important: control provides structure, but not assurance.
Traditional monitoring focuses on events. A request is processed. A response is returned. An error occurs. Each event is evaluated independently. In systems exhibiting behavioral drift, behavior is better understood as a trajectory. A trajectory is a sequence of states connected by decisions. It captures how the system evolves over time. Two trajectories can consist of individually valid steps and still produce very different outcomes. One remains aligned. The other drifts. This represents a shift from failure as an event to failure as a trajectory, a distinction that traditional system models are not designed to capture.
Correctness is no longer about individual events. It is about the shape of the trajectory. This shift has implications not just for how we monitor systems, but for how we design them in the first place.
If failure manifests as drift, then detecting it requires a different set of signals. Instead of looking for errors, we need to look for patterns: gradual shifts in outputs, growing divergence from intent, feedback loops that keep reinforcing the same interpretation.
These signals are not binary. They do not indicate that something is broken. They indicate that something is changing. The challenge is that change is not always failure. Systems are expected to adapt. Models evolve. Data shifts. The question is not whether the system is changing. It is whether the change remains aligned with intent. This requires a different kind of visibility, one that focuses on behavior over time rather than isolated events. Once drift is identified, the system needs a way to respond. Traditional responses (restart, rollback, stop) assume failure is discrete and localized. Behavioral drift is neither.
What is needed is the ability to influence behavior while the system continues to operate. This might involve constraining the action space, adjusting decision selection, introducing targeted validation, or steering the system toward more stable trajectories. These are not binary interventions. They are continuous adjustments.
This perspective aligns with how control is handled in other domains. In control systems engineering, behavior is managed through feedback loops. The system is continuously monitored, and adjustments are made to keep it within desired bounds. Control is no longer just a gate. It becomes a continuous process that shapes behavior over time.
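As a rough sketch of what such a feedback loop could look like in code (the alignment_score metric, the window size, and the thresholds here are all hypothetical, invented for illustration):
from collections import deque

def alignment_score(decision) -> float:
    # Hypothetical metric: how consistent this decision is with intent
    # (1.0 = fully aligned). A real system would have to define this.
    return decision.get("score", 1.0)

WINDOW = 50                      # how much recent trajectory to consider
history = deque(maxlen=WINDOW)

def supervise(decision) -> str:
    # Evaluate the trajectory, not the single event: a rolling average of
    # recent decisions, with graduated responses rather than a pass/fail gate.
    history.append(alignment_score(decision))
    trend = sum(history) / len(history)
    if trend > 0.9:
        return "proceed"         # trajectory still looks aligned
    if trend > 0.7:
        return "constrain"       # narrow the action space, add targeted validation
    return "steer"               # actively correct back toward the intended trajectory
The specifics would differ from system to system; the point is that the gate becomes a loop.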
This leads to a different definition of reliability. A system can be available, responsive, and internally consistent—and still fail if its behavior drifts away from its intended purpose. Reliability becomes a question of alignment over time: whether the system remains within acceptable bounds and continues to behave in ways consistent with its goals.
If behavior is trajectory-based, then system design must reflect that. We need to monitor patterns, understand interactions, treat behavior as dynamic, and provide mechanisms to influence trajectories. We are very good at detecting failure as breakage. We are much less equipped to detect failure as drift. Behavioral drift accumulates gradually, often becoming visible only after significant misalignment has already occurred.
As systems become more autonomous, this gap will become more visible. The hardest problems will not be systems that fail loudly, but systems that continue working while gradually moving in the wrong direction. The question is no longer just how to build systems that work. It is how to build systems that continue to work for the reasons we intended.
Freexian Collaborators: Monthly report about Debian Long Term Support, March 2026 (by Santiago Ruano Rincón) [Planet Debian]

The Debian LTS Team, funded by [Freexian’s Debian LTS offering](https://www.freexian.com/lts/debian/), is pleased to report its activities for March.
During the month of March, 20 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).
The team released 24 DLAs fixing 250 CVEs.
We also welcomed two new members to the team, Lukas Märdian and Emmanuel Arias, who actually started contributing to the LTS project several months ago.
The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 (“bullseye”), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 (“bookworm”) and Debian 13 (“trixie”)), including Debian unstable. We highlight several notable security updates below.
Contributions from outside the LTS Team:
As usual, the thunderbird update, released as DLA 4511-1, was prepared by its maintainer Christoph Goehre. Many thanks for his continued contributions.
The LTS Team has also contributed with updates to the latest Debian releases:
Sponsors that joined recently are in bold.
CodeSOD: Lint Brush Off [The Daily WTF]
A few years back, C# added the concept of "primary constructors". Instead of declaring the storage for class members and then initializing them in the constructor, you can declare the constructor's parameters right on the class declaration, and C# captures them for you. It's all very TypeScript and very Microsoft, and certainly cuts down on some boilerplate.
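For illustration, a primary constructor looks something like this (a hypothetical sketch, not code from Esben's codebase):
using System.Net.Http;
using System.Threading.Tasks;

// The parameter list sits on the class declaration itself; the compiler
// captures 'client' for use in members, replacing the explicit
// field-plus-constructor boilerplate. (Hypothetical class and method.)
public class LookupService(HttpClient client)
{
    public Task<string> FetchAsync(string url) => client.GetStringAsync(url);
}
The linter rule in question, IDE0290, exists to push you toward that form.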
Esben B's team isn't really using them in many places, but they are using a linter which is opinionated about them. So this in-line constructor causes the linter to complain:
public DocumentNetworkController(ILookupClient service)
The linter wants you to switch this to a primary constructor. Esben didn't want to do that, and didn't want to change the global linter configuration, and so added a pragma to disable that particular warning:
#pragma warning disable IDE0290 // Use primary constructor
public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290
The linter didn't like this. It threw a new warning: that this suppression wasn't needed. Which was news to Esben, as clearly the suppression was needed if you wanted to make the warnings go away. The obvious solution was to disable the warning that you didn't need to disable the warning:
#pragma warning disable IDE0079, IDE0290 // Use primary constructor
public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290, IDE0079
Except this doesn't work. These pragmas take effect on the next
line, which means you can't disable IDE0079 on the
same line as IDE0290 and expect it to work. Which
means the final version of the code looked like this:
#pragma warning disable IDE0079 // Disable warning about not needed supression
#pragma warning disable IDE0290 // Use primary constructor
public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290, IDE0079
Esben writes:
So the nice recommendation to use a primary ctor ended up with 3 lines of annoying boilerplate code. Good times \o/
While yes, this is frustrating, I will say there's an element of "when the table saw keeps taking fingers off, that may be more of a you problem." I don't know the details, so I can't say, "just change the linter config or adopt its recommendation" and claim that the problem goes away, but when the tool hurts you, it's a definite sign of one of two things: it's either the wrong tool, or you're using it wrong.
Yes, I did say new shape! It is pretty incredible that there can be any such thing as a new shape, I mean, how do we not know all the shapes already? Also, to be fair, it is a reasonable bet that the ancient Greeks knew of this and forgot to write anything down. But certainly in my lifetime - this is indeed a new shape. Discovered in 2023!
So what makes it special? There is, of course, a whole wikipedia article on what is called The Einstein Problem, but I'll try and explain it simply.
This shape tessellates. Basically that means you can tile your bathroom wall with it - the shape fits together with itself to cover a surface with no gaps. Lots of shapes do this: squares, triangles, hexagons, and so on. You can rotate the tiles if needed (e.g. for triangles you have to). There are many that do not, such as regular pentagons and circles. You cannot tile a wall with circles without leaving gaps.
So what? Well, most tessellating shapes create a repeating pattern. Hexagons make a familiar honeycomb pattern for example. But with the Spectre you can make a pattern that does not repeat. In fact, you cannot make a repeating pattern with it at all, no matter how hard you try. Yes, some groups of tiles may appear the same in other places but even then these do not form a regular pattern, at any level.
There is some debate over the rules - this was, it seems, a competition. The rules allowed you to turn a tile over. The researchers created a shape called the "Hat" which worked, but some of the tiles had to be turned over. People, quite reasonably, said "If I want to tile my bathroom wall I have to buy two sets of tiles". So the researchers came out with the "Spectre" a year later, which works without turning over tiles. In fact, if you can turn over tiles, you can make a repeating pattern with it.
But basically, until this was discovered, nobody knew whether a forced aperiodic tessellating monotile even existed. That is what makes it a new shape.
You can now tile your bathroom wall with one type of tile and it is a non repeating pattern.
Well, you could just try placing tiles randomly where they fit, but you quickly end up with gaps that are not Spectre shaped, and have to backtrack and try again.
However, there is an algorithm, published by Simon Tatham, here. I'd like to thank him for his work, though I have a word of caution if you want to use his paper. I also appreciate, as a coder, the counting from 0 all the way through.
It just so happens I had an idea how to use this shape, for reasons which will become apparent later this year I hope. So I wanted to code generating a surface covered with these tiles. Long story short - here it is, open source on Codeberg.
But this took me a couple of days, which is a long time for me, so let me explain the issues.
The principle is simple: a recursive set of meta tiles, which are groups of tiles in a pattern (represented as hexagons in the paper).
You can start at the top, pick a meta tile from a set of 9 different types, and that tells you how to place 7 or 8 sub-tiles in a honeycomb pattern, and their types (from the set of 9) and orientation. You repeat as far as you want, and at the last level you actually have Spectre tiles, not hexagons.
You can also start at the bottom with a Spectre, and decide which of the meta tile types it shall be at random. You can then find a meta tile which includes that type, and it tells you the neighbouring Spectre tiles to place. This is then a meta tile which you can again decide is part of a higher level meta tile randomly, and that tells you what meta tiles to place for its neighbours and work down to Spectre tiles below. So you have one upward process in a loop, and at each point have a recursive downward process placing 6 or 7 neighbours at each level down. This is the approach I took.
If I started at the top I would pick one of 9 meta tiles, and maybe one of 12 orientations, and that would be it: the Spectres under that are determined by the algorithm and are not random at all. By starting at the bottom, I pick one of 12 orientations and place the first tile, and pick one of 9 meta tiles, but at each level as I go up I get to pick which higher meta tile it is in, and in some cases which of two sub-tiles it is. This is random at each level and makes for a much more randomised final output.
The distraction that took most of my time trying to get this working was the rather excellent graphical representations in Simon's paper. They show a hexagon meta tile substituted with 7 or 8 joined hexagon meta tiles, and a hexagon meta tile substituted with a Spectre tile. These diagrams have specific orientation, and rotation, so one is lulled into a sense of simplicity: that you are literally replacing one hexagon with a set of them, each with specific orientation.
Looks pretty, and simple, but this is NOT the case!
The diagrams are actually simply a mapping, a look-up table for what gets joined to what, and on what side. The hexagon has 6 sides; basically, at the lowest level, each Spectre is joined to exactly 6 other Spectre tiles (there is a special case for that G meta tile where it is two Spectre tiles, the others are all one, just to add to the fun). So you have each Spectre tile as having 6 connection sets of edges - but these are not simple, as each of the 9 types of meta tile is a Spectre with a specific set of edges for each of the 6 sides.
The numbering is the key - on the yellow tile there is a side 0, which is actually the three edges 8, 7, and 6 (marked 0.0, 1.0, and 2.0). On the purple, there is a side 4, which is edges 13, 12, and 11 (marked 0.4, 1.4, and 2.4). But side 4 on the yellow tile is only edges 12 and 11 (marked 0.4 and 1.4). But you can see yellow side 0 and purple side 4 would fit together. Some of these 6 sides are one edge (see purple side 3), but can be as many as 6 edges in some cases. Each of the 9 meta tiles has a specific set of edges making up the 6 joint points to other tiles. Each similarly has a set of edges on the hexagon pattern, which is different for each type.
So in practice you connect the defined edges, and they end up nothing like hexagonal tiles. In fact they twist and distort all over the place. The graphical representations are really not helpful in my view, sorry. Also, I would have numbered side.subedge so 1.0, 1.1, 1.2, not 0.1, 1.1, 2.1, personally.
Once I grasped that logic, the code became simple. As I say, you start with one Spectre, and connect neighbours. You only need to know the specific 6 sets of edges for that tile. Then when you use the meta tile rules you know which set of edges that connects to on the neighbouring Spectre. It is pretty simple to then align the new Spectre connected on that edge. Having placed the 7 or 8 Spectres to make a meta tile, you then just need to know the 6 joining points on that meta tile, which are themselves 6 sets of specific Spectre tile edges within the meta tile.
One issue is that these connecting sides span several edges, so I actually picked one end, e.g. for yellow it would be 8, 5, 2, 0, 13, 12, 10, and for purple it would be 8, 5, 3, 0, 13, 10 as the 6 outgoing edges. These are the first edge of each side (numbered 0.x). When placing a Spectre next to one of these, you pick the other ends, so 6, 3, 1, 13, 11, 9 for yellow and 6, 4, 1, 0, 11, 9 for purple, the last edge for each side.
So connecting yellow side 0 to purple side 4 you connect yellow edge 8 to purple edge 11. The 11 being the incoming edge. This means you don't have to think of the sets of edges, just one edge on one Spectre tile for each of the 6 outgoing sides of your meta tile, at any level. This is quite a small amount of data to hold in a simple recursive algorithm.
Another thing I got wrong is that I stored a list of tiles, and referenced them as the 6 sides. Each tile had a starting point and rotation so I could plot it, and align new attached tiles. But this really is not necessary, and ends up using memory. I can plot the tiles (output a path to SVG) as I go, and I just need the 6 sides of a meta tile to be the 6 sets of position, rotation, and outgoing edge number. The only memory usage is a small set of data for each level of recursion. You quickly cover a very large number of tiles at each level (multiply by 8 or 9 each time), so you need very few levels of recursion.
One issue is coordinates. Ultimately the output uses pixels or millimetres to several decimal places, and indeed I allow a final output rotation. But internally all lines on the Spectre tiles are at multiples of 30 degrees. Even so, you do not want to use floating point - rounding errors will accumulate as you recurse and lead to tiles not quite aligning, and it also becomes impossible to test whether two points are the same (why you need this is explained below). So the solution is to use coordinates that are integers! How do you do that with 30 degree angles? Well, simple - each distance is an integer multiple of sin60 plus an integer multiple of cos60 - at the final stage you multiply out and add these. You can also make a simple table of one-unit-distance integers for each 30 degree angle, and a table of the relative integer offsets for each point on a Spectre at each angle. This means no floating point maths, nor sin/cos, until you actually output to SVG.
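Here is a minimal sketch of that integer-coordinate trick (my own illustrative Python, not the actual Codeberg code): every sine and cosine of a multiple of 30 degrees has the form (a + b√3)/2, so each coordinate can be stored as the integer pair (a, b).
import math

# cos(30k degrees) for k = 0..11, stored as (a, b) meaning (a + b*sqrt(3)) / 2.
COS = [(2, 0), (0, 1), (1, 0), (0, 0), (-1, 0), (0, -1),
       (-2, 0), (0, -1), (-1, 0), (0, 0), (1, 0), (0, 1)]
SIN = [COS[(k - 3) % 12] for k in range(12)]   # sin(t) = cos(t - 90 degrees)

def step(point, k):
    # Move one unit edge at angle 30k degrees - pure integer arithmetic,
    # so no rounding error however deep the recursion goes.
    (xa, xb), (ya, yb) = point
    (ca, cb), (sa, sb) = COS[k % 12], SIN[k % 12]
    return ((xa + ca, xb + cb), (ya + sa, yb + sb))

def to_float(point):
    # Multiply out only at the very end, when emitting SVG coordinates.
    (xa, xb), (ya, yb) = point
    return ((xa + xb * math.sqrt(3)) / 2, (ya + yb * math.sqrt(3)) / 2)
Because points are plain integer tuples, the exact equality tests needed for the edge matching described below are just p == q.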
One problem which I don't think Simon's paper covers, and was unexpected, is knowing when to stop! I am trying to cover a rectangular area. How do I know I have got there?! I could just set a maximum level, but with random choices for meta tiles at each level, the whole thing quickly gets way bigger than the area yet ends up one-sided, leaving gaps in your rectangle. If I had gone top down, or always picked the current tile to be in the middle of the meta tile, I could maybe work out a max level, but that is not what I am doing.
After a bit of head scratching I finally worked out a way. I wanted to make a grout line on top of the final tile output so I decided to keep track of all the edges I placed. A simple start/end for each unit length edge in a list. This can be made as I go along, and the integers mean I can always match to an existing edge to plot the grout efficiently as a series of lines.
This also meant I could actually keep two lists - one of first uses of an edge, with an edge moving over to a second list when a tile is attached on the other side.
I could also check each edge I added to a list to see if it falls (even one end) within my target rectangle, and so only keep edges I need.
But this has the side effect that as soon as my list of single-used edges within the rectangle is empty, I must have 100% covered the rectangle, as no edge with a tile on only one side remains in the target area. I can then immediately abort the whole placement process at every level just by checking that the list of single-use edges is empty.
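A small sketch of that bookkeeping (hypothetical names, assuming the integer-tuple endpoints from above, so edges compare and hash exactly):
# Edges seen once vs. twice; endpoints are integer tuples, so exact
# comparison and hashing both work.
once, twice = set(), set()

def canonical(p, q):
    # Order the endpoints so an edge hashes the same from either side.
    return (p, q) if p <= q else (q, p)

def add_edge(p, q, in_rectangle):
    # Keep only edges that touch the target area, as described above.
    if not in_rectangle(p, q):
        return
    e = canonical(p, q)
    if e in once:          # a second tile now borders this edge
        once.discard(e)
        twice.add(e)
    else:                  # first tile on this edge
        once.add(e)

def rectangle_covered():
    # No in-rectangle edge with a tile on only one side means the whole
    # rectangle is tiled and placement can stop.
    return not once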
The final challenge was the edge of the rectangle. Firstly, the SVG has a viewBox, so I can simply plot tiles that fall even slightly within the rectangle, and the same for grout lines. These go off the edge, but you cannot see that when looking at the final SVG.
This has a problem, if you want to use the SVG in another design, as I did, the embedding does not inherently crop the edges. But SVG has an answer for this, clipPath. It allows me to clip an object to a path, a rectangle in this case. Perfect.
The snag is that support for clipPath is not that good. I don't know why, but lots of things mess up, ignoring it, or barfing in some other way. One was my resin printer, which simply ignored the whole block of tiles if it had a clipPath.
So I ended up making a whole path generation set of functions which understood cropping the path to the edge of the rectangle. I could not sleep, and ended up coding this at 2am.
The final result is I can now make an SVG of a randomised set of tessellating aperiodic Spectre monotiles, with loads of options. I even added a sort of bevelled edge to the tiles with a lighting-based shade.
This is shaded per level, showing how the tiles actually join up at a meta level.
What Anthropic’s Mythos Means for the Future of Cybersecurity [Schneier on Security]
Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a limited number of companies.
The news rocked the internet security community. There were few details in Anthropic’s announcement, angering many observers. Some speculate that Anthropic doesn’t have the GPUs to run the thing, and that cybersecurity was the excuse to limit its release. Others argue Anthropic is holding to its AI safety mission. There’s hype and counterhype, reality and marketing. It’s a lot to sort out, even if you’re an expert.
We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.
We’ve written about shifting baseline syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.
The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a while this kind of capability was coming soon. The question is how we adapt to it.
We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more nuanced than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.
Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.
So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also provides us guidance for how to protect such systems in an era of powerful AI vulnerability-finding tools.
Unpatchable or hard-to-verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly updated firewall, not freely talking to the internet.
Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.
This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive AI agents to test exploits against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed. This kind of VulnOps is likely to become a standard part of the development process.
Documentation becomes more valuable, as it can guide an AI agent on a bug-finding mission just as it does developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral instant software—code that can be generated and deployed on demand.
Will this favor offense or defense? The defense eventually, probably, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. Legacy banking and airline systems are networked.
Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal: where verification is paramount and software is patched continuously.
This essay was written with Barath Raghavan, and originally appeared in IEEE Spectrum.
When there is motion, it creates an impact on the environment.
First, the path is barely noticeable. But then, others see the hint of a path and walk on it, making it more clear. Finally, the path becomes the route.
Sometimes there’s a small rut. But a rut shifts gravity and wheels or feet land in the rut, making it deeper. This is how moguls appear on ski hills as well.
When it rains, the paths and ruts fill with water, and we call them puddles.
Of course, puddles are a metaphor.
Puddles only exist when there’s been some sort of motion that caused a depression that could collect the water. If you want to see how the audience is responding, how the culture is shifting, how your customers are acting–look for the puddles.
Fill in the rut and a new one will appear somewhere else. There are almost always puddles.
Abhijith PA: Patience could've saved me time. [Planet Debian]
If I had been patient, it would have saved me time. One such instance follows.
From my early blogs, you might know I use mutt for email. Soon after I got the hang of mutt, I started using notmuch, because limiting searches in mutt is always a pain when you have multiple folders. And what better tool is there to bind these two than notmuch-mutt?
notmuch-mutt provides three macros by default.
macro index <F8> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<shell-escape>notmuch-mutt -r --prompt search<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
"notmuch: search mail"
macro index <F9> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt -r thread<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
"notmuch: reconstruct thread"
macro index <F6> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt tag -- -inbox<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
"notmuch: remove message from inbox"
One for search, one for reconstructing threads, and one for manipulating tags, which is the one I missed.
Now the impatient part. I had already mapped F6 for my folder
movements, and in my early days with notmuch I only used search. So
I never cared about the F6 macro provided by notmuch-mutt. As time
went by, I got very comfortable with notmuch and was stretching my
notmuch legs. I started to live more in the notmuch search results
(date:today tag:unread) than in the mutt index. On to the problem:
since notmuch-mutt dumps all results to a temporary maildir
location, it cannot push flag changes back to the original maildir,
which was annoying, because you need to distinguish which mail you
have read and which you have not when you are subscribed to most of
the Debian mailing lists.
I was under the impression that notmuch-mutt was not capable of doing so, and I just carried on like that without checking the docs. I started doing all sorts of crazy hacks to sync these maildirs.
I even started reading the notmuch-mutt codebase.
Later, I settled on notmuch-vim, because it can sync flag changes from notmuch back to the maildir.
And while searching for something, I accidentally revisited the notmuch-mutt macro page and saw the tag manipulation. I was like :( .
If I had read about the third macro patiently when I added it to my config, I could have saved time by not doing ugly hacks around it.
I think I learned my lesson.
Mustang VixSkin® Review by Jey Pawlik [Oh Joy Sex Toy]
Pluralistic: Vicky Osterweil's "The Extended Universe" (28 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]

Vicky Osterweil's The Extended Universe: How Disney Killed the Movies and Took Over the World makes the kind of long, polemical, startling and illuminating argument that defines great cultural criticism; it's the sort of book that encapsulates the reasons I read criticism in the first place:
https://www.haymarketbooks.org/books/2525-the-extended-universe
My first brush with this kind of criticism came more than two decades ago, when I read John Kessel's now-classic "Creating the Innocent Killer," a critique of Orson Scott Card's Ender's Game, a book I had read and enjoyed enough to re-read several times:
https://johnjosephkessel.wixsite.com/kessel-website/creating-the-innocent-killer
Kessel's argument is that Card used Ender's Game to smuggle in some very ugly ideas, wrapped in a story that was compelling, even exhilarating. In Ender's Game, we meet Andrew "Ender" Wiggin, a small, physically weak boy possessed of a prodigious intellect and a great deal of sensitivity and empathy. Ender is tormented by an escalating series of aggressors, whom he retaliates against with overwhelming force, first to the point of lethality and then all the way to literal genocide. And here's where Card makes his move: Ender's sensitivity and empathy and intellect tell him that he must respond this way, because he can tell that his aggressors will not back off from their intention to harm him; and because Ender is so small and weak, he has to use whatever tactic his brilliant mind can devise, and if that tactic results in the death penalty for mere bullying, well, that's the bully's fault, not Ender's. Indeed, in dying at Ender's hands, these bullies re-victimize Ender, because Ender is a gentle, smart, wise, weak person, and these inescapable murders that he is goaded into committing are a stain on his soul that he can never wash away.
Before reading "Creating the Innocent Killer," I confess I didn't really understand what criticism was for. Like many people, I conflated "criticism" with "reviews," thinking of critical works as a species of inconveniently difficult-to-digest essays that might help me figure out which books to read and which movies to see.
Kessel's magnificent essay changed all that, and not in spite of the fact that Kessel had pointed out some very important problems with a book that I loved, but because of that fact. In helping me understand the ugliness hidden within something whose beauty and virtues I saw very clearly, Kessel taught me more about myself – about where my aesthetics and my values overlapped, and where they diverged. It was literally life-changing.
Like Kessel's essay, Osterweil's 'Extended Universe' deals with media that I have a great deal of affection for – the products of the Walt Disney Company. Though I'm primarily interested in theme parks – I love a big, ambitious built environment of any description and Disney pursues these with a seriousness that few others can touch – the Disney films (and the films of the studios Disney purchased, like Marvel and Lucasfilm) are obviously intimately bound up in those theme park designs.
Osterweil has her own ambivalent affection for these movies. Like so many of us, she's been raised on them, and they've shaped how she sees the world and its stories. But – like me – Osterweil is deeply suspicious of capitalism, American imperialism, and the notion of "intellectual property," and she uses reviews of a dozen Disney films to make the case that Walt Disney and the studio he founded with his brother are standards-bearers for these odious forces, and not just in the overt ways that might immediately spring to mind, but also in subtle ways that can be teased out of a close reading of the films.
In so doing, Osterweil also makes a sharp and well-argued case that intellectual property, colonialism and racial oppression are all facets of the same drive, the drive of people who fancy themselves born to rule to dominate others, which requires that those others also be dehumanized and their work denigrated. When Walt Disney insisted that his be the only name associated with "his" movies, he was playing out the same logic that underpinned his virulent opposition to labor unions and his participation in American imperialism in Latin America.
As with Kessel, Osterweil's argument is full of surprises and illuminations that are especially vivid for those of us who have great affection for these works. As her chapter on Black Panther shows, this contradiction need not go unresolved. There is plenty of scope for fans to seize the reins of the narrative (and as her chapter on the reactionary backlash to the later Star Wars movies shows, it's not just the forces of progress and anti-racism who can pull off this move).
Like the very best criticism, Osterweil's book is more than a way to deepen your understanding of the material she dissects – it's a way to deepen your understanding of the world that produced it, and to deepen your understanding of yourself.

Zack Polanski calls for Trump to be 'kicked out' of his Scottish golf courses https://www.bbc.com/news/articles/c8954xe8yjpo
Uncovering Global Telecom Exploitation by Covert Surveillance Actors https://citizenlab.ca/research/uncovering-global-telecom-exploitation-by-covert-surveillance-actors/
What's Missing in the 'Agentic' Story https://www.mnot.net/blog/2026/04/24/agents_as_collective_bargains
Licensed to Loot https://static1.squarespace.com/static/65c9daef199ea70aa66592fe/t/69e7b2f2949631007bb3d969/1776792306864/Licenced+to+loot+AI+Data+Centre+Report.pdf
#20yrsago Frank Zappa’s anti-censorship letter https://www.flickr.com/photos/mudshark/117551768/in/set-72057594090059726/
#15yrsago Chemistry kit with no chemicals https://web.archive.org/web/20110427212354/http://blog.makezine.com/archive/2011/04/chemistry-set-boasts-no-chemicals.html
#15yrsago Russian corruption: crooked officials steal multi-billion-dollar company, $230M tax refund, then murder campaigning lawyer https://web.archive.org/web/20110426045152/http://www.foreignpolicy.com/articles/2011/04/20/russia_s_crime_of_the_century?
#15yrsago Golden-age short-change cons https://web.archive.org/web/20110429014539/https://blog.modernmechanix.com/2011/04/26/tricks-of-short-change-artists/
#10yrsago Campaigners search Londoners’ phones to help them understand the Snoopers Charter https://www.youtube.com/watch?v=szN7DlmMLYg
#10yrsago Mitsubishi’s dieselgate: cheating since 1991 https://web.archive.org/web/20160427145038/https://www.cnet.com/roadshow/news/mitsubishi-cheated-fuel-economy-tests-since-1991/#ftag=CAD590a51e
#10yrsago Bellwether: Connie Willis’s classic, hilarious novel about the science of trendiness https://memex.craphound.com/2016/04/26/bellwether-connie-williss-classic-hilarious-novel-about-the-science-of-trendiness/
#5yrsago The Big U https://pluralistic.net/2021/04/26/moolah-boolah/#poison-ivies

NYC: Techidemic with Sarah Jeong, Tochi Onyebuchi and Alia
Dastagir (PEN World Voices), Apr 30
https://worldvoices.pen.org/event/techidemic/
Barcelona: Internet no tiene que ser un vertedero (Global
Digital Rights Forum), May 13
https://encuentroderechosdigitales.com/en/
Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow
Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html
Hay-on-Wye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2
SXSW London, Jun 2
https://www.sxswlondon.com/session/how-big-tech-broke-the-internet-b3c4a901
NYC: The Reverse Centaur's Guide to Life After AI with Jonathan
Coulton (The Strand), Jun 24
https://www.strandbooks.com/cory-doctorow-the-reverse-centaur-s-guide-to-life-after-ai.html
When Do Platforms Stop Innovating and Start Extracting?
(InnovEU)
https://www.youtube.com/watch?v=cccDR0YaMt8
Pete "Mayor" Buttigieg (No Gods No Mayors)
https://www.patreon.com/posts/pete-mayor-with-155614612
The internet is getting worse (CBC The National)
https://youtu.be/dCVUCdg3Uqc?si=FMcA0EI_Mi13Lw-P
Do you feel screwed over by big tech? (Ontario Today)
https://www.cbc.ca/listen/live-radio/1-45-ontario-today/clip/16203024-do-feel-screwed-big-tech
"Enshittification: Why Everything Suddenly Got Worse and What to
Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
"The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027
"The Memex Method," Farrar, Straus, Giroux, 2027
Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Bluesky (no ads, possible tracking and data-collection):
https://bsky.app/profile/doctorow.pluralistic.net
Medium (no ads, paywalled):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
Ravi Dwivedi: A day in Vienna [Planet Debian]
On the 7th of September 2025, my friend Dione and I took a day trip to Vienna—the capital of Austria. We were attending a conference in Budapest, Hungary, which is 250 km from Vienna, so it was a good opportunity to visit.
We took a morning train from Budapest to Vienna and got back to Budapest by night. However, booking these tickets turned out to be a bit complicated. There were many websites to book the train ticket—Hungarian Railways, Austrian Railways, and third-party sites such as Omio—and all of them had different prices for the same ticket.
I booked the tickets from the Hungarian Railways website as it was the cheapest. The train from Budapest to Vienna was €13, operated by Eurocity. Also, I had to pay €2 for the seat reservation on top. The train from Vienna to Budapest—operated by Railjet—was €21, along with €2 extra for reservation again—making it €23. The tickets for the two-way journey added up to €38.
The cost of these tickets varied depending on when one purchases them: the sooner you purchase, the lower the price. I bought my tickets 15 days ahead of the date of journey and paid just €38. In contrast, Dione booked just one day before her trip and paid around €100 for her tickets.
As for the seat reservation: long-distance trains in Europe usually charge extra for reserving a seat. A reservation guarantees your preferred spot, such as a window or an aisle seat, but you will get a seat either way, because these trains do not sell more tickets than there are seats. We reserved ours so that we could sit together, which helped most on the return leg from Vienna to Budapest, as it was more crowded than the morning train.
On another note, reservations are mandatory on some trains in Europe, but ours wasn't one of them. Rail-pass holders likewise pay an extra charge to reserve seats. Local trains, on the other hand, generally do not require seat reservations.
Our train’s scheduled departure was at 08:55 from the Budapest Kelenfold station. We reached the train station 40 minutes before the train’s scheduled departure. The Kelenfold station had free Wi-Fi, which was handy because I didn’t have a local SIM.
A departures board at Budapest Kelenfold station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
This is platform number 15 of Budapest Kelenfold station where we boarded our train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Our train arrived on time. I tried to find our coach number but could not find the numbers written anywhere on the side of the coach. Luckily, we were helped by a fellow passenger who directed me to look at the doors, where the numbers were mentioned clearly!
Then we got into our compartment and took our respective seats. Our tickets were checked twice: once while the train was in Hungary and once in Austria. Showing the PDF of the train ticket on our mobiles to the ticket inspector was good enough. Austria and Hungary are both part of the Schengen area, so this was the extent of the border checks we had to go through.
Interior of our Budapest to Vienna train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
The train also had free Wi-Fi, albeit with a poor connection at times. There were no food options on the train.
We deboarded at the Wien Hauptbahnhof station in Vienna. The journey was 250 km and took 2.5 hours, reaching Vienna at 11:25, which was the scheduled time.
This blue colored train was the one we took for our Budapest to Vienna journey. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
An ÖBB train standing at a platform of Vienna train station. ÖBB is the national carrier of Austria. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Wien Hauptbahnhof train station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
At the station, we bought a 24-hour public transport pass from a ticket machine for €8. The pass gives unlimited access to all public transport in Vienna for 24 hours; mine was valid from the 7th of September at 11:34 to the 8th of September at 11:33. For comparison, a single ticket (from anywhere to anywhere) costs €2.40 and can be used once on any public transport in Vienna—trams, metros, and buses.
Therefore, the pass is a good deal if you are going to take at least four public transport trips in a day. Unlike the public transport pass I got in Budapest, the pass in Vienna was anonymous and not tied to the rider’s name.
My public transport pass in Vienna.
We wanted to visit the Schönbrunn Palace, which was reachable by subway. To get to the subway station, we first went outside the station building, but the subway was not there. So we came back inside and realized that it was underground.
We took the subway and deboarded at the Schönbrunn subway station—the closest one to the palace. The ride was smooth; the train was pretty silent.
By the way, as in Budapest, there were no AFC gates for boarding the subway in Vienna. The stations had ticket validators instead, where you are supposed to validate your ticket before getting on the subway.
Instead of AFC gates, Vienna has ticket validators as in the picture. You need to tap your ticket in the validator before boarding the subway. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
These validators are in place to ensure that you use your ticket only once. Unlike AFC gates, which are present in metros of most of the countries I have been to, the ticket validators don’t act as a physical barrier to enter the boarding area.
If you board the metro without validating your ticket, you face a hefty fine upon getting caught; I have heard it is around €100. On the other hand, if you have a public transport pass like we did, you don't need to validate anything before boarding.
In addition, there were no annoying security checks either, unlike in Indian cities. In the Delhi metro, for example, you would need to scan your bags and pass through a security check before getting to the AFC gates.
Vienna subway. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Now back to the story, after alighting at the Schönbrunn subway station, we walked to the Schönbrunn Palace. One can roam around outside the palace and click pictures for free. To go inside, however, requires buying tickets. The tickets for the palace can be booked in advance on the internet. We didn’t take the tickets in advance, as we decided to visit the palace at the last moment.
So we went to the ticket counter and found out that we needed to wait for 1 hour 40 minutes before going inside if we took the tickets at that moment. In addition, one ticket costs €44 (around 4000 Indian rupees). Since we had to return to Budapest in the evening and only had a few hours in the city, we decided not to go inside the palace. Instead, we clicked a few pictures outside the palace.
Schönbrunn Palace. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
The Schönbrunn Palace is a UNESCO World Heritage Site and a historically significant place: it served as one of the residences of the powerful Habsburg dynasty. The palace looked so good that my friend Dione said, “It seemed like the palace was built yesterday.” The remark applied to other parts of Vienna too; the subway stations, for example, also looked like they were built yesterday.
A street near Schönbrunn Palace. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Now, we wanted to go someplace to grab a bite. I asked my friend Urbec for suggestions on where to go. They suggested we visit the steps named Strudlhofstiege, which had the added benefit of being in a neighborhood with good bakeries and buildings.
So, we took the subway and deboarded at the Roßauer Lände station, followed by walking around a kilometer to reach the stairs.
Roßauer Lände subway station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Platform of the Roßauer Lände subway station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
The Strudlhofstiege steps. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
On the way, we were also looking for a place to eat. Unfortunately, it was Sunday, and Vienna closes on Sundays: most shops, including bakeries and cafés, are shut, and only places like railway stations have shops open.
By the way, walking around in the streets of Vienna was a treat. The streets were not crowded (as it was not exactly a touristy neighborhood) and had good pedestrian infrastructure, with clean streets and separate cycling tracks. The buildings were also beautiful.
A random street in Vienna.
Another street in Vienna.
After some walking, we found a restaurant open. I grabbed the menu to check the prices. A lady at the shop asked me what I was doing, and I told her that I was browsing the menu. She said that the menu was in German. I don’t know how she knew that we didn’t know German, but it seemed like a racist thing to be told.
We roamed around further and found a café by the name of Blue Orange, where we ordered coffee and croissants. When we got our order, the waiter told us that they were having some issues, so they wouldn’t charge us for the croissant if it wasn’t good.
A picture of Blue Orange café. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
My friend and I took a bite, and neither of us liked the croissant. After some time, the waiter came over and asked whether it was okay, to which we said no, so they didn't charge us for it. This was the first time something like this had happened to me; it felt like I was in a different world. I added a small tip at the end for this gesture, which I put in a jar at the counter.
The cappuccino I ordered was €4.50, while the espresso that Dione ordered was €3.60. The croissant would have been €3.60. I remember Paris having cheaper croissants!
Then when the waiter brought our drinks out, they automatically gave me the espresso and Dione the cappuccino. Dione found this funny because there is a stereotype in her country (Australia) that men drink strong black coffee, and women drink milky drinks like cappuccinos. She found it interesting that this stereotype seems to exist in Austrian culture too.
We hopped on a tram to reach the nearest subway station and went to the Wien Hauptbahnhof station to have something before we caught our return train to Budapest.
Trams in Vienna. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
At the station, I had Esterhazyschnitten and Punschkrapfen (thanks, Urbec, for the suggestion). The lady at the shop warned me that punschkrapfen had alcohol in it, to which I said okay.
Esterhazyschnitten was a cake made of almonds, while punschkrapfen was a jam-filled sponge cake, soaked in rum. Esterhazyschnitten was my favorite out of the two. The punschkrapfen was too sweet for my taste.
Punschkrapfen. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Esterhazyschnitten. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
While the station was well-built, there were a couple of things about Wien Hauptbahnhof that we didn't like. There were no seats inside the station, so we had to eat outside the building. Also, the toilets were not free: it cost 50 cents to use them.
The Vienna train station had departure boards all over the place. So, we went to the platform our train was to arrive on.
Departure boards in Vienna displaying information about the trains. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
Platform and tracks at Wien Hauptbahnhof station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
When our train arrived, we had some difficulty locating our coach. This train was operated by a different company (Railjet) than the one we took in the morning (Eurocity), but reading the numbers wasn't the problem this time: each coach position had a digital board on the platform displaying the coach number. The problem was that, even after reading the numbers, our coach didn't appear where we expected it in the sequence.
After searching for a while, we asked the train's ticket inspector, who was standing on the platform. He directed us towards the front of the train, so we started running, as we didn't know how long the train would stop.
As we ran toward our coach, we found an engine coupled to the last coach of the front portion: the train was actually two trains joined together. At a later station, the rear train parted ways and went towards Vienna Airport.
Interior of the train we took from Vienna to Budapest. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
This is the train we took for our return journey from Vienna to Budapest. It is standing on a platform in Budapest Kelenfold station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.
We had a smooth journey and reached Budapest a couple of hours later.
Vienna is a beautiful city; we enjoyed being there, and we would like to visit the city again!
That’s it for now. Signing off. See you in the next one!
Credits: Thanks to Dione and Badri for proofreading.
What makes the web? [Scripting News]
I’ve been trying to come up with a simple test that lets you know whether some software is on the web or if it just can be made to appear in a web browser. So here we go.
If you can hook up a piece of the app to a piece of another app, then it's on the web.
This comes from the basic feature of linking, which is the unique feature of the web.
Every other feature that makes the web the web, in my experience, allows two things to be part of each other.
Comment here.
Music For Your Monday: Tame Impala’s “Dracula” [Whatever]
I heard an absolute banger of an earworm this past week, and have been listening to it nonstop ever since. I want to bestow upon y’all Tame Impala’s new song, “Dracula.”
If you had asked me a week ago if I liked Tame Impala, I would’ve said I was completely indifferent about him and couldn’t even name a song from him. That is still true except for “Dracula.” This song is an absolute home-run of a bop, and there’s even a remix version with JENNIE which is also very good. Here’s both versions for your listening pleasure!
And the JENNIE version:
I have been debating which version I like better, and honestly it’s so hard to decide. I listen to both an equal amount, and both are great. Can’t go wrong with the original, but I love JENNIE’s ethereal voice and the harmonizing with Tame Impala.
My favorite part of the song is how they make “Dracula” rhyme with “spectacular.” Stellar stuff, really.
I hope you enjoy this bop, and that it helps you get movin’ and groovin’ through your next week!
-AMS
Tell Congress: Oppose the GUARD Act [EFF Action Center]
The GUARD Act may look like a child-safety bill, but in practice it’s a sweeping age-gating mandate that could apply to nearly every public-facing chatbot, from customer service tools to search assistants. It would require companies to collect sensitive identity data and chill online speech. The bill would also block teens from tools they rely on every day—as well as adults who cannot prove they are over 18.
EFF has long warned that age-verification laws undermine free expression, privacy, and competition. The GUARD Act is no different. It would make the internet less free, less private, and less accessible—while consolidating power in the largest tech companies and pushing smaller developers out.
There are real concerns about harms caused by AI systems, especially for young people. But the GUARD Act responds with a blunt, overbroad solution. Instead of addressing specific risks, it imposes sweeping restrictions that affect us all.
Congress should reject the GUARD Act and focus on policies that protect users without sacrificing privacy and access.
Tell your representatives to oppose the GUARD Act now.
Urgent: Public education V vouchers [Richard Stallman's Political Notes]
US citizens: call on your federal legislators in Congress to repeal federal school vouchers and protect public education.
See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.
How Israel struck hospitals in Lebanon [Richard Stallman's Political Notes]
*Israel escalates attacks on medics in Lebanon with deadly "quadruple tap".*
Friendly fire info as terror, Kuwait [Richard Stallman's Political Notes]
A Kuwaiti-American journalist visiting Kuwait filmed the mistaken shooting of an American F-15 and reported on it. Kuwait has since arrested him, possibly for publishing that footage, or possibly for other journalism, under repressive new "terrorism" laws that can define journalism as "terrorism" under rather vague conditions.
Congress Must Reject New Insufficient 702 Reauthorization Bill [Deeplinks]
Speaker Johnson has introduced a new fig leaf over the American surveillance state: the Foreign Intelligence Accountability Act. Introduced with only days to go before Section 702 of the Foreign Intelligence Surveillance Act (FISA) expires and the U.S. government loses one of its most invasive surveillance programs, the bill makes none of the substantial changes privacy advocates have been asking for. Most notably, it fails to give us a real warrant requirement for the FBI to snoop through the private conversations of people on U.S. soil.
Section 702 needs to be reauthorized by Congress every few years. These reauthorizations give us a chance to tinker with the language of the law and introduce some much-needed reforms. This attempt at reauthorization has been particularly fraught, but there is still time for Congress to include real protection for Americans’ civil liberties and rights. We need to make sure that when an FBI agent wants to look through Americans’ conversations scooped up as part of a national security intelligence program, they need a warrant signed by a judge just as if they were trying to search your email account or your house.
This new bill mandates that a civil liberties protection officer at the Office of the Director of National Intelligence review all queries of U.S. persons made by the FBI under this program to make sure no laws have been broken. It’s bad enough to let the intelligence community police itself; what’s more, the assessment of illegality would be made only after a U.S. person has already been spied on. This is hardly the reform we need and will likely just lead to continued abuse with no real accountability or consequences.
The bill “prohibits targeting United States persons,” but so does current law. This “change” does absolutely nothing to address what’s really happening—which is that surveillance of people in the United States is usually justified as “incidental” because Americans aren’t the “target” of the surveillance. The bill does not create a warrant requirement, it does not create any new transparency requirements, and it does not protect Americans’ privacy.
We urge Congress, and we urge you to write to your Congresspeople, to tell them this: Reject the surveillance state’s latest smokescreen known as the Foreign Intelligence Accountability Act and keep pushing for real reforms.
Dillo is an amazing web browser for those of us who want our web browsing experience to be calmer and less flashy. Dillo also happens to be a very UNIX-y browser, and its latest release, 3.3.0, underlines that.
A new dilloc program is now available to control Dillo from the command line or from a script. It searches for Dillo by the PID in the DILLO_PID environment variable, or for a unique Dillo process if it is not set.
↫ Dillo 3.3.0 release notes
You can use this program to control your Dillo instance, with basic commands like reloading the current URL, opening a new URL, and so on, but also things like dumping the current page’s contents. I have a feeling more commands and features will be added in future releases, but for now, even the current set of commands can be helpful for scripting purposes. I’m sure some of you who live and die in the terminal are already thinking of all the possibilities here.
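As a sketch of what that scripting might look like (the subcommand names below are my guesses based on the capabilities described above, not confirmed against the release notes, so check dilloc's actual help output):

export DILLO_PID=$(pgrep -u "$USER" dillo | head -n 1)   # pick a running Dillo instance
dilloc reload                      # hypothetical: reload the current page
dilloc open https://example.org/   # hypothetical: open a new URL
dilloc dump > page.txt             # hypothetical: dump the current page's contents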
You can now also add page actions to the right-click context menu, so you can do things like reload a page with a Chrome curl impersonator to avoid certain JavaScript walls. This, too, is of course extensible. Dillo 3.3.0 also brings experimental support for building the browser with FLTK 1.4, and implemented a fix specifically to make OAuth work properly.
Ubuntu is going to integrate “AI”, but Canonical remains vague about the how and why [OSnews]
Ubuntu, being one of the more commercial Linux distributions, was always going to jump on the “AI” bandwagon, and Jon Seager, Canonical’s VP Engineering, published a blog post with more details.
Throughout 2026 we’ll be working on enabling access to frontier AI for Ubuntu users in a way that is deliberate, secure, and aligned with our open source values. By focusing on the combination of education for our engineers, our existing knowledge of building resilient systems and our strengthening silicon partnerships, we will deliver efficient local inference, powerful accessibility features, and a context-aware OS that makes Ubuntu meaningfully more capable for the people who rely on it.
Ubuntu is not becoming an AI product, but it can become stronger with thoughtful AI integration.
↫ Jon Seager at Ubuntu Discourse
The problem with this entire post is that, much like all other corporate communications about “AI”, it’s all deceptively vague, open-ended, and weasely. Adjectives like “focused”, “principled”, “thoughtful”, and “tasteful” don’t really mean anything, and leave everything open for basically every type of slop “AI” feature under the sun. Their claims about open weights and open source models are also weakened by words like “favour” and “where possible”, again leaving the door wide open for basically any shady “AI” company’s models and features to find their way into your default Ubuntu installation.
There’s also very little in terms of concrete plans and proposed features, leaving Ubuntu users in the dark about what, exactly, is going to be added to their operating system of choice during the remainder of the year. There’s mentions of improved text-to-speech/speech-to-text and text regurgitators, but that’s about it. None of it feels particularly inspired or ground-breaking, and the veneer of open source, ethical model creation, and so on, is particularly thin this time around, even for Canonical.
I don’t really feel like I know a lot more about Canonical’s “AI” intentions for Ubuntu after reading this post than I did before, other than Ubuntu users might be able to generate text in their email client or whatever later this year. Is that really something anybody wants?
This Week’s Weird Sideswipe by Current Events [Whatever]

Apparently it’s true: The fellow who came to the Correspondent’s Dinner the other night with a bunch of weapons (and who, it should be noted, came nowhere near the president or anyone else in the ballroom), liked four Bluesky posts of mine in the last month. Which ones? I have no idea, although a cursory view of my last month of Bluesky posts shows nothing particularly spicy in a political sense. This does not surprise me, as I usually send all my really spicy political takes to Threads. Most of the last month of Bluesky posts for me were about JoCo Cruise, whacking on “AI,” photos of cats and Krissy, and talking about writing. Maybe this dude liked cat pictures? He’s arrested now and his Bluesky account is down in any event. We may never know.
My feeling about this is pretty much the same feeling I have about being in the Epstein Files: What the fuck, it’s not great, and also, it doesn’t actually have much to do with me, I’m mostly being sideswiped by this weird damn moment we’re in. I certainly don’t condone attempting to kill the president. Any president, and also, this one in particular. Among other things that would take away the fun of watching him one day rotting in prison along with the rest of his corrupt and horrible family and administration. Keep him alive! For justice!
I’m joking here about being on a federal watch list now, but I should be clear I’m pretty sure I already have an FBI file, and also that this FBI file is really super boring, so anything relating to this will almost certainly be funneled into that. I recently did an FOIA request for my file, so I suppose I will find out soon enough. In the meantime I’ll just have to imagine.
I’ve been informed that some of the folks associated with the Sad Puppies are trying to make hay of my tangential association to this fellow, which, I guess, they would, loud bad logic has always been their MO. My first thought is that when you’re related to an actual successful presidential assassin, a failed one liking your social media posts is weak sauce. My second thought was, huh, the right-wing chudguzzlers are whining about me again, whenever they do that something nice happens with my career, wonder what it will be this time. And indeed, today I got a foreign language offer on one of my books, which I happily accepted. It’s correlation, not causation, to be sure. But it sure does correlate a lot. So keep it up, right-wing chudguzzlers! We’re having our back deck rebuilt, I could use a few more foreign sales. Thanks in advance for your help.
— JS
Busy day working on new RSS-based project. Still diggin!
Version 26.1 of the pip package installer for Python has been released. Richard Si has published a blog post that looks at some of the highlights of 26.1, including dependency cooldowns, experimental support for pylock (pylock.toml) files, and resolver improvements that will move pip closer to the goal of removing its legacy resolver. The release also includes several security fixes and drops support for Python 3.9.
Discord used to be a tool that I leveraged to communicate with friends and erstwhile allies, but over the years it's increasingly become something like a car up on blocks in my front yard - something to tinker with, absent any prospect or expectation of continuous functionality. I have to constantly remind it that I don't want to use the speaker in my monitor. And mics? "Forget about it." I would say that this is an unforgivable sin but I know at least one other person who might actually prefer this state of affairs. Also, this really happened. So.
The Internet Still Works: SmugMug Powers Online Photography [Deeplinks]
SmugMug is a family-owned photo hosting and e-commerce platform that helps professional photographers run their businesses online. Founded in 2002, the company provides tools for photographers to show their work, deliver client galleries, sell prints, and manage payments.
In 2018, SmugMug purchased Flickr, the long-running photo-sharing community, which added tens of millions of active hobbyist photographers to the company’s user base.
Ben MacAskill is President and COO of SmugMug’s parent company, Awesome, which he co-founded with his family. Awesome also includes the media network This Week in Photo and the nonprofit Flickr Foundation, which focuses on preserving publicly available photography. MacAskill has been an active voice in policy discussions around Section 230 and online platform regulation. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team.
Joe Mullin: How would you explain Section 230 to a SmugMug photographer who hasn't heard of it, but relies on you to share their work and run their business?
Ben MacAskill: Section 230 allows us to run our business. We are a small, family run business. We don’t have the resources to police every single upload, every single comment, or every single engagement that happens on the site.
That includes photographers who have comments on their sites. Anywhere there’s interaction online, Section 230 protects us.
It doesn't absolve us of liability. We can't run rampant and do anything we want. It just helps protect us and make it scalable so that we can run our business.
What would you have to change if Section 230 were eliminated or significantly narrowed?
Honestly, there's a high chance that it would bankrupt platforms like ours. They're not wildly profitable. If Section 230 is done away with, we have to [check] content that goes online to make sure we’re not liable. That means policing tens of millions of uploads per day.
That would kill the business of a lot of photographers. Can you imagine—you just got married, and you’re waiting for your wedding photos for a week or two because they’re in some moderation queue?
If we don’t have legal protections, and we get one nefarious customer—if something goes sideways—then I’m liable for that.
I don't, and can't possibly know, whether every single photo is appropriate or legal, as it's uploaded. We would literally have to moderate everything before it goes online. I don’t think any business can afford that, period. I guess you could have an offshore call-center type thing. Still, it would change the entire nature of the real-time internet. Imagine posting something to Instagram and having the platform say, “Cool, we’ll get back to you in 8 to 12 days.”
What kind of content moderation do you do on SmugMug?
If a user uploads something illegal, we will report them as soon as we find it. We're not protecting them. We don’t condone or allow illegal behavior. We work very closely with organizations, nonprofits and governmental agencies to detect CSAM—child exploitative material—and we report that to the National Center for Missing and Exploited Children. We will report users, we eliminate illegal content on our platforms—which is one reason we have such a low prevalence of that problem.
But that does take effort and time to find, and there is currently no perfect solution. The tech solutions that exist can’t detect it at 100% accuracy, or anywhere close. And with tens of millions of uploads a day, going through them one by one is impossible.
How do you think more generally about protecting user speech and creative expression?
On SmugMug, we’re really focusing on professionals running their business. So we don’t have to [weigh in] on content too much.
On Flickr, we are big proponents of expression and artistic creativity. Photographers have opinions! But we do draw the line at things like hate speech and harassment. We aggressively maintain a friendly platform. Our community guidelines are very specific, that you cannot harass other customers, you cannot upload stuff classified as hate speech, or threats, or anything along those lines.
Those rules are generally policed by the community. We do have some text analysis tools, but when community members feel harassed or threatened, reports will come in. We’ll address them on a one-by-one basis and remove harassing material from our platform.
Our ability to moderate is one of the things that makes Flickr what it is. If we lose the ability to enforce our own moderation rules—or have that legislated for us—then it changes the entire nature of the community. And not in a good way. Losing the ability to moderate would permanently and forever change what we've built.
What kind of complaints or takedown requests do you receive, and how do you handle it, both in the U.S. and abroad?
Flickr is often referred to as the friendliest community online. You know, we're not dealing with a lot of hate. We're not dealing with a lot of threats. Under other frameworks, like the DMCA, we do takedowns on copyrighted material.
We’re able to handle it with a fully internal team, and we have a great track record. But the user base and the content base is so large that, if we had to assume that those tens of millions of uploads a day are problematic, the burden would be extreme.
We have a robust Trust and Safety Team, and we operate in every non-embargoed country on Earth. So we are subject to a lot of different laws and regulations: “likeness” rules and privacy rules in certain countries that don't exist here in the United States. Even state to state, there’s some varying laws. It’s a complicated framework, but we pay attention to it.
Globally, things work in much the same way that Section 230 does. That is, we operate on reports and discovery, not on pre-screening everything.
What do you think policymakers most often misunderstand about how platforms like yours operate?
One misconception is that we are not beholden to any laws. That Section 230 absolves us of any responsibility and any liability, and we can just do whatever we want. They talk about it as “reining in tech companies,” or “holding tech companies accountable.” But I am accountable for the content on my platform. We’re not given this “get out of jail free” card.
And I think they assume all platforms don’t really care about this, that anything that is done is done begrudgingly. But we’re very proactive about keeping a clean, polite, and friendly community. We are already very aggressively policing our platform.
And even legal content gets moderated, because it might just not be appropriate for a particular community.
We enforce our rules, and much the way that other private in-person businesses will enforce their rules. If you start screaming hateful things at patrons in a coffee shop, they’re going to throw you out. They want a quiet, chill vibe where people can sip their lattes. We’re doing the same sort of things.
As an independent, family-owned company, you're in an ecosystem dominated by much larger platforms. How are these issues different for you as a smaller service?
I think it's a much more existential threat for middle and small tech companies. It also shuts off the next generation of these platforms. The computer science student in a dorm room right now won't have the legal protections to launch, to even try to build something new. At least not here in the United States.
[$] The rest of the 7.1 merge window [LWN.net]
By the time Linus Torvalds released 7.1-rc1 and closed the 7.1 merge window, 12,996 non-merge changesets had been pulled into the mainline repository; just over 9,000 of those arrived after the first-half summary was written. These changes were more driver-oriented than those seen earlier, but still also included many new features across the kernel as a whole.
Looking at consequences of passing too few register parameters to a C function on various architectures [The Old New Thing]
In our exploration of calling conventions for various processors on Windows, we learned that in many cases, some of the parameters are passed in registers.
Suppose that there is a function that takes two parameters, but you know that the function ignores the second parameter if the first parameter is zero or negative. What happens if you call the function with just one parameter (say, passing zero)? The function should ignore the second parameter, so why does it matter that you didn’t pass one?
Even though the function doesn’t use the parameter, it still may decide to use the storage for that parameter as a conveniently provided scratch space. For example:
int blah(int a, int b)
{
    if (a <= 0) {
        int c = f1();   /* b is never used on this path */
        f2(a);
        return c;
    } else {
        return f3(a, b);
    }
}
Is it okay to call blah with zero as its only parameter? You aren’t passing b, but the function doesn’t use b, so why does it matter?
Formally, the C and C++ languages say that if you call a function with the wrong number of parameters, the behavior is undefined, so officially, you’ve broken the rules and anything can happen.
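To make that concrete, here is a minimal sketch (mine, not from the original post) of how such a mismatched call typically comes about in C: one translation unit declares the function with a lying prototype. The function names reuse the hypothetical ones from the example above.

/* In blah.c, the real definition: int blah(int a, int b) { ... } */

/* In caller.c, a lying declaration that omits the second parameter: */
int blah(int a);

int call_it(void)
{
    /* This compiles cleanly in this translation unit, but the behavior
       is undefined: blah still believes a second parameter (a register
       or a stack slot) belongs to it, and may read or scribble on it. */
    return blah(0);
}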
But let’s look at what types of things could go wrong.
If you pass too few parameters on the stack, and it is a callee-clean calling convention, then the callee will clean too many bytes off the stack, resulting in stack imbalance and likely memory corruption.
Even if it’s not a callee-clean calling convention, the called function will think that the memory for the parameter is present, and it may use it as scratch space, resulting in memory corruption in the stack frame of the calling function.
In our example above, the compiler might realize, “Hey, I don’t need to allocate new memory for the variable c. I can just reuse the memory that holds the now-dead variable b.” In other words, it rewrites the function as
int blah(int a, int b)
{
    if (a <= 0) {
        b = f1();       /* reuses b's storage for what was c */
        f2(a);
        return b;
    } else {
        return f3(a, b);
    }
}
Even if you don’t reserve memory for the variable b, the compiler will assume that you did and overwrite whatever happens to be at the location where the reserved memory should have been.
But what if the parameters are passed in registers, and you didn’t pass enough of them?
On most processors, what happens is that the called function will try to use that register and read whatever uninitialized value happens to be lying in that register.
Except on Itanium.
One special Itanium quirk is the presence of the “Not a Thing” (NaT) bit, which is a bit attached to each general purpose register that indicates whether the register holds a valid value. The most common ways for a register to enter the NaT state are if it was the result of a failed speculative load, or if it was the result of a mathematical calculation where at least one of the inputs was itself NaT. Therefore, if your uninitialized output register happens to be a NaT left over from an earlier failed speculation, the called function might decide to spill the value onto the stack for safekeeping before using that register for something else. For example:
extern bool is_valid(int);

int blah2(int a, int b)
{
    if (is_valid(a)) {
        return f3(a, &b);   /* takes b's address, forcing a spill */
    } else {
        return 0;
    }
}
The compiler realizes that it needs to take the address of b if a is valid, so it has to spill the value to memory (so that it can have an address). But writing a NaT to memory raises a “NaT consumption” exception, so this function crashes even in the case where it never actually uses the b variable.
But wait, there’s more.
On Itanium, the function call mechanism is architectural rather than merely conventional. The calling function declares the number of output registers (registers that will be passed to the called function), and those registers are renumbered on entry to the called function so that they are visible starting at register r32. If a calling function says “I am passing 2 registers,” then the called function sees them as registers r32 and r33. I covered the details some time ago, but leaf functions are particularly interesting.
Leaf functions are functions that do not create a custom stack frame and simply make do with the architectural stack frame that the processor creates for them by default. And that default stack frame consists only of the inbound parameter registers. In the case of passing too few parameters to a function, that means that the default stack frame contains fewer registers than the function expects.
Architecturally, the rule is that if you read from a stacked register that lies outside the current frame, the results are “undefined”. I couldn’t find a formal definition of “undefined” in the Itanium documentation (though it’s eminently likely that I simply missed it), but I assume it means “can produce any result, including an exception, that is not dependent upon information outside the current processor execution mode.”¹ In particular, it can raise a processor exception, say, because the value of that stacked register happens to contain a leftover NaT.
The Itanium architecture takes an even stronger stance against writing a stack register that lies outside the current frame: It is required to raise an Illegal Operation fault.
I can imagine it being weird seeing an exception come out of a register-to-register move instruction.
So there you go, another case where the Itanium architecture more strictly enforces a programming rule, in this case, making sure that you pass the correct number of parameters to a function.
¹ This means that, for example, an “undefined” result in user-mode code cannot be dependent upon information available only to kernel mode.
The post Looking at consequences of passing too few register parameters to a C function on various architectures appeared first on The Old New Thing.
LibreLocal meetup in London, England, United Kingdom [Planet GNU]
May 16, 2026 at 12:00 BST (11:00 UTC).
LibreLocal meetup in Neuchâtel, Switzerland [Planet GNU]
May 21, 2026 at 16:00 CEST (14:00 UTC).
LibreLocal meetup in València, Spain [Planet GNU]
May 16, 2026 at 10:30 CEST (08:30 UTC).
LibreLocal meetup in Brasília, Distrito Federal, Brasil [Planet GNU]
May 22, 2026 at 18:00 BRT (21:00 UTC).
LibreLocal meetup in Tarragona, Catalunya, Spain [Planet GNU]
May 8, 2026 at 15:00 CEST (13:00 UTC).
LibreLocal meetup in Toronto, Ontario, Canada [Events]
May 18, 2026 at 18:00 EDT (22:00 UTC).
LibreLocal meetup in Brantford, Ontario, Canada [Events]
May 17, 2026 at 13:45 EDT (17:45 UTC).
LibreLocal meetup in Salamanca, Salamanca, Spain [Events]
May 7, 2026 at 17:00 CEST (15:00 UTC).
CodeSOD: The JSON Template [The Daily WTF]
We rip on PHP a lot, but I am willing to admit that the language and ecosystem have evolved over the years. What started as an ugly templating language is now just an ugly regular language.
But what happens when you still really want to do things with templates? Allison has inherited a Python-based WSGI application that rejects any sort of formal routing or basic web development best practices. Its way of routing requests is simply long chains of "if condition then invokeA elif otherCondition then invokeB". Sometimes, those conditions will directly set the MIME type on the HTTP response.
They do use a templating library called Mako for generating their responses. They use it for their HTML responses, obviously. They also use it for their JSON responses, generating code like this:
{
"success": true,
"items": {
%for item in items_available.keys():
"${item}": ${items_available[item]}${',' if not loop.last else ''}
%endfor
}
}
The %for and matching %endfor mark the Python code off, which generates JSON via string-munging, complete with the check to make sure we're not on the last iteration of the loop.
Like so much bad code, this offers a degree of fractal wrongness. Instead of iterating over the keys and fetching the items inside the loop, you could iterate for key, value in items_available.items() - and according to the Mako docs, that for is just a regular Python for loop. That we're just outputting the contents of the dictionary is itself potentially a problem - sure, if we know the types of the dictionary, we'll know that whatever it is can be output in the body of a JSON document, but do we really think this code is using type annotations? I don't. And for a RESTful web service, I'm always going to feel weird about using a success field when ideally the HTTP status code could convey most of that information (and yes, I know there are reasons to still put status in the body, I just hate it).
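For what it's worth, the items()-based version described above would look something like this - a sketch that is still string-munging, and still not the right fix:

{
    "success": true,
    "items": {
        %for key, value in items_available.items():
        "${key}": ${value}${',' if not loop.last else ''}
        %endfor
    }
}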
Of course, the real issue is just: Python's built in JSON serialization is actually pretty advanced. And performant! You don't need any of this, you could just do something like:
return json.dumps({"success": True, "items": items_available})
No templates. No formatting. No worries about how the data gets represented. Well, still some worries, because the JSON serializer will throw an exception if it doesn't know what to do with a type. But then at least you get that exception on the server side and aren't sending the client a malformed document.
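To make that failure mode concrete, here's a short sketch (the datetime example is mine, not from Allison's codebase) of where json.dumps succeeds, where it throws, and the standard escape hatch:

import json
from datetime import datetime

items_available = {"widget": 3, "gadget": 5}

# Types the serializer knows (dict, list, str, int, float, bool, None)
# are handled directly:
body = json.dumps({"success": True, "items": items_available})

# Anything else raises TypeError unless you supply a fallback:
body = json.dumps(
    {"success": True, "generated": datetime.now()},
    default=str,  # last-resort conversion for types json doesn't know
)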
In any case, this is a good demonstration that you can write bad PHP in any language.
Show Your Work: The Case for Radical AI Transparency [Radar]
A colleague told me something recently that I keep thinking about.
She said, unprompted, that she appreciated seeing both sides of my AI conversations. Not just the output. The full thread. My prompts, the AI’s responses, the back and forth, the dead ends, the iterations. She said it made her trust me more.
This piece is an example of that. The conversation that produced it exists. A raw transcript would be longer, messier, and significantly less useful than what you’re reading now. What you’re reading is the annotated version, the part where judgment entered the artifact. That’s not a disclaimer. That’s the argument.
I’ve been transparent about using AI in my work from the start. Partly because I wrote a book on data ethics and hiding it felt wrong. Partly because I’ve spent 25 years watching technology adoption go sideways when the human dimension gets treated as an afterthought. But her comment made me realize something more specific was happening when I showed the conversation rather than just the output.
It’s worth unpacking why.
In the 1990s, Harvard Business School professor Dorothy Leonard introduced the concept of “deep smarts” in her book Wellsprings of Knowledge: the experience-based expertise that accumulates over decades of practice, the kind of judgment that lives in people’s heads and doesn’t reduce to documentation. She also introduced a companion concept that has stayed with me: core competency as core rigidity. The very depth that makes expertise valuable also makes it hardest to transfer. Experts often can’t fully articulate what they know because they’ve stopped experiencing it as knowledge. They experience it as just seeing clearly.
Leonard’s work was about organizational knowledge transfer: how companies preserve institutional wisdom when experienced people retire or leave. That’s been a challenge since the first consultant ever billed an hour. What’s different right now is that the tools to actually solve it have arrived simultaneously with the largest demographic wave of executive retirement in American history.
What’s interesting about this particular moment is that the same dynamic is now showing up at the individual level in how practitioners interact with AI. The tacit knowledge at stake isn’t a retiring VP’s intuition. It’s your own judgment, your own expertise, your own hard-won understanding of what a project or organization actually needs. And the question isn’t how to transfer it before you walk out the door. It’s whether you can see it clearly enough to know when the AI is substituting for it.
The natural impulse is to clean up the AI interaction before sharing anything with a collaborator, a team, or a stakeholder. Show the polished output, not the messy process. You don’t want them thinking you just handed your work to a machine.
That instinct produces a disingenuous outcome.
When you hide the process, the people you’re working with have no way to evaluate how the work was made, what judgment calls went into it, or where your expertise ended and the AI’s pattern-matching began. You’ve made the process invisible. And invisible AI processes erode trust, slowly and quietly, over time.
The instinct to hide is also, if we’re honest, a little defensive. It assumes the people in the room can’t tell the difference between AI output and practitioner judgment. Most of them can. And the ones who can’t yet will figure it out. Hiding the seams doesn’t make the work more credible. It just defers the reckoning.
Here’s what took me longer to see.
Hiding the process doesn’t just affect how others perceive you. It erodes your own clarity about where your expertise is actually operating.
To understand why, it helps to be precise about what AI actually is. AI is a pattern matcher, a deeply sophisticated one, trained on more human-generated content than any single person could read in a thousand lifetimes. That’s its power (core competency) and its limitation (core rigidity) simultaneously, and the two are inseparable. The very scale that makes it extraordinary is also the boundary that defines what it cannot do. It is extraordinarily good at producing the most likely next thing given what came before. What it cannot do is know what you actually need, when the obvious answer is the wrong one, or when the stated goal isn’t the real goal. It has no judgment about context, relationship, or organizational reality. It has patterns. Incomprehensibly vast ones. But patterns.
That distinction matters because of what happens when you stop paying attention to it.
I’ve watched it happen in my own work. You share a draft with someone and they’re impressed. They quote a formulation back at you, something that sounds sharp and considered. And you realize, tracing it back, that the formulation came from the AI. Not because the AI invented it, but because you said something rougher and less precise earlier in the conversation, and the AI reflected it back in cleaner language. The idea was yours. The AI gave it a polish you then forgot to account for. The person quoting it back thought they were seeing your judgment. They were seeing your thinking laundered through a pattern matcher and returned to you at higher resolution.
That’s the subtler version of the problem. Not that AI invents things. It’s that it can reflect your own thinking back with more confidence and clarity than you put in, and that gap is easy to mistake for the AI contributing something it didn’t.
When you route everything through a polished output layer, you stop noticing the moments where you pushed back, redirected, rejected the first three versions, reframed the question entirely. Those moments are where your judgment lives. They’re the difference between using AI and being used by it. It’s Leonard’s core rigidity problem, applied inward: The very fluency that makes AI feel useful can make your own expertise invisible to you.
When the process stays hidden, the knowledge stays local and static. When it’s visible, it becomes something you and the people around you can actually work with and build on. The reason transparency benefits your audience is the same reason it benefits you: It keeps the scope of your judgment visible and therefore expandable. That’s not just an ethical argument. That’s the amplification mechanism.
Which is also what makes the upside real rather than consoling. When you stay in the process rather than just collecting outputs, work that would have taken days now takes hours. Your thinking gets sharper because you have to articulate it precisely enough for the AI to be useful. The people developing fastest right now aren’t the ones offloading the most. They’re the ones using AI as a thinking partner and staying in the conversation.
Here’s the paradox at the center of it: The more clearly you see the AI as a pattern matcher, the more human you have to be in working with it. The more human you are, the more useful the output. The tool doesn’t replace the practitioner. It reveals them.
Transparency isn’t just an ethical practice. It’s a cognitive one.
I’ve started calling this radical AI transparency. Not a policy, not a compliance framework, not a disclosure checkbox. A practice. Something you can actually do Monday morning.
Here’s how it shows up concretely:
Before you’re deep in a project or collaboration, surface how you use AI and genuinely explore how others do. Not as a disclosure (“I want you to know I use AI tools”) but as a real exchange. What are you using? What do you trust it for? Where are you still skeptical? The comfort level and sophistication in the room will vary more than you expect, and knowing that before you’re mid-deliverable matters.
This is also how you build the psychological foundation for showing your work later. If the people you’re working with have never heard you talk about AI before and you suddenly share a full chat thread, it lands differently than if you’ve already had the conversation.
This is partly an orchestration problem and I won’t pretend otherwise. There’s cutting and pasting involved. The tools haven’t caught up to the practice yet, which is itself worth naming honestly when the topic comes up.
A few approaches that help: a running document per project where you paste key threads as they happen (not retroactively, you’ll never do it retroactively), dated and labeled by what you were working on. Claude and most other major AI tools now offer conversation export, which produces a complete record you can archive. The low-tech version, a single shared document per engagement, is underrated for its simplicity.
The reason to do this isn’t just for sharing. It’s for your own reference. Being able to go back and see what you asked, what the AI produced, what you changed and why, builds a record of your judgment over time. That record is professionally valuable in ways that are hard to anticipate until you have it.
Not every thread is self-explanatory to someone who wasn’t in it. Context is everything, and raw transcripts without context are a lot to ask anyone to parse.
A sentence or two before the thread begins. A note at the moment where the direction changed. A brief flag on what you rejected and why. This is where your voice enters the artifact, and it transforms a raw AI exchange into a demonstration of judgment. The annotation is the work. It’s where you show what you saw that the AI didn’t, what you knew that the prompt couldn’t capture, and what made the third version better than the first two.
This is also where the most useful material for future reference lives. Annotations are the deep smarts layer on top of the raw exchange. They’re what makes a conversation a record.
AI makes mistakes. It conflates, confabulates, and hallucinates. It gives you the confident wrong answer with the same tone as the confident right one. It misses context that any competent person in the room would have caught.
These aren’t bugs to apologize for or hide. They’re the clearest window into what the tool actually is. AI makes mistakes in a specifically human way because it was trained on human output. Think of it as rubber duck debugging at professional scale. The AI is a duck that talks back, which is useful and occasionally misleading, which is exactly why you have to stay in the room. When you’re transparent about the errors, and even a little good-humored about them, you’re teaching the people around you something true about the technology. That’s more useful than pretending it’s a black box that either works or doesn’t.
The people who build the most durable trust around AI are usually the ones most comfortable saying: “The first version of this was wrong and here’s how I caught it.”
What I’ve described so far is an individual practice. But the same principles scale.
Teams and organizations adopting AI face a version of the same problem. The impulse to treat AI outputs as authoritative, to make the process invisible to colleagues and stakeholders, to optimize for the appearance of capability rather than its actual development, produces the same trust erosion. Just at greater scale and with less ability to course-correct.
The teams that will navigate AI adoption well are the ones that treat transparency not as a risk to manage but as a methodology. Where the process of building with AI, including the corrections, the overrides, the moments where human judgment superseded the model, is part of how the organization learns what it actually believes and values. That’s Leonard’s knowledge transfer problem at institutional scale, and the practitioners who understand both dimensions will be the ones leading those conversations.
That’s a much larger conversation. But it starts with the same Monday morning practice.
Show the conversation. Not just the output.
When you show your AI conversations, you’re not demonstrating that you needed help.
You’re demonstrating that you understand what you’re working with. AI is a pattern matcher, trained on more human-generated content than any single person could read in a thousand lifetimes. What it cannot do is know what you need. That requires judgment, context, relationship, and the kind of hard-won expertise that doesn’t reduce to pattern matching, no matter how good the patterns are.
You’re demonstrating that you know the difference between the pattern and the judgment. That you were present enough in the process to know when to push back, when to redirect, when to throw out the output entirely and start over. That you understand, precisely, what the tool can and cannot do, and that you stayed in the room to do the part it can’t.
That’s a meaningful professional signal. It says: “I am not confused about what AI is. I am not outsourcing my judgment. I am using a very powerful pattern matcher as a thinking partner, and I know which one of us is doing which job.”
That’s the work. That’s always been the work.
The tool just makes it visible now. That’s not a threat. That’s an opportunity.
Claude is a large language model developed by Anthropic. Despite having read more human-generated content than any person could consume in a thousand lifetimes, it still required significant editorial direction, at least three rejected drafts, and occasional reminders about em-dashes. The full conversation transcript is available upon request. It is longer, messier, and significantly less useful than what you just read. Which was rather the point.
Emergency Pedagogical Design: How Programming Instructors Are Scrambling to Adapt to GenAI [Radar]
ChatGPT has been publicly available for over three years now, and generative AI is woven into the tools students use every day: web search, word processors, code editors. You might assume that by now, most programming instructors have figured out how to handle it. But when my collaborators and I went looking for computing instructors who had made meaningful changes to their course materials in response to GenAI, we were surprised by how few we found. Many instructors had updated their course policies, but far fewer had actually redesigned assignments, assessments, or how they teach.
I’m Sam Lau from UC San Diego, and together with Kianoosh Boroojeni (Florida International University), Harry Keeling (Howard University), and Jenn Marroquin (Google), I’m presenting a research paper at CHI 2026 on this topic. We wanted to understand: What happens when programming instructors try to shape how students interact with GenAI tools, and what gets in their way?
To find out, we interviewed 13 undergraduate computing instructors who had gone beyond policy changes to make concrete updates to their courses: redesigning assignments, building custom tools, or overhauling assessments. We also surveyed 169 computing faculty, including a substantial proportion from minority-serving institutions (51%) and historically Black colleges and universities (17%). What we found is that instructors are doing a kind of design work that nobody trained them for, under conditions that make it very hard to succeed.
Here’s a summary of our findings:
We call this work emergency pedagogical design, drawing an analogy to the “emergency remote teaching” that instructors had to perform when COVID-19 forced courses online overnight. Just as emergency remote teaching was distinct from carefully designed online learning, emergency pedagogical design is distinct from thoughtfully integrating AI into pedagogy. Instructors are reacting in real time, with limited resources and no playbook.
We observed four defining properties. First, the work is reactive: Instructors didn’t plan for GenAI; they’re retrofitting courses that were designed before these tools existed. Second, it’s indirect: Unlike a UX designer who can change an interface, instructors can’t modify ChatGPT or Copilot, so they can only try to influence student behavior through policies, assignments, and course infrastructure. Third, instructors rely on ambient evidence like office-hour conversations and staff anecdotes rather than controlled evaluations. And fourth, instructors feel pressure to act now rather than wait for research or best practices to emerge.
Across our interviews and survey, five barriers came up again and again.
Fragmented buy-in. Most instructors we surveyed were personally open to adopting GenAI in their teaching: 81% described themselves as open or very open. But only 28% said the same about their colleagues. The result is that instructors who want to make changes often work in isolation, piloting course-specific tweaks without support or coordination from their departments.
Policy crosswinds. In the absence of top-down guidance, instructors set their own GenAI policies on a per-course basis. As one instructor put it, “From a student perspective, it’s the wild west. Some courses allow GenAI usage, some don’t.” Students have to track different rules for every class, and policies rarely distinguish between paid and unpaid tools, or between stand-alone chatbots and GenAI embedded in everyday software like code editors. 78% of surveyed instructors agreed that unequal access to paid GenAI tools could worsen disparities in learning outcomes.
Implementation challenges. Instructors wanted to shape how students used GenAI, not just whether they used it, but their options were indirect. Some made small adjustments, like permitting GenAI in specific labs. Others went further: One instructor required students to submit design documents before asking GenAI to generate code; another built a custom chatbot that offered conceptual help without writing code for students. 80% of surveyed instructors rated GenAI integration as important or very important, but only 37% reported often using GenAI tools in course activities.
Assessment misfit. Several instructors described a striking pattern: Students performed well on take-home assignments but struggled on proctored assessments. One instructor reported that a third of his 450-person class scored zero on a skill demonstration that required writing a short function from scratch, even though assignment grades had been fine. The problem wasn’t just that students were using GenAI to complete homework; it was that instructors had no reliable way to see how students were interacting with these tools day-to-day. Some instructors responded by shifting credit toward oral “stand-up” meetings and written explanations, but this created new challenges around grading consistency and staffing.
Lack of resources. This was the barrier that tied everything together. 53% of surveyed instructors said they lacked sufficient resources to implement GenAI effectively, and 62% said they didn’t have enough time given their workload. The gap was especially stark at minority-serving institutions: MSI instructors were more likely to report insufficient resources (62% vs. 43%) and heavier teaching loads (70% teaching 3+ courses per term versus 54%). All 10 respondents who taught six or more courses per term were from MSIs. Meanwhile, the interviewees who had made the most ambitious changes tended to have lighter teaching loads, external funding, or the ability to hire lots of course staff, advantages that most instructors don’t have.
One striking finding is that the instructors doing the most to improve student-AI interactions were also the most privileged in terms of time, staffing, and funding. One instructor needed over 50 course staff members to run weekly stand-up meetings for 300 students. Others spent their own money on API costs. These are not scalable models.
If only well-resourced institutions can afford to adapt their curricula, GenAI risks widening the very inequities that education is supposed to reduce. Students at under-resourced institutions could fall further behind, not because their instructors don’t care but because those instructors are teaching six courses a term with no additional support.
When surveyed instructors were asked what would help most, the top answers were faculty training and support, evidence of GenAI’s impact, and funding. What if universities, funders, and HCI researchers worked together with instructors to make emergency pedagogical design sustainable for all instructors, not just the most privileged ones?
Check out our paper here and shoot me an email (lau@ucsd.edu) if you’d like to discuss anything related to it! And if you’re an instructor yourself, we’re building free resources and curriculum over at https://www.teachcswithai.org/.
Medieval Encrypted Letter Decoded [Schneier on Security]
Sent by a Spanish diplomat. Apparently people have been working on it since it was rediscovered in 1860.
Grrl Power #1455 – Tactical tactile [Grrl Power]
A normal two-handed sword weighs 5-8 pounds (granted, there’s a very broad range of what constitutes a “two-handed sword”), whereas a bearing sword weighs 14-15 pounds and is roughly seven and a half feet long, including the handle. Not impossible to swing, of course, but probably foolish to actually wade into a battle with one, since even regular-sized weapons and moderate armor will sap someone’s endurance pretty quickly. Bearing swords are, as far as I’m aware, purely ceremonial.
At least on our non-magical Earth. The Grrl-verse clearly has demons, oni, aliens, were-dinosaurs, all kinds of things that might actually be able to wield a sword on that scale. Dabbler’s “Soulreaver” sword is technically a vierhander (man, it’s been a while since she used that), since the handle is long enough for her to really apply some leverage on it if she needs to, but I’m not sure if, mechanically, gripping a sword or a bat with four hands would really give you a lot of extra striking power, or if all those elbows would get in the way on the windup or backswing.
The sword Maxima is using was sourced from Dabbler’s treasure hoard, and clearly didn’t come from Earth, so it’s hard to say who it was originally forged for. All we can really tell about it is that whatever it’s forged from, it’s probably not much heavier than an equivalently sized steel sword (and it’s definitely not made of steel), because Sydney can lift it. IIRC, I said the other sword Max picked out was made of Ultronium and weighed about 40 lbs.
As someone with ADHD, I know I can be distracted in the middle of a sentence when someone is talking to me. It leads to a lot of “Uh, yeah…” or “Oh… what?” responses, and has definitely made people think my hearing is a lot worse than it is. And if someone is giving me directions that are more complex than “last door on the left,” they may as well just pull a series of random words from the dictionary.
On that front, Sydney is actually usually pretty focused. It’s possible her meds are wearing off for the day and it’s getting close to bedtime.
Finally, here we go! I took the suggestion that I just use an existing panel for a starting point, thinking it would save time… I guess it technically did, but a 5-character vote incentive just isn’t the way to go.
Patreon, of course, has the actual topless version.
Double res version will be posted over at Patreon. Feel free to contribute as much as you like.
Mike Gabriel: KVM Support inside LXC Containers [updated] [Planet Debian]

Yesterday, I had to add support for running KVM virtual machines inside an LXC container. More as a reminder to myself, in case I ever have to do this again, here is the simple recipe:
Enable lxc.autodev and register a hook script to run after the initial /dev creation (updated 20260428: lxc.cgroup2.* instead of lxc.cgroup.*):
[...]
# Auto-create /dev nodes and add native KVM support to the LXC container
lxc.autodev = 1
lxc.hook.autodev = /var/lib/lxc/.hooks/lxc-hook.kvm-support
lxc.cgroup2.devices.allow = c 10:232 rwm
lxc.cgroup2.devices.allow = c 10:238 rwm
lxc.cgroup2.devices.allow = c 10:241 rwm
[...]
[added 20260408] On the internet, you can find a recipe that simply bind-mounts /dev/kvm from the host into the LXC container. However, this fails if the group ID of the POSIX group kvm differs between the host and the container.
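A quick way to see whether you are affected (my addition, not part of the original recipe; the CONTAINER path below is a placeholder for wherever your container's rootfs lives):
# GID of the kvm group on the host
grep '^kvm:' /etc/group
# GID of the kvm group in the container's rootfs (adjust the path)
grep '^kvm:' /var/lib/lxc/CONTAINER/rootfs/etc/group
If the third field (the numeric GID) differs, a plain bind-mount of /dev/kvm ends up owned by the wrong group inside the container.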
I placed the following script at /var/lib/lxc/.hooks/lxc-hook.kvm-support (on the LXC host!):
#!/bin/sh
# Set up native KVM support in the LXC container: create the device
# nodes inside the container's /dev and hand them to its kvm group.
# /dev/kvm is char device 10:232
mknod -m 0660 "${LXC_ROOTFS_MOUNT}/dev/kvm" c 10 232
chown :kvm "${LXC_ROOTFS_MOUNT}/dev/kvm"
# /dev/vhost-net is char device 10:238
mknod -m 0660 "${LXC_ROOTFS_MOUNT}/dev/vhost-net" c 10 238
chown :kvm "${LXC_ROOTFS_MOUNT}/dev/vhost-net"
# /dev/vhost-vsock is char device 10:241
mknod -m 0660 "${LXC_ROOTFS_MOUNT}/dev/vhost-vsock" c 10 241
chown :kvm "${LXC_ROOTFS_MOUNT}/dev/vhost-vsock"
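Once the container is (re)started, a minimal check from inside it (my addition, not part of the recipe) is to confirm that the device nodes exist with the expected major/minor numbers and group ownership:
ls -l /dev/kvm /dev/vhost-net /dev/vhost-vsock
# expected output, roughly:
# crw-rw---- 1 root kvm 10, 232 ... /dev/kvm
# crw-rw---- 1 root kvm 10, 238 ... /dev/vhost-net
# crw-rw---- 1 root kvm 10, 241 ... /dev/vhost-vsock
A user in the container's kvm group should then be able to start an accelerated guest, e.g. with qemu-system-x86_64 -enable-kvm (the rest of the invocation depends on your setup).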
In terms of cost, serving a small ramekin of toasted pistachio nuts is a tiny fraction of what an airline spends transporting someone in first class.
In fact, it’s such a relatively small expense that it’s easy to simply avoid it. Send the money to the bottom line and focus on the parts that are actually worth paying for.
Gratuitous bonuses send signals.
They tell the customer that you have the resources and confidence to pay attention to the little things.
They help distinguish extraordinary items from ordinary ones (after all, the folks in coach show up at the arrivals gate at exactly the same time).
And they deliver a story of status, one that’s internalized and often shared.
I’ve never seen a product or service that couldn’t be improved with metaphorical warm pistachios.
Pass the nuts.
Pluralistic: The enshittification multiverse (27 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]

It's official: you have my consent and enthusiastic blessing to apply "enshittification" to things that aren't digital platforms! Semantic drift is good, actually:
https://pluralistic.net/2024/10/14/pearl-clutching/#this-toilet-has-no-central-nervous-system
With that out of the way, let's talk about how enshittification can be usefully applied to gambits that worsen something in order to shift value from the users of that thing to the person doing the worsening.
Here's the crux: in life, there are many zero-sum situations in which others' pain is your profit. The most basic example of this is profit margins: as your profit margin climbs, so do the prices paid by others. The more money a customer gives you for whatever you're selling, the less money that customer has to spend on other things they want.
This is the fatal flaw in the economist's justification for surveillance pricing (when the price you're quoted is based on surveillance data about the urgency of your needs and your ability to pay): a seller who commands higher prices from a buyer deprives other sellers of that buyer's money.
The airline that knows you can't miss a funeral and also knows how much purchasing power is available on your credit card can charge you every cent you can afford – but that means that the coffee shop owner who normally sells you a latte in the morning will lose out on your business for months while you dig yourself out of that hole.
Tim Wu has a good example of this: imagine a world in which electricity utilities were unregulated and got to charge "market rates" for their products. Prior to the current wave of cheap, efficient solar, electrical power was a "natural monopoly." In nearly every circumstance, a given person would end up with just one source of power, and life without power was nearly unimaginable. In that situation, the power company's "rational" decision would be to charge you everything you could afford for the least electricity you could survive on: enough to keep your fridge and a few lights on. That means that you would be deprived of the value of, say, a clock radio and a coffee-maker, and the manufacturers of the clock radio and the coffee-maker would likewise suffer the loss of your business.
So the "monopoly" part is key to this story. The more alternatives you have, the harder it is to squeeze you on prices. Airport concessionaires can charge $12 for a Coke on the "clean" side of a TSA checkpoint because realistically you can't leave the airport and get a Coke elsewhere – and if you do, you can't bring it through the checkpoint.
Any source of lock-in becomes an invitation to shift value away from your customers and suppliers to yourself. High "switching costs" are always a precondition for enshittification – otherwise the people you're trying to enshittify will simply take their business elsewhere:
https://www.eff.org/deeplinks/2021/08/facebooks-secret-war-switching-costs
That's why market concentration is so central to the enshittification story: when the number of competitors in a sector dwindles to a cartel (or a duopoly or a monopoly) companies find it easy to fix prices so there's no point in shopping around, and they can capture their regulators and harness the power of the state to block other companies from entering the market with a better deal:
https://pluralistic.net/2023/02/05/small-government/
Now that we understand the role that switching costs, regulatory capture, and market concentration play in enshittification, let's put them together to propose a framework for applying enshittification to things other than digital platforms:
Enshittification happens when someone sets out to reduce your choices, and then uses that lock-in to make things worse for you in order to make things better for themself.
Note that this definition requires a degree of intent. Enshittification isn't just bargaining hard when you find yourself in a position of strength. It's what happens when you set out to systematically weaken other people's bargaining position in anticipation of a future opportunity to fuck them over in order to improve your own situation.
So if the business lobby bribes Republican state legislators to pass "right to work" laws that make it nearly impossible for workers to unionize, and then the businesses involved worsen their workers' pay and conditions, we can call that enshittification. If they can bind workers to noncompete "agreements" that make it illegal for the cashier at Wendy's to get $0.25/h more at the McDonald's, that's even more enshittifying:
https://pluralistic.net/2025/11/10/zero-sum-zero-hours/#that-sounds-like-a-you-problem
Or if shitty men lobby to end anti-discrimination laws (making it much harder for a single woman to survive on her paycheck) and to end no-fault divorce (to make it much harder for a woman to leave the husband she marries to survive in a world where it's legal to discriminate against her in the workplace), in anticipation of being able to be a shitty husband without losing their wives, they are enshittifying marriage (applying this to the effort to kill the concept of "marital rape" is left as an exercise for the reader).
This can also be applied to politics. Restrictions on immigration and out-migration are both preludes to state enshittification, since a population that can't leave for another state will, on average, put up with more abuse from their political classes without leaving. Tying your work visa to your employer is very enshittification-friendly:
One of the questions I get most frequently is "what about AI and enshittification?" This is a complicated question! Obviously, AI is very enshittification-prone: as "black boxes" that do not produce reliable, deterministic outputs, AI products have a lot of intrinsic cover for their enshittifying behavior.
If you ask a chatbot to recommend a product and it steers you toward an inferior option that generates a higher commission for the company, who can say whether that was the chatbot cheating, or if it was a "hallucination?" Likewise, if you ask a chatbot to solve your problem and it does so in an inefficient way that burns a zillion tokens (which you have to pay for), is that the chatbot malfunctioning, or is that price-gouging?
https://pluralistic.net/2025/08/16/jackpot/#salience-bias
Beyond this, AI is very useful for plain old enshittification. Surveillance pricing – changing prices or wages based on the other person's desperation and ability to pay – is something AI is very good at:
https://pluralistic.net/2026/01/21/cod-marxism/#wannamaker-slain
And AI companies can enshittify their products in all the traditional ways: after a customer integrates AI in their lives and businesses in ways that are hard to escape, the AI company can raise prices, insert ads, and route queries to cheaper models that cost less to run and produce worse outputs.
But here's where there's a critical difference between enshittifying AI and enshittifying a profitable tech business like app stores or search engines. AI is the money-losingest project the human race has ever attempted. At $1.4 trillion and counting, the AI companies and their "frontier models" are so deep in the red that I can't see any way that any of these firms will survive:
https://pluralistic.net/2026/04/16/pascals-wager/#doomer-challenge
So, on the one hand, as these companies find themselves ever-more cash-strapped, they will be severely tempted to enshittify their products. But on the other hand, if these companies are doomed no matter what they do, then the enshittification will take care of itself when they go bankrupt.

The New Credit Union Model: First Expand Members’ Economic Freedom– Then Become their Oppressor https://chipfilson.com/2026/04/the-new-credit-union-model-first-expand-members-economic-freedom-then-become-their-oppressor/
The case for lunar socialism https://www.lukewsavage.com/p/the-case-for-lunar-socialism
The Reverse Centaur's Guide to Life After AI (Signed Edition) https://uk.bookshop.org/p/books/the-reverse-centaur-s-guide-to-life-after-ai-signed-edition-how-to-think-about-artificial-intelligence-before-it-s-too-late-cory-doctorow/bb87965fc9cc08b9?ean=9781472641991
Slightly Drunk on Wonder https://patrickcostello.substack.com/p/new-book-in-the-works
#25yrsago Jakob Nielsen on reputation managers https://www.nngroup.com/articles/reputation-managers-are-happening/
#25yrsago EFF's sharing friendly music license https://web.archive.org/web/20010429045301/https://www.eff.org/IP/Open_licenses/20010421_eff_oal_pr.html
#25yrsago Speedle: what links are forwarded most online? https://web.archive.org/web/20010401084047/http://www.speedle.com/
#20yrsago RIP Jane Jacobs, urban activist https://web.archive.org/web/20061009063708/http://www.canada.com/topics/news/story.html?id=fe1de18f-6b6e-473d-b0cb-0cc422dcf661&k=25935
#20yrsago Why fan fiction is so important https://nielsenhayden.com/makinglight/archives/007464.html#007464
#20yrsago California got its name from fanfic https://nielsenhayden.com/makinglight/archives/007464.html#122035
#20yrsago DMCA revision proposal will jail Americans for “attempting” infringement https://web.archive.org/web/20060502093524/https://ipaction.org/blog/2006/04/bill-hollywood-cartels-dont-want-you_24.html
#20yrsago Vista’s endless parade of warnings won’t create security https://www.schneier.com/blog/archives/2006/04/microsoft_vista.html
#15yrsago Passover poem about robots: “When We Were Robots in Egypt” https://reactormag.com/when-we-were-robots-in-egypt/
#15yrsago Naipaul’s rules for beginning writers https://web.archive.org/web/20110508152004/http://www.indiauncut.com/iublog/article/vs-naipauls-advice-to-writers-rules-for-beginners/
#15yrsago Rules for golfing during the blitz https://directorblue.blogspot.com/2011/04/stiff-upper-lip.html
#15yrsago New Zealand’s rammed-through copyright law includes mass warrantless surveillance and publication of accused’s browsing habits https://www.stuff.co.nz/technology/digital-living/4922854/Copyright-change-about-more-than-idle-threats
#15yrsago State Dept adding intrusive, semi-impossible questionnaire for US passport applications https://web.archive.org/web/20110427025422/https://www.consumertraveler.com/today/state-dept-wants-to-make-it-harder-to-get-a-passport/
#10yrsago A Burglar’s Guide to the City: burglary as architectural criticism https://memex.craphound.com/2016/04/25/a-burglars-guide-to-the-city-burglary-as-architectural-criticism/
#10yrsago EFF to FDA: the DMCA turns medical implants into time-bombs https://www.eff.org/files/2016/04/22/electronic_frontier_foundation_comments_cybersecurity_in_medical_devices_.pdf
#10yrsago James Clapper: Snowden accelerated cryptography adoption by 7 years https://web.archive.org/web/20160425161451/https://theintercept.com/2016/04/25/spy-chief-complains-that-edward-snowden-sped-up-spread-of-encryption-by-7-years/
#10yrsago Australian MP sets river on fire https://web.archive.org/web/20170518083229/https://www.yahoo.com/news/australian-politician-sets-river-fire-protest-fracking-064640159.html
#10yrsago Fantasy accounting: how the biggest companies in America turn real losses into paper profits https://www.nytimes.com/2016/04/24/business/fantasy-math-is-helping-companies-spin-losses-into-profits.html
#10yrsago Leading Republicans send letters in support of Dennis Hastert, pedophile https://www.chicagotribune.com/2016/04/22/more-than-40-letters-in-support-of-hastert-made-public-before-sentencing/
#5yrsago Guess who's doing a usury in Iowa https://pluralistic.net/2021/04/24/peloton-usury/#going-nowhere-fast
#1yrago Every complex ecosystem has parasites https://pluralistic.net/2025/04/24/hermit-kingdom/#simpler-times

NYC: Techidemic with Sarah Jeong, Tochi Onyebuchi and Alia Dastagir (PEN World Voices), Apr 30
https://worldvoices.pen.org/event/techidemic/
Barcelona: The internet doesn't have to be a garbage dump (Global Digital Rights Forum), May 13
https://encuentroderechosdigitales.com/en/speakers//
Berlin: Re:publica, May 18-20
https://re-publica.com/de/news/rp26-sprecher-cory-doctorow
Berlin: Enshittification at Otherland Books, May 19
https://www.otherland-berlin.de/de/event-details/cory-doctorow.html
Hay-on-Wye: HowTheLightGetsIn, May 22-25
https://howthelightgetsin.org/festivals/hay/big-ideas-2
SXSW London, Jun 2
https://www.sxswlondon.com/session/how-big-tech-broke-the-internet-b3c4a901
NYC: The Reverse Centaur's Guide to Life After AI with Jonathan Coulton (The Strand), Jun 24
https://www.strandbooks.com/cory-doctorow-the-reverse-centaur-s-guide-to-life-after-ai.html
When Do Platforms Stop Innovating and Start Extracting? (InnovEU)
https://www.youtube.com/watch?v=cccDR0YaMt8
Pete "Mayor" Buttigieg (No Gods No Mayors)
https://www.patreon.com/posts/pete-mayor-with-155614612
The internet is getting worse (CBC The National)
https://youtu.be/dCVUCdg3Uqc?si=FMcA0EI_Mi13Lw-P
Do you feel screwed over by big tech? (Ontario Today)
https://www.cbc.ca/listen/live-radio/1-45-ontario-today/clip/16203024-do-feel-screwed-big-tech
"Enshittification: Why Everything Suddenly Got Worse and What to
Do About It," Farrar, Straus, Giroux, October 7 2025
https://us.macmillan.com/books/9780374619329/enshittification/
"Picks and Shovels": a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books (US), Head of Zeus (UK), February 2025 (https://us.macmillan.com/books/9781250865908/picksandshovels).
"The Bezzle": a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (thebezzle.org).
"The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org).
"The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
"Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com.
"Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
"Enshittification, Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), Firstsecond, 2026
"The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027
"Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027
"The Memex Method," Farrar, Straus, Giroux, 2027
Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.
"The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.
A Little Brother short story about DIY insulin PLANNING

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Bluesky (no ads, possible tracking and data-collection):
https://bsky.app/profile/doctorow.pluralistic.net
Medium (no ads, paywalled):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla
READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.
ISSN: 3066-764X
New Comic: Microboned
Girl Genius for Monday, April 27, 2026 [Girl Genius]
The Girl Genius comic for Monday, April 27, 2026 has been posted.
Waking Up, p13 [Ctrl+Alt+Del Comic]