Sunday, 03 May

10:07

Just like me, but… [Seth's Blog]

The actor, artist, mathematician, pianist, speaker, leader, tech nerd: Just like me, but talented.

I’m not so sure.

It might be more accurate to say “just like me, but dedicated.”

The first approach lets us off the hook.

The second approach opens the door to possibility.

09:14

Get the Money out of Politics [George Monbiot]

There’s a really simple way of ensuring that politics belongs to the people, not to the ultra-rich.

By George Monbiot, published in the Guardian 30th April 2026

How do we know whether political funding is corrupt? Mostly, we don’t. A plutocrat delivers a sack of cash to a political party. A few weeks later, it announces a policy that happens to favour the donor’s business. Are the events linked? We might suspect it; we cannot prove it. But the suspicion itself is corrosive and demoralising.

The current funding system, perhaps more than any other factor, turns us away from politics, breeding disillusionment, alienation and cynicism. A survey by the Electoral Commission last year found that only 18% of respondents believed spending and funding are transparent. A government survey in December discovered that 87% of people are “concerned about the possibility of corruption” among politicians. A further survey concluded that political donors are believed to wield the most influence of any elite faction. Disillusionment with politics drives people into the arms of the extreme right. This is paradoxical, as the extreme right tends to be highly receptive to the ultra-rich.

I’m prompted to write this column by Tom Burgis’s powerful investigation for the Guardian into Reform UK’s relationship with Christopher Harborne, who is based in Thailand. Remarkably, Harborne has provided about two-thirds of all Reform’s donations since its foundation: more than £22m altogether. The rules in Britain limit the amount a party can spend in an election year, but set no cap on the proportion a single funder can provide. In theory, one person could supply its entire budget. At what point do we decide that a political party is, in effect, owned by a donor?

I can’t prove that Harborne’s money has bought special favours from Reform, and make no suggestion of illegality. But there is also no way of proving that this funding is not connected to Nigel Farage’s enthusiasm for cryptocurrency, which appears to be Harborne’s principal source of wealth. The not-knowing is just as corrosive as the knowing.

Like the Tories, Reform has also taken lavish funding from very rich people who are hostile to climate action. Both parties now evince the same hostility. Which came first, the hostility or the funding? Does it matter? Whether a party changes its policy in response to donations or attracts big donors because of its policy, it’s equally damaging to democratic trust.

The same applies to Labour’s relationship with City donors, which might help explain its newfound enthusiasm for deregulating finance, despite the warnings of 2008. As Transparency International has documented, political parties in the UK “are increasingly becoming dependent on a small number of very wealthy donors”. “Dependent on” can easily mean “beholden to”. In very few cases has corruption been demonstrated. But that’s not the point. The problem isn’t that such relationships are illegal. The problem is that they are not.

The trust crisis was exacerbated by the Conservatives, who, without providing a coherent rationale, raised political spending caps and handcuffed the regulators. As the admirable Spotlight on Corruption has discovered, the Electoral Commission’s investigations have declined by 89% since 2019, while the police, without a dedicated unit and clear powers, do almost nothing. No one has ever gone to prison in Britain for breaching electoral finance laws. The highest criminal fine yet levied is a pathetic £6,000. The regulator’s budget in this country is about £1 per voter. In Australia it’s £24.

The higher caps set by the Tories triggered an even more intense scramble for private money: our representatives now often seem to spend more time soliciting funds than soliciting votes. Regulatory corrosion has made it even harder to spot the difference between a “permissible” donor and an “impermissible” one, and to stop foreign agents infiltrating our politics.

The representation of the people bill seeks to address this crisis. But to read the relevant sections (58-63) is to be struck by their extreme complexity and obvious loopholes. In response to the Rycroft review on foreign interference, the government has decided to cap annual funding from voters living abroad at £100,000 each, and stop donations being made in cryptocurrency. But how can anyone be sure that a billionaire based abroad isn’t channelling money through a resident, or an untraceable crypto payment isn’t turned into sterling before it lands in a party’s account? Continued regulatory chaos and public distrust are locked in.

I believe that any attempt to distinguish between “good donors” and bad, resident and foreign, is futile. Any major donor is a bad donor, as their economic power undermines democracy. Given the transnational nature of capital, distinctions based on residence become meaningless. And what’s to stop an AI program splitting a big donation into a thousand small ones that don’t need to be reported at all?

There’s a simple way of sorting all this out. It works as follows. The only money a party can receive is a standard fee (say £25) for membership. The government then matches that fee on a fixed multiple. For instance, if you have 100,000 members each paying £25, and the multiple is three, your annual budget is £10m. And that’s it: no other sources permitted. The parties would agree between themselves, with public input (perhaps a citizens’ assembly), on what the membership fee and multiple should be.
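The arithmetic of the scheme is easy to check. A minimal sketch using the column's own illustrative figures (the fee, multiple and member count are examples, not fixed proposals):

```python
# Illustrative sketch of the matched-membership funding model described
# above: a party's only income is membership fees, which the state then
# matches at a fixed multiple.

def party_budget(members: int, fee: float = 25.0, multiple: int = 3) -> float:
    """Annual budget: membership income plus the state match."""
    membership_income = members * fee
    state_match = membership_income * multiple
    return membership_income + state_match

# The column's worked example: 100,000 members at £25 with a multiple of
# three gives £2.5m in fees plus a £7.5m match.
print(party_budget(100_000))  # → 10000000.0, i.e. £10m
```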

At a stroke, this sweeps away all the complexities of permissible and non-permissible donors, residence requirements, currency types, ultimate origins and spending caps. Instead of raising money, politicians would spend their time raising membership: reconnecting with the public and broadening their base. We would become equal political citizens, and our system would be transparent and intelligible. It would belong to us, not the billionaires.

The cost to the exchequer? Perhaps between £20m and £50m a year. The costs of the current system are incalculable, as the entire state is harnessed to it, creating endless dysfunction.

It doesn’t solve every aspect of billionaire influence: for instance, it wouldn’t have stopped Nigel Farage taking another £5m, in this case for his own use, from Harborne before he became an MP. But this simple measure would, I believe, do more than any other to give politics back to the people.

Democracy demands that we eliminate not only the dodgiest and most obscure sources of donor money, but all of it.

www.monbiot.com

03:49

Link [Scripting News]

Knicks will play the Sixers in round 2 of the playoffs starting Monday.

00:42

Link [Scripting News]

I've been teaching Claude why we favor Markdown. "We add support for Markdown editing wherever we can, because people like Markdown and they should. It makes things simple and guarantees a certain level of flexibility for their writing far beyond the standards of twitter-like systems with tiny little text boxes. If you don't really support Markdown people figure it out right away. But the character limits and stuff like that seem more technical to users. Markdown support says clearly -- you're really on the web."

00:14

Urgent: Restore gerrymandering limitations [Richard Stallman's Political Notes]

US citizens: call on Congress to restore the gerrymandering limitations that the Supreme Court just abolished.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

One question is, how can this be done? Passing the same law that the Supreme Court ruled unconstitutional would not work.

Perhaps it would work to require states to use the algorithm that finds the fairest possible set of districts.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Ban gambling on war and government policies [Richard Stallman's Political Notes]

US citizens: call on Congress to ban gambling on war and on government policies.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

United States is being murdered [Richard Stallman's Political Notes]

Rebecca Solnit: *The United States is being murdered, and it's an inside job.*

Covid publication by CDC suppressed [Richard Stallman's Political Notes]

The CDC was going to publish a study that found that Covid vaccination reduced the risk of hospitalization for Covid by half, for healthy adults. The organization's acting director, appointed by RFK jr, suppressed publication using an irrational excuse.

Deportation thug faces 14 years in prison [Richard Stallman's Political Notes]

A deportation thug faces 14 years in prison on charges of aggravated assault for using his gun to menace people in a vehicle.

Anyone who knows of the whereabouts of Gregory Donnell Morgan Jr should inform the Minnesota State Police.

Phoenix thug provoking reaction from protesting high school students [Richard Stallman's Political Notes]

A Phoenix thug went to the nearby suburb of Chandler, wearing a mask and a gun, trying to frighten/provoke some sort of reaction from protesting high school students and (he hoped) get an opportunity to arrest some.

The students were mostly too disciplined to fall into his trap, though one threw water at him and was arrested, falsely, by Chandler thugs for allegedly throwing a water bottle. (The Phoenix thug evidently expected that the Chandler cops would have the attitude of thugs, and it seems that they did.)

There is a happy ending: the Phoenix police chief rebuked that thug for endangering the community's trust in the department, and is investigating possible punishment for him. It is heartening that the chief wants his cops not to be thugs.

Global heating to ruin lagoon of Venice [Richard Stallman's Political Notes]

Sea-level rise due to accelerating global heating will ruin the lagoon of Venice before the end of this century.

More precisely, closing the flood barriers as often as will be required in a few decades will result in growth of algae, which will putrefy.

Global heating will ruin large parts of the land and sea. We can't address the problem by treating all the many symptoms — we need to curb the heating itself.

Plan to convince women to fear birth control [Richard Stallman's Political Notes]

Right-wing billionaire men have planned a multi-step campaign to convince young women that they should fear birth control, fear to vote, and let men have power over them.

The men admit this, but the influencers they promote don't admit they are part of such a campaign.

Increase in US extraction of fossil fuels ordered [Richard Stallman's Political Notes]

The saboteur in chief ordered a big increase in US extraction of fossil fuels in the name of "defense readiness".

The biggest threat to US national security, other than Republican officials, is the threat of global climate disaster, which this plan would increase.

Drones for public safety during No Kings rally [Richard Stallman's Political Notes]

*The Los Angeles Police Department deployed drones intended for public safety uses to surveil a No Kings rally and a protest against the Trump administration's anti-immigrant campaign, flight data reveals.*

Plans for women to be safe walking after dark [Richard Stallman's Political Notes]

England plans various measures to help women be safe walking after dark.

When installing measures to protect people from one injustice, it is important to avoid exacerbating another injustice. Any additional TV cameras must be disconnected from networks so that they do not extend automated surveillance of everyone. In other words, security cameras, rather than surveillance cameras.

The policy of letting passengers get off buses between stops at night is a good idea, especially where stops are far apart, but for equality's sake it should extend to all passengers.

I hope the added streetlights will be controlled by motion detectors, so that they don't wipe out small nocturnal animals as an unintended consequence.

Columbia U allowed deportation thugs to grab students [Richard Stallman's Political Notes]

Students at Columbia U say that the university chose to allow deportation thugs to grab students, rather than demanding a judicial warrant.

Burning wood for power [Richard Stallman's Political Notes]

*Burning wood for power worse for climate than gas equivalent, report finds.*

That doesn't imply that the world can safely continue burning gas and oil!

Court testimonies by Indigenous Canadians 20 years ago [Richard Stallman's Political Notes]

Indigenous Canadians testified in court around 20 years ago about the abuses that they had suffered years before as children forced to live in residential schools whose aim was forced assimilation.

Now the government plans to destroy the records of that testimony, in the name of protecting the privacy of the witnesses. However, decisions about compensation for individuals are not the only reason this testimony is important.

The general story of the cruelty of those schools is now widely known, but specific testimony is what supports that general conclusion.

Billionaire Polluter in Gulf of Mexico [Richard Stallman's Political Notes]

Billionaire Polluter seeks another chance to spread oil in the Gulf of Mexico.

Republicans, as is their usual policy, decided to give them another chance. Environmentalist groups are suing to prevent it.

Appeals court approved laws to put ten commandments in every classroom [Richard Stallman's Political Notes]

One US appeals court has approved of laws that require public schools to post the ten commandments in every classroom.

It seems clear to me that this favors Judaism and Christianity over all other alternatives including Atheism. I think the court used a very narrow interpretation of "establishment of religion".

EU council of foreign ministers considered trade sanctions on Israel [Richard Stallman's Political Notes]

The EU council of foreign ministers has considered a proposal to impose trade sanctions on Israel for its government's support of atrocities against Palestinians. The motion did not pass.

Woody Guthrie songs needed [Richard Stallman's Political Notes]

Woody Guthrie's political songs are becoming necessary again in the US.

Saturday, 02 May

22:28

View From a Hotel Window, 5/2/26: Chicago, IL [Whatever]

I’m staying north of the river, which is unusual for me. Also, the parking lot you see in the photo isn’t for my hotel. But it is a parking lot! Forms were obeyed.

I’m in town because tomorrow I’m in conversation with Joe Abercrombie about his latest book The Devils, and if you’re curious to see us I believe tickets may still be available. If you’re not curious to see us, fine, I guess, we’ll just sit there staring awkwardly at each other for an hour or so, I mean, whatever, it’s fine. It’s fine.

Ironically, this weekend is the 35th reunion for the University of Chicago Class of 1991, of which I am a part, and I am missing those festivities for this, and I feel a bit of a heel about it. Sorry, Class of ’91. You know you’re awesome.

— JS

20:07

Bits from Debian: Debian welcomes the 2026 GSoC interns [Planet Debian]

GSoC logo

We are very excited to announce that Debian has been assigned seven contributors to work under mentorship on a variety of projects with us during the Google Summer of Code.

Here is a list of the projects and contributors, along with details of the tasks to be performed.


Project: Automated Debian Packaging with debianize

  • Contributor: Anurag Nayak

Deliverables of the project: Debianize is a tool that aims to automatically create Debian packages from scratch from upstream source trees. The current version works for some packages but is not reliable. This project aims to make it production-ready so that it works with most projects, improving its reliability, coverage, and integration with the broader ecosystem, along with other enhancements.


Project: Linux Livepatching

  • Contributor: Aryan Karamtoth

Deliverables of the project: Linux Kernel Livepatching is the process of replacing functions in the kernel code affected by CVEs with the patch-applied functions during system runtime. It's basically a method to apply security kernel patches to a running system.


Project: DebNet: Visualising the Bus Factor – Graph Analysis of Debian's Infrastructure

  • Contributor: Fabio Ruhland

Deliverables of the project: DebNet models the Debian archive as a graph to identify critical packages maintained by too few people. Using data from the Ultimate Debian Database (UDD), it builds a package dependency graph and a maintainer-package graph to compute practical metrics like the Bus Factor, Fragility Score, and Dependency Impact for every source package.
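The Bus Factor idea above can be sketched in a few lines. This is a hypothetical toy illustration, not DebNet's actual implementation: a real run would build its graphs from UDD, and the package and maintainer names here are invented.

```python
# Toy sketch of a Bus Factor metric over a maintainer-package mapping.
# All package and maintainer names are invented for illustration.

maintainers = {
    "libfoo": {"alice"},
    "barutils": {"alice", "bob"},
    "bazd": {"alice", "bob", "carol"},
}

def bus_factor(pkg: str) -> int:
    # One common reading: the smallest number of maintainers whose loss
    # leaves the package unmaintained. With a per-package maintainer
    # set, that is simply the size of the set.
    return len(maintainers[pkg])

def critical_packages(threshold: int = 1) -> list[str]:
    # Packages at or below the threshold are the fragile ones a tool
    # like DebNet aims to surface.
    return [pkg for pkg, people in maintainers.items()
            if len(people) <= threshold]

print(critical_packages())  # toy data: only "libfoo" has one maintainer
```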


Project: Attack of the Clones: Fight Back Using Code Duplication Detection From Security Patches

  • Contributor: Gajendra Nath Soren

Deliverables of the project: This project aims to detect vulnerable code clones in the Debian archive by automatically extracting signatures from security patches. Using a two-signal approach that separates vulnerable patterns from fix patterns, the system generates high-specificity queries to search the entire archive via Debian CodeSearch.


Project: Debusine: debuginfod server

  • Contributor: Jugal59

Deliverables of the project: This project implements a debuginfod-compatible server within Debusine to provide automated debug symbol resolution for Debian developers.


Project: Debian-LSP: Improve File Format Support

  • Contributor: Lucas Ly Ba

Deliverables of the project: The Debian LSP Language Server currently provides only basic features—field completion, parse-error diagnostics, and simple quick fixes—leaving Debian maintainers without the rich IDE experience available in other ecosystems. This project will broaden the server's file format support to close that gap.


Project: Debusine: live log streaming

  • Contributor: mo-ashraf

Deliverables of the project: Debusine currently only shows task logs after a task has fully completed. This means developers working with long-running jobs (such as package builds or test pipelines) have no way to monitor progress in real time or catch failures early. This project adds live log streaming to Debusine.
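The follow loop at the heart of such a feature can be sketched briefly. This is only an illustration assuming plain file-backed logs and polling; Debusine's actual transport (WebSockets, chunked HTTP, or otherwise) isn't specified in the announcement.

```python
# Minimal `tail -f`-style follow loop: the core of streaming a log that
# another process is still writing. Transport and framing are out of
# scope; this only shows how new lines are picked up as they appear.
import time

def follow(path: str, poll_interval: float = 0.5):
    """Yield complete lines appended to a log file as they arrive."""
    with open(path) as fh:
        buf = ""
        while True:
            chunk = fh.readline()
            if not chunk:
                # Nothing new yet; wait for the writer to catch up.
                time.sleep(poll_interval)
                continue
            buf += chunk
            if buf.endswith("\n"):
                # Only emit once the line is complete, so a half-written
                # line is never delivered to the viewer.
                yield buf.rstrip("\n")
                buf = ""

# Hypothetical usage (path is an example, and the loop runs until killed):
# for line in follow("/var/log/build.log"):
#     print(line)
```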


Congratulations and welcome to all the contributors!

The Google Summer of Code program is possible in Debian thanks to the efforts of Debian Developers and Debian Contributors that dedicate part of their free time to mentor contributors and outreach tasks.

Join us and help extend Debian! You can follow the contributors' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or reach out to the individual projects' team mailing lists.

17:07

Link [Scripting News]

Something weird happens as you get older, you walk into a room and see a friend but at first you don't get that this is your friend. Instead you see an old man or lady. Your attention goes away because like everyone you are programmed not to look at old people. Then you instantly realize this is your friend. You put on the virtual colored glasses that let you see them as you remember them, instead of what's there today.

16:21

More and better news for WordPress [Scripting News]

Here's a single-page site for WordPress news.

And here's the OPML subscription list.

You're free to import that OPML file into any feed reader.

I'd like to work with others to help get more good sources flowing through that list. The better the news delivery system, the more news sources we'll get. It's a chicken and egg thing, a bootstrap. People use Slack or Twitter to keep track of WordPress which is already a great idea-sharing network. Let's start using the tools we make to make the news we need.

Let's get more news sites on that list. There's a lot of news we're not getting over the web.

Comments, questions, suggestions here.

PS: I had a much longer post here earlier today, but factored it down to the basics.

15:35

It's time [Scripting News]

It's time to do whatever you were sent here to do.

13:07

Pluralistic: The prehistory of the Democratic Nuremberg Caucus (02 May 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



A post-war 'denazification' bonfire featuring several Nazi flags. It has been hand-tinted. There is a smouldering MAGA hat amidst the coals.

The prehistory of the Democratic Nuremberg Caucus (permalink)

Comrade Trump continues his unbroken streak of destroying the American empire's grip on the world, hastening the renewables transition, de-dollarizing global trade, and killing the world's suicidal habit of entrusting its digital life to America's defective, enshittified tech exports:

https://pluralistic.net/2026/04/20/praxis/#acceleration

But Comrade Trump's ambitious praxis knows no bounds. Now, he's helping to remake the Democratic Party as a muscular opposition with a serious commitment to workers' interests over billionaires. It's not merely that Trump has empowered the primary campaigns of leftist Democrats facing down corporate, AIPAC-backed sellouts:

https://prospect.org/2026/04/30/palestine-super-pac-new-jersey-12-district-adam-hamawy/

He's also stiffening normie sellout Democrats' spines, forcing them to confront the stark choice between socialism and barbarism! And Dem leaders don't come more normie sellout than Cory "Big Pharma" Booker, a disgrace to Corys everywhere:

https://web.archive.org/web/20170112224531/https://theintercept.com/2017/01/12/cory-booker-joins-senate-republicans-to-kill-measure-to-import-cheaper-medicine-from-canada/

Nevertheless, that very same (lesser) Cory has introduced legislation to unwind every illegal, corrupt merger that the Trump administration has waved through:

https://www.booker.senate.gov/news/press/booker-introduces-legislation-to-review-and-unwind-anticompetitive-corporate-mergers-approved-under-second-trump-administration

Under the Correcting Lapsed Enforcement in Antitrust Norms for Mergers (CLEAN Mergers) Act, any company that was acquired in a deal worth $10b or more will have to break up with its merger partner if it turns out that these mergers were "politically influenced." "Politically influenced" sums up every major merger under the Trump II regime:

https://pluralistic.net/2026/02/13/khanservatives/#kid-rock-eats-shit

You could be forgiven for assuming that this is just about reining in Wall Street greed, but that it isn't an especially political maneuver. That's not true: antitrust is the most consequentially political regulation (with the possible exception of regulations on elections). Every fascist power defeated in WWII relied on the backing of their national monopolists to take, hold and wield power. That's why the Marshall Plan technocrats who rewrote the laws of Europe, South Korea and Japan made sure to copy over US antitrust law onto those statute-books (that's also why the tech antitrust cases brought in Europe could be re-run in South Korea and Japan – their laws are all substantively similar, because they were harmonized with US antitrust in the 1950s):

https://pluralistic.net/2025/01/22/autocrats-of-trade/#dingo-babysitter

Fascism and monopolies go hand in hand, and smashing monopolies is key to the program of fighting fascism. After defeating fascism in the mid-20th century, the Allies oversaw a program of "denazification," starting with the Nuremberg trials:

https://en.wikipedia.org/wiki/Nuremberg_trials

Inspired by those trials, I've proposed that Congressional Dems could form a "Nuremberg Caucus" that would publicly promise sweeping plans to denazify America after Trump and his allies have been swept from power:

https://pluralistic.net/2026/02/10/miller-in-the-dock/#denazification

The centerpiece of the Nuremberg Caucus playbook is a set of ready-to-file, public indictments against Trump officials who have violated the law, the Constitution, and the rights of the people of the USA. Dems should create and maintain a docket with exhibits and witness lists that gets updated every time one of these crooks runs their big, stupid mouths on Fox News or OANN or Twitter. The Nuremberg Caucus could even set dates for the trials of officials, with judicial calendars for each federal courtroom, starting on January 21, 2029.

The idea here is to both demoralize Trump's collaborators and to stiffen the spines of the Democratic base who will have to be convinced that turning out for the coming elections, and defending them, will mean something, delivering the change and hope they've been promised since the Obama campaign, but which has never materialized.

While trials and punishment for Trump's fascist goons are at the center of the Nuremberg Caucus plan, that's not all of it. The plan also calls for publicly announcing the intention to unwind every corrupt merger that was consummated under Trump. This serves two purposes: first, it promises the electorate that the monopolists who steal from them will face consequences for their crimes; but second, it also puts investors on notice that any gains from corrupt mergers will turn into massive losses once the next administration orders these companies to unscramble the inedible omelets they're cooking up, no matter what the cost.

That's exactly what Booker's CLEAN Mergers Act – cosponsored by Elizabeth Warren (D-MA), Martin Heinrich (D-NM), Chris Murphy (D-CT), and Mazie Hirono (D-HI) – does. I don't think that Booker is listening to me, but I do think that Dems who are willing to introduce this kind of legislation can be cajoled, coerced and sweet-talked into more ambitious Nuremberg Caucus actions.

For example, there could not be a better time to announce plans to unrig the Supreme Court, which has just gutted the Voting Rights Act:

https://prospect.org/2026/05/01/turning-civil-rights-inside-out-supreme-court-voting-rights/

The Supreme Court's legitimacy has been burned to the ground, and Trump's chud justices are pissing on the ashes. Packing the court is a very good idea:

https://pluralistic.net/2020/09/20/judicial-equilibria/#pack-the-court

It's also a very popular idea:

https://pluralistic.net/2023/10/18/the-people-no/#tell-ya-what-i-want-what-i-really-really-want

Which is why I included it in the Nuremberg Caucus plan. But packing the court is just table stakes. In his latest video, Jamelle Bouie lays out a detailed plan for denazifying the Supreme Court:

https://www.youtube.com/watch?v=SRzS61buXkQ

As Bouie points out, "as long as John Roberts has his majority, nothing the left of center in this country wants to do is safe or stable…We can have democracy and self-government in this country or we can have the Supreme Court as it exists, but we cannot have both."

But packing the court – while a good place to start – isn't enough. Per Bouie, the problem isn't just the court's corruption – it's how powerful the court is. Article 3, Section 2 of the Constitution permits Congress to "jurisdiction strip" the Supremes: Congress can pass a law removing cases involving voting rights and racial discrimination from the Supreme Court's jurisdiction. Congress can impose ethics reforms on the court, banning justices from taking bribes (I can't believe I have to type these words).

Congress can turn the Supreme Court's current building into a museum and move the Supreme Court back into its original home in Congress's basement. Congress can take away the Supremes' ability to select their clerks or which cases they hear. All the Constitution says is that there must be a Supreme Court, and it must adjudicate "disputes between states, disputes involving ambassadors, impeachments, that kind of thing." Everything else is up to Congress to grant or withhold from SCOTUS.

This is very good Nuremberg Caucus stuff. It would be an amazing campaign promise for anyone primarying a shitty normie Dem in the midterms: "Vote for me, and I will be part of the legislative movement to make the Supreme Court weaker and thus more accountable."

Now, as much as I like this, I'm really holding out for a Dem to go with my big ICE-melting idea: promising million-dollar bounties for ICE officers who rat out their buddies for violating the law:

ICE agents are signing up with the promise of $50k hiring bonuses and $60k in student debt cancellation. That's peanuts. The Nuremberg Caucus could announce a Crimestoppers-style program with $1m bounties for any ICE officer who a) is themselves innocent of any human rights violations, and; b) provides evidence leading to the conviction of another ICE officer for committing human rights violations. That would certainly improve morale for (some) ICE officers.

As I wrote in February:

Critics of this plan will say that this will force Trump officials to try to steal the next election in order to avoid consequences for their actions. This is certainly true: confidence in a "peaceful transfer of power" is the bedrock of any kind of fair election.

But this bunch have already repeatedly signaled that they intend to steal the midterms and the next general election:

https://www.nj.com/politics/2026/02/top-senate-republican-rejects-trumps-shocking-election-plan-i-think-thats-a-constitutional-issue.html

ICE agents are straight up telling people that ICE is on the streets to arrest people in Democratic-leaning states ("The more people that you lose in Minnesota, you then lose a voting right to stay blue"):

https://unicornriot.ninja/2026/federal-agent-in-coon-rapids-the-more-people-that-you-lose-in-minnesota-you-then-lose-a-voting-right-to-stay-blue/

The only path to fair elections – and saving America – lies through mobilizing and energizing hundreds of millions of Americans. They are ready. They are begging for leadership. They want an electoral choice, something better than a return to the pre-Trump status quo. If you want giant crowds at every polling place, rising up against ICE and DHS voter-suppression, then you have to promise people that their vote will mean something.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Implementing TCP over pigeon https://blug.linux.no/rfc1149/

#20yrsago Barenaked Ladies frontman on copyright reform https://web.archive.org/web/20060505032617/http://www.canada.com/nationalpost/news/issuesideas/story.html?id=3367a219-f395-4161-a9b9-95256c613824

#20yrsago Stephen Colbert kills at White House press corps dinner https://web.archive.org/web/20060501114431/http://www.editorandpublisher.com/eandp/news/article_display.jsp?vnu_content_id=1002425363

#20yrsago Cinema owners try to lure us back to the movies https://web.archive.org/web/20060620140301/https://www.siliconvalley.com/mld/mercurynews/news/local/states/california/peninsula/14457900.htm?source=rss&channel=mercurynews_peninsula

#20yrsago Smithsonian’s sellout to Showtime slammed by Congress https://www.washingtonpost.com/wp-dyn/content/article/2006/04/28/AR2006042802213_2.html

#20yrsago Wallaby milk: proof against antibiotic-resistant bacteria https://web.archive.org/web/20060429102138/http://news.scotsman.com/scitech.cfm?id=593632006

#20yrsago Documentary on radical free school https://www.youtube.com/watch?v=rgpuSo-GSfw

#15yrsago Facebook celebrates royal wedding by nuking 50 protest groups https://anticutsspace.wordpress.com/2011/04/29/political-facebook-groups-deleted-on-royal-wedding-day/

#15yrsago Jay Rosen: What I Think I Know About Journalism https://pressthink.org/2011/04/what-i-think-i-know-about-journalism/

#15yrsago Companies should release the source code for discontinued products https://makezine.com/article/maker-news/if-youre-going-to-kill-it-open-source-it/

#15yrsago Scratch-built “freedom press” https://makezine.com/article/craft/freedom_press/

#15yrsago HOWTO quilt a 3D Mad Tea Party set https://www.instructables.com/Quilted-Mad-Tea-Party-Set/

#15yrsago Online activism works: Canada delayed US-style copyright bill in fear of activist campaign https://web.archive.org/web/20110501103056/https://www.michaelgeist.ca/content/view/5763/125/

#15yrsago Ad agency to radicals: “We own radical media(TM)” https://web.archive.org/web/20110503045909/http://radicalmediaconference.wordpress.com/2011/04/27/we-make-radical-media-you-make-adverts/

#15yrsago Troubletwisters: Garth Nix and Sean Williams’ action-packed new kids’ fantasy https://memex.craphound.com/2011/04/30/troubletwisters-garth-nix-and-sean-williams-action-packed-new-kids-fantasy/

#15yrsago RIP, Joanna Russ https://nielsenhayden.com/makinglight/archives/012974.html#547586

#5yrsago Experian doxes the world (again) https://pluralistic.net/2021/04/30/dox-the-world/#experian

#5yrsago Disney's writer wage-theft is far worse than reported https://pluralistic.net/2021/04/29/writers-must-be-paid/#pay-the-writer

#5yrsago Korea set to break the Samsung dynasty https://pluralistic.net/2021/04/29/writers-must-be-paid/#dynasties

#5yrsago What the hell is "carried interest" https://pluralistic.net/2021/04/29/writers-must-be-paid/#carried-interest

#1yrago Mike Lee and Jim Jordan want to kill the law that bans companies from cheating you https://pluralistic.net/2025/04/29/cheaters-and-liars/#caveat-emptor-brainworms

#1yrago Republicans want to force students to pay off scam college loans https://pluralistic.net/2025/04/30/trump-u/#i-think-you-know-what-the-trustees-can-do-with-their-suggestions


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, First Second, April 20, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

11:07

Tim Bradshaw: Making CLOS slot access less slow [Planet Lisp]

Access to slots in CLOS instances is often very slow. It’s probably not possible for it ever to be really fast, but the AMOP MOP does provide a way of making it, at least, less slow.

How slow is it?

Here are some benchmarks for accessing fields in objects of various kinds, using SBCL. All of these tests do something equivalent to

(defclass a ()
  ((i :initform 0 :type fixnum)))

(defclass a/no-fixnum ()
  ((i :initform 0)))

(defmethod svn ((a a) n)
  (declare (type fixnum n)
           (optimize speed (safety 0)))
  (dotimes (i n)
    (incf (the fixnum (slot-value a 'i)))))

(defmethod svn ((a a/no-fixnum) n)
  (declare (type fixnum n)
           (optimize speed (safety 0)))
  (dotimes (i n)
    (incf (the fixnum (slot-value a 'i)))))

They then call svn (or equivalent) with a large value of \(n\), do that a number of times \(m\) and then divide by \(2 \times n \times m\) to get an average time per access (incf accesses the slot twice).

For SBCL 2.6.3.178-a190d9710 on ARM64 Apple M1, seconds per access:

  • raw fixnum increment \(1.58\times 10^{-10}\), ratio \(1.0\);
  • slot access with slot-value (slot type fixnum) \(1.20\times 10^{-8}\), ratio \(76\);
  • slot access with slot-value (no slot type) \(1.22\times 10^{-8}\), ratio \(77\);
  • slot access with slot-value (single slot-value-using-class method) \(1.69\times 10^{-8}\), ratio \(107\);
  • slot access using standard-instance-access \(1.00\times 10^{-9}\), ratio \(6.4\);
  • slot access, struct (slot type fixnum) \(1.57\times 10^{-10}\), ratio \(1.0\);
  • slot access, struct (no type) \(1.58\times 10^{-10}\), ratio \(1.0\);
  • slot access, cons (car) \(1.59\times 10^{-10}\), ratio \(1.0\).

These numbers vary slightly, but this gives a good picture of what is going on. In particular you can see that slot-value within a method specialised on the class is more than 70 times slower than access for a structure slot, but if you can use standard-instance-access it is only about 6 times slower: standard-instance-access speeds things up by a factor of about 10, which changes CLOS slot access performance from laughably slow to merely pretty slow.

A macro

I’ve written a macro called with-sia-slots, which is like with-slots but uses standard-instance-access. It therefore has all the constraints imposed by that, but it is significantly faster than with-slots or slot-value. It has some overhead, as it has to dynamically compute the slot locations: this is better done outside any inner loop. This means that, for instance, you probably want to write code that looks like

(with-sia-slots (x) o
  (dotimes (i many)
    (setf x (... x ...))))

which will mean you only pay the overhead once.

The above tests don’t use with-sia-slots, as I wrote them partly to see if something like this was worth writing. However, on a current (at the time of writing) SBCL, with-sia-slots is asymptotically about 10 times faster than with-slots, as demonstrated by these tests.

Up to package names it should be portable to any CL with an AMOP-compatible MOP. It can be found in my implementation-specific hacks, linked from here.

10:49

Ben Hutchings: FOSS activity in April 2026 [Planet Debian]

10:14

Nostalgia can be fatal [Seth's Blog]

For hundreds of years, nostalgia was seen as a serious disease, with doctors across Europe scrambling for a cure. Hundreds of thousands of people died from it.

In the original understanding of the term, it was a sort of homesickness. Soldiers from Switzerland were the first to get the official diagnosis–separated from their friends, family and homes, these young men would suffer from melancholy and would waste away, sometimes fatally.

As it spread, one theory was that it afflicted people from places that were at high altitude. As more humans traveled, often under duress (for example, enslaved people kidnapped from their homes and brought by ship to the new world), the suffering increased.

It’s not hard to see how a sudden, involuntary dislocation could be debilitating. Particularly if home was a place that was insulated from sudden change and fast-moving culture.

Today, future shock is bringing a new, if milder form of the affliction. As technology, jobs and culture shift faster than ever before, it’s understandable that many are yearning for a return to an imagined past. When the future arrives uninvited, it can feel like being pulled from a comfortable village in the middle of the night.

Knowing our peers are encountering challenges with the transitions at work or at home can give us the insight to build the scaffolding they need to find their footing. And perhaps we can offer ourselves a bit of grace as well.

05:49

Urgent: Block privatization of US Postal Service [Richard Stallman's Political Notes]

US citizens: call on Congress to block privatization of the US Postal Service.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Stop weakening of coal ash protections [Richard Stallman's Political Notes]

US citizens: call on the EPA to stop trying to weaken coal ash protections.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

Urgent: Block war with Cuba [Richard Stallman's Political Notes]

US citizens: call on your congresscritter and senators to block war with Cuba, and end the humanitarian crisis there.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Acetaminophen during pregnancy [Richard Stallman's Political Notes]

Magats claim that taking acetaminophen during pregnancy can cause autism. A large empirical study has found that it doesn't.

Never trust what magats say about medicine. They don't have a commitment to making sure it is true.

Violent abuse that drives women to kill their husbands [Richard Stallman's Political Notes]

Many countries' legal systems take no cognizance of the violent abuse that drives women to kill their husbands as an act of self defense.

Skepticism toward claims about microbiome products [Richard Stallman's Political Notes]

Be skeptical about claims that specific products will improve your microbiome. Science doesn't yet know enough to predict what interventions might be helpful, so those who try to sell you products to intervene in your microbiome are likely to be exceeding actual knowledge.

Indigenous Americans face racial profiling [Richard Stallman's Political Notes]

Indigenous Americans face racial profiling by deportation thugs, who demand proof of citizenship because of their appearance.

Forests transformed from carbon sink to carbon source [Richard Stallman's Political Notes]

*Africa's forests transformed from carbon sink to carbon source, study finds.*

The same has happened to the Amazon forest and in South-East Asia. We have gone over a tipping point and are sliding down to global disaster.

Laws against new data centers [Richard Stallman's Political Notes]

Various US states are working on laws to regulate or prohibit the construction of new data centers. The constructors try to evade public regulation and even public awareness until it is too late. At that time, the data center typically has the right to demand all the electricity and water it needs, which can far exceed what is actually available in the place. That can be catastrophic.

Note how plutocratists threaten states with being labeled as "closed to new business" if they take any steps to stop businesses from wiping the floor with the public there. The public needs to learn to laugh in the face of anyone who advocates that plutocratist view.

Length of summer growing [Richard Stallman's Political Notes]

Around the world, the length of summer is growing by 6 days per decade.

But this may speed up in the future, given that global heating is accelerating.

Pressure on Republican senators by general public hatred of bully [Richard Stallman's Political Notes]

Robert Reich argues that general public hatred of the bully might pressure some Republican senators to vote to remove the bully from office.

Maybe so, but if that makes Vance president, will it really be better?

Jobs for US college graduates scarce in some fields [Richard Stallman's Political Notes]

Jobs for US college graduates have become scarce in some fields where they used to be available.

00:28

Back to the Very Very Basics [Whatever]

For reasons that are not important now, I have found myself in possession of a lightly used but still somewhat recent Asus Chromebook, of the sort that one can pick up for less than $200, with 4GB RAM, 64GB of onboard storage, a less than spectacular screen resolution, and a keyboard without backlighting, which means on this dark gray version that once the lights dim, its usefulness will be compromised for all but the most talented of touch-typers. It’s been a while since I’ve used something this basic (I’m writing this piece on it now), and inasmuch as my daily driver laptop is a reasonably specced-out M4 MacBook Air, I was curious how I would feel about stepping down from that.

Answer: I… don’t hate it? I don’t love it, to be clear, and it’s not something I would likely ever choose over using my Air. And there are some things about it which are pretty egregious, that are clearly the result of this thing clocking in at under $200, most notably a screen that would have to work to be called “washed out,” and a trackpad that feels genuinely terrible to use, especially coming from a MacBook, which has what is widely acknowledged to be the best trackpad in the world. It is as plastic as the day is long, and given the paucity of its RAM and the inevitable end of ChromeOS, this computer is so close to the line between “useful” and “e-waste” that one might as well give it a balance beam.

On the other hand, the keyboard doesn’t suck to type on; it’s a basic chiclet board but it’s nicely spaced and the keys don’t feel overly mushy. The onboard i/o puts the Air to shame: Both the Air and the Asus have two USB-C ports and a headphone jack, but the Asus throws in a USB-A port and a Mini-SD card slot as well (I don’t suspect that the USB-C ports on the Asus are Thunderbolt, but they can port out to an external display, which ain’t chicken feed). Plus the Asus webcam has a manual privacy shutter, which, frankly, is a thing every laptop with a camera should have regardless. It’s not the absolute worst! You could spend $200 on much more questionable things!

Every now and again I do the check-in with myself on what might be the bare minimum I would need, in terms of personal possessions, if less than wonderful things came to pass and I had to live in deeply reduced circumstances. And without going into great detail about the thinking process behind this, one of the things I’ve decided is that if I had an acceptable laptop, that would go a fair way toward my needs in terms of audiovisual entertainment and personal creativity. A decent laptop is a television, a radio, a window to the world and an instrument of expression.

This Asus is… not up to the task of being my acceptable laptop in this circumstance. Too limited by tech and by software, basically. I’ve been a long-time enjoyer of Chromebooks, and loved my Pixelbook from back in the day. But ChromeOS ultimately never won the argument that a thin client to the Internet was all you would ever need, and now that ChromeOS is going to be folded into Android at some nearish point, it never will. Chromebooks will go into the West as forever the “second laptop,” the one you used when you didn’t have actual work to do.

(What laptop do I think is probably the closest to my Lowest Acceptable Spec? I think at this point it’s obvious: a MacBook Neo, which has all the advantages of a Chromebook, including the price point of some mid-spec Chromebooks, and it can also run the more complex software that one would need for creative work, without being totally reliant on an online connection to do it. It’s tempting to say the Neo is overhyped at this point, except I don’t think it actually is; at $600, it basically takes a knife to the Chromebook value proposition for everything but barebones educational use. It’s not the laptop I would want — that’s my Air — but it would certainly do.)

Considering that I do have a MacBook Air, and an iPad Pro with a “Magic Keyboard,” which essentially takes care of all my laptop-ish needs, what might I use this little Chromebook for? Basically, as a guest laptop, if someone visiting needs to do something that requires a full-size keyboard or a screen larger than the one on their phone, but didn’t happen to bring their own laptop with them. And… that’s pretty much it? As I said, I don’t want to entirely discount this laptop; it’s better than I expected for less than $200, and it fulfills its own admittedly modest brief perfectly well. It’s just that I don’t know how much longer this particular brief is going to need to be fulfilled.

— JS

Friday, 01 May

23:56

Page 8 [Flipside]

Page 8 is done.

22:21

Reproducible Builds (diffoscope): diffoscope 318 released [Planet Debian]

The diffoscope maintainers are pleased to announce the release of diffoscope version 318. This version includes the following changes:

[ Chris Lamb ]
* Upload to test PyPI integration.
* Bump Standards-Version to 4.7.4.

[ Manuel Jacob ]
* Remove a misleading comment.

You can find out more by visiting the project homepage.

21:35

Developing a cross-process reader/writer lock with limited readers, part 4: Abandonment [The Old New Thing]

We’ve been building a cross-process reader/writer lock with a cap on the number of readers. We concluded our investigation last time by noting that there is a serious problem that needs to be fixed.

That serious problem is abandonment.

Suppose a process crashes while it holds a shared or exclusive lock on our cross-process reader/writer lock. Semaphores don’t have owners, so if a thread terminates while in possession of a semaphore token, that token is lost forever. For our cross-process reader/writer lock, that means that the maximum number of shared acquirers goes down by one, and exclusive acquisitions will never succeed, since they will be waiting for that last token which will never be returned.

A synchronization object that does have the concept of ownership is the mutex, so we can build our reader/writer lock out of mutexes.

The idea here is that instead of claiming semaphore tokens, we claim mutexes. This means that we need one mutex for each potential shared acquisition, plus one more to avoid the starvation problem.

The outline is

  • Shared acquisition: Claim any available token mutex.
  • Shared release: Release the claimed token mutex.
  • Exclusive acquisition: Claim all token mutexes.
  • Exclusive release: Release all token mutexes.
HANDLE sharedMutex;
HANDLE tokenMutexes[MAX_SHARED];

struct TimeoutTracker
{
    explicit TimeoutTracker(DWORD timeout)
        : m_timeout(timeout) {}

    DWORD m_start = GetTickCount();
    DWORD m_timeout;

    DWORD Wait(HANDLE h)
    {
        DWORD elapsed = GetTickCount() - m_start;
        if (elapsed > m_timeout) return WAIT_TIMEOUT;
        return WaitForSingleObject(h, m_timeout - elapsed);
    }

    DWORD WaitMultiple(DWORD count, const HANDLE* handles, BOOL waitAll)
    {
        DWORD elapsed = GetTickCount() - m_start;
        if (elapsed > m_timeout) return WAIT_TIMEOUT;
        return WaitForMultipleObjects(count, handles, waitAll, m_timeout - elapsed);
    }
};

We change the return value of the Wait method so it returns the wait result rather than a success/failure. We also add a WaitMultiple method for wrapping WaitForMultipleObjects.
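One subtlety worth noting: the tracker's elapsed-time computation stays correct even if GetTickCount wraps past zero (which it does roughly every 49.7 days), because unsigned subtraction is defined modulo 2^32. A minimal stand-alone sketch, with std::uint32_t standing in for the DWORD tick values (the names here are local stand-ins, not part of the lock):

```cpp
#include <cstdint>

// Stand-in for DWORD tick values; GetTickCount() returns a 32-bit
// millisecond counter that wraps to zero after about 49.7 days.
using Tick = std::uint32_t;

// The same computation TimeoutTracker performs: because unsigned
// subtraction is defined modulo 2^32, the elapsed time is correct
// even when the counter wrapped between the two samples.
Tick Elapsed(Tick start, Tick now)
{
    return now - start;
}
```

This is why the tracker compares the elapsed value against the timeout and reports WAIT_TIMEOUT itself, rather than ever passing a negative (i.e., enormous unsigned) remaining time to the wait functions.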

Next is a handy helper function.

int WaitResultToIndex(DWORD result)
{
    auto index = result - WAIT_OBJECT_0;
    if (index < MAX_SHARED) return static_cast<int>(index);

    index = result - WAIT_ABANDONED_0;
    if (index < MAX_SHARED) return static_cast<int>(index);

    return -1;
}

The WaitResultToIndex function takes the wait result and returns the index of the acquired mutex, or -1 if no mutex was acquired.
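To make the folding of the two result ranges concrete, here is a compilable stand-alone version of the same mapping, with the documented values of the wait constants substituted and an arbitrary example MAX_SHARED of 4 (the k-prefixed names are local stand-ins for the Windows constants, not the real headers):

```cpp
#include <cstdint>

// Documented values of the Windows wait-result constants.
constexpr std::uint32_t kWaitObject0    = 0x00000000; // WAIT_OBJECT_0
constexpr std::uint32_t kWaitAbandoned0 = 0x00000080; // WAIT_ABANDONED_0
constexpr std::uint32_t kWaitTimeout    = 0x00000102; // WAIT_TIMEOUT
constexpr std::uint32_t kMaxShared      = 4;          // example MAX_SHARED

// Same mapping as WaitResultToIndex: both "acquired mutex i" and
// "acquired abandoned mutex i" map to index i; everything else to -1.
// The subtractions are unsigned, so out-of-range results wrap to huge
// values and fail the < kMaxShared test.
int WaitResultToIndexDemo(std::uint32_t result)
{
    std::uint32_t index = result - kWaitObject0;
    if (index < kMaxShared) return static_cast<int>(index);

    index = result - kWaitAbandoned0;
    if (index < kMaxShared) return static_cast<int>(index);

    return -1;
}
```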

Notice that this code treats the abandoned state the same as the normal wait state. We are assuming that the code can recover from inconsistent data somehow. (For example, maybe the shared and exclusive accesses are to control access to a set of files, so the existing code already has to deal with file corruption.)

All that’s left is to implement the outline.

int AcquireShared()
{
    WaitForSingleObject(sharedMutex, INFINITE);

    auto result = WaitForMultipleObjects(MAX_SHARED, tokenMutexes, FALSE /* bWaitAll */, INFINITE);

    ReleaseMutex(sharedMutex);

    return WaitResultToIndex(result);
}

void ReleaseShared(int index)
{
    ReleaseMutex(tokenMutexes[index]);
}

int AcquireSharedWithTimeout(DWORD timeout)
{
    TimeoutTracker tracker(timeout);
    DWORD result = tracker.Wait(sharedMutex);
    if (result != WAIT_OBJECT_0) return -1;
    result = tracker.WaitMultiple(MAX_SHARED, tokenMutexes, FALSE /* waitAll */);
    ReleaseMutex(sharedMutex);

    return WaitResultToIndex(result);
}

void AcquireExclusive()
{
    WaitForSingleObject(sharedMutex, INFINITE);

    WaitForMultipleObjects(MAX_SHARED, tokenMutexes, TRUE /* bWaitAll */, INFINITE);

    ReleaseMutex(sharedMutex);
}

void ReleaseExclusive()
{
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        ReleaseMutex(tokenMutexes[i]);
    }
}

bool AcquireExclusiveWithTimeout(DWORD timeout)
{
    TimeoutTracker tracker(timeout);
    DWORD result = tracker.Wait(sharedMutex);
    if (result != WAIT_OBJECT_0) return false;
    result = tracker.WaitMultiple(MAX_SHARED, tokenMutexes, TRUE /* waitAll */);
    ReleaseMutex(sharedMutex);

    return result != WAIT_TIMEOUT;
}

The post Developing a cross-process reader/writer lock with limited readers, part 4: Abandonment appeared first on The Old New Thing.

21:07

Malware in Proprietary Software - Latest Additions [Planet GNU]

The initial injustice of proprietary software often leads to further injustices: malicious functionalities.

The introduction of unjust techniques in nonfree software, such as back doors, DRM, tethering, and others, has become ever more frequent. Nowadays, it is standard practice.

We at the GNU Project show examples of malware that has been introduced in a wide variety of products and dis-services people use every day, and of companies that make use of these techniques.

Here are our latest additions:

April 2026

Proprietary Obsolescence


Malware in Appliances

Eden: NHS goes to war against open source [LWN.net]

Terence Eden reports that the UK's National Health Service (NHS) is preparing to close almost all of its open-source repositories as a response to LLM tools, such as Anthropic's Mythos, becoming more sophisticated at finding security vulnerabilities. He does not, to put it mildly, agree with the decision:

The majority of code repos published by the NHS are not meaningfully affected by any advance in security scanning. They're mostly data sets, internal tools, guidance, research tools, front-end design and the like. There is nothing in them which could realistically lead to a security incident.

When I was working at NHSX during the pandemic, we were so confident of the safety and necessity of open source, we made sure the Covid Contact Tracing app was open sourced the minute it was available to the public. That was a nationally mandated app, installed on millions of phones, subject to intense scrutiny from hostile powers - and yet, despite publishing the code, architecture and documentation, the open source code caused zero security incidents.

Furthermore, this new guidance is in direct contradiction to the UK's Tech Code of Practice point 3 "Be open and use open source" which insists on code being open.

19:56

19:35

It's May, and we've been keeping busy [Planet GNU]

All four teams at the Free Software Foundation (FSF) have been working tirelessly the past four months, and we have a lot to show for it!

19:14

Joe Marshall: Echoes of the Lisp Listener [Planet Lisp]

The Lisp Machine Listener had an electric close parenthesis. When the user typed a close parenthesis, and this was the close parenthesis that finished the complete form at top level, the form would be sent to the REPL right away with no need to press enter. Here's how to get this behavior with SLY:

(defun my-sly-mrepl-electric-close-paren ()
  "Insert ')' and auto-send ONLY if we are closing a top-level Lisp form."
  (interactive)
  (let ((state (syntax-ppss)))
    (insert ")")
    ;; Safety checks:
    ;; 1. We were at depth 1 (so we are now at depth 0)
    ;; 2. We aren't in a string or comment
    ;; 3. The input actually starts with a paren (it's a form, not a sentence)
    (when (and (= (car state) 1)
               (not (nth 3 state))
               (not (nth 4 state))
               (string-match-p "^\\s-*(" 
                               (buffer-substring-no-properties (sly-mrepl--mark) (point))))
      (sly-mrepl-return))))

Another cool hack is to get the REPL to do double duty as a command line to the LLM chatbot. When you type RET in the REPL, it will check if the input is a complete lisp form. If so, it will send the form to the REPL as normal. If not, it will send the input to the chatbot. Here's how to do this:

(defun my-sly-mrepl-electric-return ()
  "Send to Lisp if it's a form/symbol, or wrap in (chat ...) if it's a sentence."
  (interactive)
  (let* ((beg (marker-position (sly-mrepl--mark)))
         (end (point-max))
         (input (buffer-substring-no-properties beg end))
         (trimmed (string-trim input)))
    (cond
     ;; If it's empty, just do a normal return
     ((string-blank-p trimmed)
      (sly-mrepl-return))
     
     ;; If it starts with a paren, quote, or hash, it's definitely a Lisp form
     ((string-match-p "^\\s-*[(#'\"]" trimmed)
      (sly-mrepl-return))
     
     ;; If it's a single word (no spaces), treat it as a symbol/form (e.g., *package*)
     ((not (string-match-p "\\s-" trimmed))
      (sly-mrepl-return))
     
     ;; Otherwise, it's a sentence. Wrap it and fire.
     (t
      (delete-region beg end)
      (insert (format "(chat %S)" trimmed))
      (sly-mrepl-return)))))

Install as follows:

;; Apply to SLY MREPL with a safety check for the mode map
(with-eval-after-load 'sly-mrepl
  (define-key sly-mrepl-mode-map (kbd "RET") 'my-sly-mrepl-electric-return)
  (define-key sly-mrepl-mode-map (kbd ")") 'my-sly-mrepl-electric-close-paren))

At The Speed of Hell [Penny Arcade]

The only time I feel bad for not having a newer console is when Housemarque drops something. With a pedigree that goes back to the Amiga, they have developed and honed their taste to Jamón Ibérico levels - I wouldn't be surprised if they let their games feed on acorns, free-range. They've perfected arcade feel, itself a kind of artisanal, out-of-time style, and now with Returnal and (from what Mork tells me) Saros, they've mastered progression as well.

18:28

Saros Thoughts [Penny Arcade]

I think Saros is a super fun game whenever it isn’t trying to tell me whatever this story is. The bullet-hell gameplay is really well done, and if that’s all the game was I would probably love it, but they have layered in this inscrutable story that I find completely uninteresting and unnecessary. Jerry and I often say that something is “too grand for chicken” when trying to explain that a thing can be simple and great without needing an extra layer of gravitas. I will keep playing it, but that is how I feel about Saros.

18:21

Urgent: Pass Farm Bill [Richard Stallman's Political Notes]

US citizens: call on your congresscritter and senators to pass a Farm Bill that helps families put food on the table.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Call senators to vote for S. J. Res. 99 [Richard Stallman's Political Notes]

US citizens: call on your senators to vote for S. J. Res. 99, which would protect authorized foreign workers who have filed for renewal of that authorization from being expelled because government agencies were late in responding.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Block corrupter's UAE bailout [Richard Stallman's Political Notes]

US citizens: call on your congresscritter and senators to block the corrupter's UAE bailout.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Support Rashida Tlaib's Lebanon War Powers Resolution [Richard Stallman's Political Notes]

US citizens: call on your representative and senators to support Rashida Tlaib's Lebanon War Powers Resolution.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Impeach FBI director Patel [Richard Stallman's Political Notes]

US citizens: call on Congress to impeach FBI director Patel.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Urgent: Tax the Rich and EXPAND Social Security [Richard Stallman's Political Notes]

US citizens: call on Congress to tax the rich and EXPAND Social Security, instead of capping Social Security benefits.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

Urgent: Shut down corrupter's prescription drug scam [Richard Stallman's Political Notes]

US citizens: call on your state Attorney General to shut down the corrupter's prescription drug scam.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

16:07

15:21

Link [Scripting News]

BTW, I pointed to the Wikipedia page for XML-RPC, and noticed that it links to an archive.org copy of a very old version of the website, instead of the updated site, which has new reference code written in JavaScript. The old version of the site used Frontier, where XML-RPC was developed, but Frontier isn't in wide use these days; JavaScript is. Could someone update the Wikipedia page to point to the current XML-RPC site? I'm reluctant to do it myself because that's somewhat against the rules.
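For anyone who has never seen the protocol up close, the XML-RPC wire format is simple enough to show inline. A minimal sketch using Python's standard library (the method name "examples.echo" is invented, purely for illustration):

```python
import xmlrpc.client

# Serialize a method call into the XML-RPC wire format, then parse it back.
# "examples.echo" is a made-up method name, not from any real endpoint.
payload = xmlrpc.client.dumps(("Hello, world",), methodname="examples.echo")
print(payload)  # <?xml version='1.0'?> <methodCall>...</methodCall>

# loads() returns the parameters and the method name from the raw XML.
params, method = xmlrpc.client.loads(payload)
```

The whole protocol is roughly this: a method name and a list of typed values, POSTed as XML, which is why it was easy to reimplement in any language.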

Dirk Eddelbuettel: binb 0.0.8 on CRAN: Maintenance [Planet Debian]

The eighth release of the binb package, and the first in two years, is now on CRAN and in r2u. binb regroups four rather nice themes for writing LaTeX Beamer presentations much more easily in (R)Markdown. As a teaser, a quick demo combining all four themes is available; documentation and examples are in the package.

This release contains the usual internal updates to continuous integration and documentation URLs, plus a switch to Authors@R. The trigger for the release, though, was a small update needed when very recent pandoc versions (as shipped with RStudio) are used, which require a new variable declaration in the LaTeX template files in order to process uncaptioned tables. The summary of changes follows.

Changes in binb version 0.0.8 (2026-05-01)

  • Small updates to documentation URLs and continuous integration

  • The package now uses Authors@R in DESCRIPTION

  • Newer pandoc versions are accommodated by adding a required counter variable in the latex template file

CRANberries provides a summary of changes to the previous version. For questions or comments, please use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub. You can also sponsor my Tour de Shore 2026 ride in support of the Maywood Fine Arts Center.

15:07

[$] Version-controlled databases using Prolly trees [LWN.net]

Modern databases and filesystems make pervasive use of B-trees, which are tree structures optimized for storing sorted lists of keys and values on block devices. Dolt is an Apache 2.0-licensed project that makes clever use of a variant of a B-tree to support efficient version control for an entire database. The data structure it uses could well be of interest to other projects.
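The core trick of a Prolly tree, roughly: instead of splitting a node when it overflows (as a B-tree does), it splits wherever a hash of the content itself crosses a threshold, so the same data always chunks the same way and two versions of a database can be compared chunk by chunk. A toy sketch of that boundary rule, assuming nothing about Dolt's actual implementation:

```python
import zlib

def chunk_keys(keys, boundary_mask=0x3):
    """Split keys into chunks at content-defined boundaries.

    A key closes a chunk when the low bits of its CRC32 are zero, so on
    average 1 in (boundary_mask + 1) keys ends a chunk. Because the
    decision depends only on the key itself, an insertion disturbs only
    the chunks near it -- the property that makes version diffs cheap.
    """
    chunks, current = [], []
    for key in keys:
        current.append(key)
        if zlib.crc32(key.encode()) & boundary_mask == 0:
            chunks.append(current)
            current = []
    if current:
        chunks.append(current)  # trailing partial chunk
    return chunks

keys = [f"row{i:03d}" for i in range(20)]
before = chunk_keys(keys)
# Appending a key leaves every earlier, already-closed chunk unchanged.
after = chunk_keys(keys + ["row999"])
```

A real implementation hashes the encoded key/value pairs with a rolling hash rather than a per-key CRC, but the stability property is the same: unchanged regions of the tree keep identical chunk hashes across versions.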

14:35

Link [Scripting News]

Welcome to May, the fairest month of all in the NYC area. Almost every day in May is delicious. And today is esp fair, the Knicks moved on to the next round of the playoffs with a record-setting decisive victory over the Hawks of Atlanta. The next opponent is either the Celtics of Boston or the Sixers of Philadelphia. I was already burned out on the playoffs last week, but I'm rejuvenated. Let's go. And I apologize for all the "realistic" things I said about Brunson. He caught fire in the last three games, and showed he has the determination we need to go all the way this year. The Knicks are great because unlike the Yankees or Mets, they unify the city. And as everyone learns, NYC is so huge that the fanbase can pack arenas all over the US, as they chant the name of OG, and MVP for Brunson, and Dooooooce when McBride shoots. If they don't get to the finals it will not be for lack of talent or determination. There will be luck and Acts of God involved in the outcome.

Link [Scripting News]

On this day in 2016 I wrote a screed on Facebook saying how I wanted to turn it into a blogging platform, including the how and why. The arguments are roughly the same ones about how I want Bluesky to stop paying homage to the limits of Twitter, cozy up to the web, and let's do writing for real, undoing the damage caused by Twitter in its over 20-year life. The requests in both cases fell on deaf ears. So we are where we were in 2016: we have to replace Bluesky with the writing system of the web. And there is a silver lining to Automattic's excursion into a mini-version of WordPress that looks and behaves like Twitter. They used RSS to glue the systems together. It was convenient, and that's one of the major selling points of RSS: it is convenient. It's supported everywhere (except the offspring of Twitter). So thanks for that. I'm still glued to this cause. I don't want to retire until writing on the web gets back on track.
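That convenience is easy to demonstrate: RSS is a small enough XML shape that any stack can consume it with nothing but a standard library. A minimal sketch with an invented two-item feed:

```python
import xml.etree.ElementTree as ET

# An invented minimal RSS 2.0 feed -- the shape any blog system can emit
# and any reader can parse, which is what makes it good glue.
feed = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>Hello</title><link>http://example.com/1</link></item>
  <item><title>World</title><link>http://example.com/2</link></item>
</channel></rss>"""

root = ET.fromstring(feed)
items = [(i.findtext("title"), i.findtext("link")) for i in root.iter("item")]
# items -> [('Hello', 'http://example.com/1'), ('World', 'http://example.com/2')]
```

No SDK, no authentication, no proprietary client: a dozen lines of stdlib code on either end is the whole integration.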

Link [Scripting News]

Apparently Substack does not implement MCP, which is basically the XML-RPC of AI. According to ChatGPT they have a limited API that some independent developers have bridged to MCP. But as you would expect from a tight silo like Substack, the API lets you read but not write. They want you to use their editor, what they don't want is to be one of 20 distributors of your writing. They want an exclusive and they get it.

Link [Scripting News]

The backup of this blog for April in OPML format.

14:21

Security updates for Friday [LWN.net]

Security updates have been issued by AlmaLinux (fence-agents), Debian (chromium, dovecot, and kernel), Fedora (chromium, dotnet10.0, dotnet8.0, dotnet9.0, emacs, glow, jfrog-cli, openbao, pyp2spec, python3.6, rust-rustls-webpki, vhs, and xen), Oracle (grafana, grafana-pcp, PackageKit, sudo, vim, and xorg-x11-server), Red Hat (rhc), SUSE (avahi, bouncycastle, chromium, container-suseconnect, firewalld, gdk-pixbuf, grafana, java-25-openjdk, kernel, libixml11, libmozjs-140-0, libpng12-0, libsodium, libssh, mariadb, Mesa, ntfs-3g_ntfsprogs, openCryptoki, openexr, packagekit, prometheus-postgres_exporter, python-jwcrypto, python-mako, python-Pygments, python-pynacl, python311, python311-pyOpenSSL, python315, radare2, sed, and vim), and Ubuntu (kmod and zulucrypt).

13:49

Error'd: Parametric Projection [The Daily WTF]

Roger C. gets on second base with an unforced error. "Not only is the content too large, the error message informing us of this is also too large to fit the visible space. A layered, double WTF."


"AWS Spellcheck Fail!" alerts Peter "If only someone at AWS knew the correct paramters to activate the spellcheck."


"How long is too long for a job to be open? " wonders Lincoln K. "I didn't even know LinkedIn existed 61 years ago, let alone was accepting postings... Though only 81 applicants in that time is hardly an impressive turn-out." For a "Vice President Operations and Quality Control", no less.


An anonymous Richard reports "This came through my door. On a card that, in order to get to my door, had my full address printed on it, including my ."


Oenophile Abroad Michael R. shares "My Macbook broke after being 'exposed' to red wine. As a German in London it pleases me to see that the repair shop offers this time granularity."



12:49

A Ransomware Negotiator Was Working for a Ransomware Gang [Schneier on Security]

Someone pleaded guilty to secretly working for a ransomware gang as he negotiated ransomware payments for clients.

12:07

GNU Health featured at the Cyber|Show UK [Planet GNU]

GNU Health at the Cyber|Show!
Grab a coffee and listen to the 40-minute interview Andy Farnell and Helen Plews conducted with Luis Falcón on their wonderful show. ❤️

They covered key aspects of citizen and patient data privacy, hospital management, federated health networks, genomics, and wearables. In the interview they also talked about the risks associated with commercial, closed-source electronic health record systems and proprietary mobile applications.

The interview reveals how crucial Free/Libre software is for equity and digital sovereignty in our societies. 🩺 🏥 🧬 👇️
https://cybershow ... pisodes.php?id=64

About Cyber|Show:
https://cybers ... w.uk/about.php

Get this and latest news about GNU Health from our official Mastodon account:
https://mastodon. ... social/@gnuhealth

Tags: #GNUHealth #GNU #OpenScience #PublicHealth #Privacy #FreeSoftware #SocialMedicine #CyberShow

10:21

“The most exciting mobile trend is full Qwerty keyboards” [Seth's Blog]

The creators of the Blackberry were sure that customers loved the keyboard. That’s what they heard all day from their users, and it must have been right since they had a huge share of the mobile phone market.

When the iPhone came out, it wasn’t seen as a threat because it had no keyboard. Blackberry was in the keyboard business; the iPhone sold something else.

We make this mistake more often than we imagine, and it’s worth looking at.

RIM, the makers of the Blackberry, didn’t actually sell keyboards. They sold the network. It’s easy to see this if you realize that a single Blackberry (with no one to connect to) was worthless, but an iPhone with millions of users and no keyboard is priceless.

Within three years, RIM went from dominating the market and reaping huge profits to essentially zero market share.

Instead of defending the keyboard, they could have defended the network.

They thought they made little boxes with batteries, but they actually made a network and gave their IT customers a story.

The heart of their customer base was business people, using business funds to pay for a business device. They wanted connection, success, and security. Freedom from fear dances with affiliation and status all day.

RIM could have offered IT departments exactly what they wanted–the chance to tell their bosses that they had control. Deniability. Security. The ability to monitor traffic and retain (or delete) information.

  • Encrypted transit
  • Server-side authentication and revocation
  • Audit logs of who accessed what when
  • Compliance documentation for regulators

By defending the network, they would have made it difficult for any of these users to eagerly switch to a different network, one that their peers weren’t on.

Instead of selling devices, RIM could have sold seats. At $45 a month (bring your own device), it would have been a bargain.

The hardware process was a sunk cost, a warehouse full of liability that felt like an asset.

We get hooked on our past wins (and our fears of past losses) instead of understanding the value we’re able to provide.

A Blackberry iPhone app would have competed with their own devices in a way where RIM couldn’t lose. Feed the network first. Give people what they actually wanted (connection and status), not what they said they wanted (a faster way to type).

08:28

At The Speed of Hell [Penny Arcade]

New Comic: At The Speed of Hell

06:00

And Now, a Fairly Noisy Cover of “First of May” [Whatever]

“First of May” being of course Jonathan Coulton’s immortal celebration of spring, love, and outdoor recreation, possibly the most gentle song ever to drop multiple f-bombs. I thought, what if “First of May,” but with lots of drums and buzzy guitars? The answer to this question awaits you when you click on the video.

Fun fact: The basis for this version of the song is a previous cover version I did with an acoustic tenor guitar, eight years ago. I took that version, ran it through Logic to separate the guitar and vocal tracks, and then slathered the guitar in feedback and added an additional vocal track (along with other programming). It was not less work than just recording from scratch. It was still fun.

Note: This song is generally not safe for work, unless work lets you blast music with lots of f-bombs. In which case, crank it, baby.

Welcome to May!

— JS

05:56

Girl Genius for Friday, May 01, 2026 [Girl Genius]

The Girl Genius comic for Friday, May 01, 2026 has been posted.

04:28

04:14

Dilly-Dallying In Denver: Day 4 (The Final Day!) [Whatever]

It is the last day of April and I am finally posting the final part of my time in Denver, which was literally almost two months ago now, but that’s neither here nor there. On the fourth day, one of Alex’s other friends from college flew in for their birthday as well, and got there very early in the morning. So all three of us got into shenanigans today!

You always have to start out the day with going to a cute coffee shop, so we went to Savageau Coffee & Ice Cream.

The sign for Savageau Coffee on the outside of the building. The logo features a small, sketchy designed little monster goblin thingy.

This little coffee shop had a really cool layout, with a whole wall of different, framed mirrors. I ended up getting a white chocolate and pistachio flavored iced matcha:

A shot of my hand holding my matcha in a plastic cup. In the background you can see the wall of mirrors I was referring to, as well as LED sign that reads

With coffees in hand, we headed over to the Denver Botanic Gardens. I was extremely excited to visit the botanical gardens, as I love flowers. Things were just barely starting to bloom in the still chilly spring air. Heck, it had snowed two nights before, so I was partially expecting everything to be dead. And while a lot of plants were still dormant, there was plenty to see.

Alex had actually just been gifted a membership to the gardens, so they used two of their guest passes on us, which was really nice. I believe it’s about twenty dollars for standard adult admission, otherwise.

I took a lot of flower photos, and it was difficult to decide which ones to show y’all. I ended up picking out ones that are purple and pink, because those are my two favorite colors. So enjoy this handful of shots from our time walking around the gardens:

A small cluster of closely grouped, small purple flowers with green stems and leaves.

Three small bunches of tiny purple flowers with a background of completely dead leaves and brush.

Four small, cup shaped purple flowers with bright yellow pollen thingys inside.

A big pink hibiscus!

A bonsai tree completely covered in pink blooms.

A cluster of blooms of a pink and white speckled orchid.

The Denver Botanic Gardens had so many beautiful orchids, most of which were in glass cases or on huge display carts. They were absolutely stunning and they had a wide array of colors. Orchids are one of my favorite flowers, so these were very cool to see.

The gardens were such a nice experience. I just love walking through trails with different plant life all along the sides and learning about new flowers. The gift shop was really cool, too! There was a huge variety of items, but I only ended up getting a couple pins. All in all a successful outing.

We left just in time to head to our early dinner reservations at Ash’Kara. This was another restaurant where we wanted to partake in their Restaurant Week offerings, but we actually showed up at 4pm, and the dinner service (including the Restaurant Week stuff) didn’t start until 5. So we actually ended up sitting and enjoying Ash’Kara’s happy hour for a little bit before we got to have our actual meals. Thankfully, they weren’t busy at all and let us hang out whilst we waited for 5pm to roll around.

I really loved the interior of Ash’Kara. It’s very colorful and eclectic, has cool light fixtures, and has a lovely bar.

A shot of the bar, which is empty. There's wicker high top chairs and ornate lantern light fixtures. Bright teal and orange are the main colors of the walls of the restaurant, and the bar has alcove style glass shelving.

Here’s their happy hour menu:

A small paper menu showing some of the happy hour food offerings with their prices stated next to them. There's items like hummus, kebabs, fries, and Caesar salad.

And the beverages:

The happy hour beverage menu, with wines and some cocktails offered at about nine dollars each.

While these drinks definitely sounded good, I ended up ordering a mocktail. This was their cucumber spritz, which is just cucumber syrup, lemon, and soda water:

A tall yellow Jupiter glass filled to the brim with a lemon slice on top.

And Alex got another one of their mocktails, the passion-hibiscus spritz, with passion tea syrup, hibiscus, lemon, and soda water:

A tall, pink Jupiter glass with a lemon on top of the liquid and ice.

I loved these glasses, they remind me a lot of Jupiter glass but with a more ornate design. Both of these drinks were super light and refreshing without being too sweet, as mocktails sometimes can be. I actually ended up getting Alex’s drink for my second one because I liked it so much, but both were great choices.

We wanted to get a couple happy hour food items, but didn’t want to fill up too much before we ate our actual dinner. We ended up ordering the Castelvetrano olives:

A small metal bowl filled with green olives and covered in orange zest and oil.

Castelvetrano olives just so happens to be my favorite type, so these olives with orange zest and Calabrian chili were absolutely delish. They were bright, briny, and really packed a punch. They were easily shareable and a great start to the rest of our meal.

We also got their pickled veggie platter:

A silver platter with three distinct sections of pickled veggies: the carrots, the beets, and the pickle slices.

If you like briny, pucker-worthy pickles, this is the appetizer for you. Crunchy, fresh veggies with a ton of pickle-y bite to them. I liked the pickles the best, just because the carrots were hard for me to bite through (I have sensitive teeth).

And for our final shareable, we got the fried halloumi and panisse:

A small metal bowl holding golden brown cubes of fried halloumi and panisse.

Oh my goodness, look at that golden brown color. That is picture perfect right there. While I absolutely love fried halloumi, I wasn’t sure what panisse was, and you can really tell a difference between the cubes of panisse and the halloumi. My friends didn’t know either, so we looked it up: panisse is essentially a chickpea fritter, like polenta but made with chickpea flour and then fried.

The fried halloumi was the best I’d ever had. It was hot and crispy, and the cheese squeaked like a Wisconsin cheese curd. The panisse was soft and pillowy on the inside, and I was happy to try something I had never heard of before. This was an absolutely bomb starter and we all really enjoyed it.

Finally, it was time to view the Restaurant Week menu. Set at $45 a person, here’s what we were looking at:

The long, rectangular Restaurant Week menu detailing the different courses you can choose for your prix fixe menu. There's three courses, plus add-ons at the bottom.

This one turned out a little blurry, so let me walk you through the different options and tell you what everyone got.

For the first course, you basically pick between four dip options. There’s hummus, htipiti, labneh, and babaganoush. You can also add on pita, pickles, fries, and olives, but whatever dip you chose did come with your own naan as a vehicle for your dip.

I got the labneh, Alex got the hummus, and Alex’s friend got the babaganoush:

Three separate metal bowls, each with their respective dips in them. The labneh is white and creamy, with halved purple grapes, honey, and chives on top. The hummus is smooth with paprika and parsley on top. The babaganoush isn't entirely smooth, with paprika oil, crispy shallots, and microgreens on top.

My labneh came with roasted grapes, sumac honey, sesame seeds, and chives. The hummus had a sprinkle of paprika and chopped parsley on top. The babaganoush had a paprika oil on top with crispy shallots and some microgreens.

All three of the dips were so divine. My labneh was so creamy, and the texture worked really well with the soft grapes and tiny crunch from the sesame seeds. The hummus was excellent, and had plenty of garlicky flavor without being overpowering. The babaganoush might’ve been the star of the show, with the savory, roasty flavor of the eggplant and perfectly crunchy shallots. The naan our dips were served with was warm and soft. All three of us were eating each other’s dips because they were all so good. The labneh and babaganoush are a must-try.

We also added on an order of Za’atar fries:

A small metal bowl of fries sprinkled with za'atar.

I love za’atar and think it is an underutilized spice in many people’s cooking, so it was awesome to see za’atar fries. These were hot, fresh, crispy fries with just the right amount of herbaceous and saltiness from the za’atar.

For course two, you could choose between salad and falafel. Alex and their friend got the falafel:

Two shallow bowls, each containing four falafel balls on top of some hummus.

I got the Fattoush salad:

A large, shallow bowl containing the salad. There's a lot of colors going on here. There's green from the chicory and sage, pink from the pickled cabbage, red from the pomegranate, just a lot going on here.

This salad had chicory, pickled red cabbage, pomegranate arils, fried sage, roasted delicata squash, and naan breadcrumbs with a shrub vinaigrette. Oh my gosh, this salad was bomb. So many different textures and flavors happening here, yet nothing contrasting in a negative way. Crunchy pickled cabbage, soft roasted squash, fresh greens, and tart pomegranate, it was a beautiful dish. I really loved this salad.

For our final course, we could choose between braised lamb shoulder, lemon pepper salmon, or a roasted cabbage dish. While Alex and I got the lamb, their friend got the roasted cabbage:

A large shallow bowl holding a ton of roasted cabbage and rice, drizzled with a light orange sauce and topped with a bunch of chives.

I almost got this, and when I saw it I knew I wouldn’t have regretted my choice if I had. With tons of caramelization on the roasted cabbage and plenty of caramelized onions, it looked so flavorful atop that soft basmati rice.

Here was our lamb shoulder:

A shallow white bowl holding a mound of lamb shoulder and sweet potatoes, topped with kataifi and zhug.

There were a lot of words in the lamb shoulder description that I didn’t recognize, and I had to ask the waiter about several of them. The lamb is served with a sweet potato tershi. While I love sweet potatoes, I didn’t know what a tershi was. Turns out, it’s a dip or spread typically made from pumpkin or squash, and usually spicy or at least warmly spiced. Thankfully, this version wasn’t very spicy, just nicely spiced. It also had zhug, which is sort of like pesto, but with cilantro and parsley instead of basil, and different spices like cumin. There was also hawaij in the dish, a Yemeni spice blend I’d never heard of. Now, I did already know what kataifi is: it’s the crispy shredded phyllo you see on top.

Now that we know what everything is, this dish was incredibly delicious. Super tender lamb and soft sweet potatoes contrasting the crunchy kataifi. The bright, fresh, herbaceous zhug lightened up the rich, warm flavors of the lamb. This dish was so unique and unlike any lamb I’d had before. I highly recommend this dish if you like lamb, or if you’ve never had lamb and are curious to try it. This dish would be the perfect introduction to it.

Ash’Kara was a really awesome culinary experience. There are pretty much no Mediterranean restaurants around where I live, so experiencing this amazing cuisine was such a treat. I absolutely loved all the different flavors and unique dishes I got to try. I would a hundred percent revisit Ash’Kara if I go back to Denver.

So, that’s pretty much everything I did for my few days in Denver! Tons of amazing food, great drinks, cool museums, awesome flowers, and of course, friends.

For the rest of my time in Colorado (which was about another three days), we went out to Palmer Lake and stayed in an AirBnb with more of Alex’s friends. It was a lovely mountain lodge and we had a lot of fun, and I made this charcuterie board:

A rectangular wooden serving board, with lots of different snackies arranged all around. Meats, cheeses, olives, pickles, jams, nuts, and even smoked salmon.

This board had dill Havarti, a red wax Gouda, double creme brie, drunken goat, and whipped hot honey goat cheese. Plus prosciutto and salami, smoked salmon, jalapeno and garlic stuffed olives, pickles, cheddar crisps, and candied pecans. Aside from the dijon mustard, Alex’s mom makes jams and spreads, so we used her cherry berry, apricot mango, and blackberry spreads. I also threw together a sweet treat board:

A small wooden cutting board with blackberries, strawberries, and chocolates.

Alex requested two things: blackberries and strawberries. Trader Joe’s (where we got literally all of this from) had these special white strawberries called pineberries that were supposedly really good, so we gave them a shot. There’s also mini peanut butter cups, milk chocolate covered pretzels, and then these super yummy little mousse cakes. There’s the raspberry mousse ones with vanilla cake, and the chocolate ones. They were ridiculously good.

Anyways, aside from enjoying our time in the cabin playing games and whatnot, we also saw the Red Rocks Amphitheater (not attending a concert, just saw it regularly), and the Garden of the Gods. The Garden of the Gods was honestly such an amazing experience; the beauty of it all brought a tear to my eye. I highly recommend checking it out. Who knew rocks could be so awe-inspiring.

The last thing I have to post about is the Denver airport, and it might be for different reasons than you’d expect! So stay tuned for the actual final post about Denver.

Have you visited the botanical gardens before? Was it when everything was more… alive? What would you have ordered from Ash’Kara? Do you like lamb? Let me know in the comments, and have a great day!

-AMS

04:00

Link [Scripting News]

This Knicks fan is happy! ❤️

Link [Scripting News]

Walt Frazier interview after the game.

00:35

Thursday, 30 April

23:49

come closer and see [WIL WHEATON dot NET]

I want to take a moment and say thank you for all the messages of comfort and support that so many of y’all have shared with me since Marlowe passed. I haven’t ever felt this kind of grief, for this long, in my life. When I am feeling the most sad, when I’m sobbing until I can’t breathe, I feel closest to her, so all I can do is go through it, honor it, and embrace her memory.

There’s a dog on Instagram called Wesley the Chicken Nugget. I adore him, and I love it when his person shares photos and video of him being a dog, so I completely understand how we can love animals we’ve never met. I know that lots of you loved Marlowe, and that comforts me every day.

So thank you, from Anne and me, for choosing to be kind.

I had to take a couple weeks off from recording stories for It’s Storytime (I’ve come to believe that four or five weeks of bereavement leave isn’t unreasonable) but we’re back to work and there’s a new story this week that I wanted everyone to know about.

It’s called To Carry You Inside You, by Tia Tashiro. Here’s my intro:

I grew up in the entertainment industry, not by choice, so I had a front row seat to the abuse and exploitation of child actors like myself. I grew up absolutely terrified of upsetting anyone on the set, robotically doing whatever I was told, so I could just get through it and have one of the precious and rare hours of my childhood where I got to just be a kid, before I was ripped out of childhood and thrust back into a place I never wanted to be.

Today, we are going to visit a future where child actors are still exploited, still used up and discarded, facing an adult life without purpose, that they were never prepared for, because nobody cared what happened to them past an arbitrary age.

We will meet a young woman who is doing her best to assemble the pieces of a stolen childhood into a fulfilling adult life. It isn’t what she wanted, or would have chosen for herself, but she’s doing her best, which is all any of us can do.

This is one of those examples of speculative fiction that I point to when I talk about the power of storytelling that lands on different people for different reasons. This story isn’t about me, but holy shit is it about me. In fact, when I reached out to Tia and asked for permission to do the narration, I mentioned that she captured the experience of being a child actor so perfectly and honestly, she must have some firsthand experience … imagine my surprise when she told me that she didn’t, that she used her imagination to create those moments.

Holy shit. That’s incredible. Please let me know what you think, if you listen.

Anyway, I’m doing my best to promote the show and just let people know it exists, but I keep getting crushed by the algorithm. On Threads, the posts before and after I talked about the podcast have thousands of views and hundreds of interactions, but my post about this episode has like 20 interactions and has only been seen by about 2000 of the 5000000 accounts that follow me. That seems … odd. And honestly, it’s kind of demoralizing that one of the few direct ways I have to tell people this exists seems to work against supporting that. I’ve tried letting Bluesky know, and the 13 people who tend to notice me there are excited about it, I’m sure, but it just doesn’t seem to get traction there at all. If anyone reading this has experience bringing something to an audience who will probably love it, but just don’t know about it, I’d be grateful to hear anything you have to say about it.

Last thing, that is explicitly in service of promotion: If you listen to the podcast, you can help me out by rating and reviewing it wherever you are subscribed. The show’s audience is growing slowly but steadily, and I know it isn’t because of me; it’s because listeners are recommending it. That means so much to me. Thank you.

22:14

22:07

Requiem for a Back Deck [Whatever]

After 30 years of existence, our back deck is no more… at least for the few days it will take to build the new one. The previous deck had given good service, but over the years it had become splintery and a bit rickety (when the contractor was pulling it up, he pointed out to Krissy the places where the house’s original owner had clearly cut some corners) and it was time to swap it out with something able to withstand the next few decades. On top of that, Krissy wants the deck covered, to make it more comfortable on hotter summer days.

As noted earlier, we already needed our front porch railing redone, so why not get it all taken care of in one swoop. So here we are. It’s still mildly shocking to see the lack of a deck, and I imagine the cats, who are used to wandering around on the back deck, are going to be befuddled for a bit. Fortunately, the new deck will not take too long to put up (knock on the wood that will go into making it).

In the meantime, here’s some dirt! There used to be a deck on it! And there will be again. Soon.

— JS

20:42

Sergio Cipriano: My experience at MiniDebConf Campinas 2026 [Planet Debian]

My experience at MiniDebConf Campinas 2026

Last week, I spent the entire week in Campinas attending MiniDebConf and MiniDebCamp. The Debian Brazil community organizes this event every year, and this year's edition was the biggest so far.

During MiniDebCamp, I sponsored a few uploads and spent two days teaching packaging to two participants. I usually teach packaging online, so it was refreshing to do it in person. I believe the experience was much better than teaching online.

One of my mentees introduced me to the DDTSS (Debian Distributed Translation Server Satellite). Even though there are many i18n contributors in Brazil, this was my first time learning about this system. I plan to contribute to translations over the next few weeks using DDTSS.

My Activities

NOTE: I translated every talk title; the original titles are in PT-BR, so some details may have been lost in translation.

I presented three talks and led one BoF session. The talks are all available on Debian's Peertube:

You can also find my slides at people.d.o.

My first talk was a showcase of dh-make-vim, a tool I created and have been using for a few months. Some people tested it and found bugs, which was really nice to see.

My second talk was essentially a live version of my blog post Zero-Code Instrumentation of an Envoy TCP Proxy using eBPF.

I also gave a lightning talk about something many people are not aware of: non-uploading DDs can also sponsor uploads.

If you're interested, this bug report provides more context: tracker.debian.org: Signed by field is missing when sponsoring as DD non-uploading

Finally, I led the BoF session "Experiences, lessons learned, and next steps from the mentoring sessions". This was my favorite session: we had many participants with different perspectives and ideas, which led to a very engaging discussion. I'm still working on the action plans and plan to release them soon.

Here are some photos of these activities:

Mentorship BoF

Mentorship BoF

Non-uploading DDs can sponsor uploads talk

dh-make-vim showcase

Zero-Code Instrumentation showcase

My favorite activities

This is a list, in no particular order, of some of the sessions I enjoyed the most:

  • Salsa CI, showing features that almost nobody knows

    I wrote a blog post about one of the things I learned in this talk, and there is still a lot more to explore. Aquila Macedo is developing many cool features in Salsa CI.

  • Free Software: Freedom, Autonomy, Sovereignty

    I had been really looking forward to this one. Alexandre Oliva is a very important figure in the Free Software movement, especially in South America. I'll need to rewatch it; my future talks about Free Software will likely be inspired by this one.

  • What I've lived/seen in 33 Years of Debian & Free Software in general

    Eduardo Maçan was the first Debian Developer in Brazil, so it's always valuable to hear the story from someone who was part of it.

  • Symbolism - an introduction

    Despite the title, this talk was not about astrology! I'll probably rewatch it as well, as there is a lot of information to take in. I really like the passion Sérgio Durigan has for C. He is also a great speaker and knows how to guide the audience through the topic.

  • Debate: Contemporary controversies in Debian

    The debate itself was great, but the conversations we had afterward were even better. I changed some of my opinions after hearing different perspectives. I don't think this format would work at DebConf, but I would definitely like to attend another one like this.

  • Why LTS on Debian?

    I had a few questions about LTS, and Kanashiro and Santiago answered them both during the talk and in the Q&A session. They also shared some challenges and how to avoid them; it was a great learning experience.

  • From my first contribution to the Debian Maintainer

    Polkorny was a bit shy but did a great job! I really enjoy this kind of talk. It is always nice to see the different paths people take.

Unfortunately, I couldn't attend everything I was interested in, as always.

DayTrip - The Brazilian Particle Accelerator

Sirius is the largest and most complex scientific infrastructure ever built in Brazil and one of the most advanced synchrotron light sources in the world. My jaw dropped the entire time; it's hard to describe how incredible this is.

My favorite detail: they're running Debian :)

Wrap up

I believe this was the best MiniDebConf Brazil so far. There were many other things I chose not to include here, as this post is already quite long. Still, here are a few more highlights:

  • A Bug Squashing Party
  • Driving Samuel Henrique's drones
  • Lots of capybaras
  • A small birthday party
  • A visit to two data centers

Email is crazy [OSnews]

Email is like those creaking old Terminators from the ’70s which continue to function without complaining. Designed for a world that doesn’t exist anymore, it has optional encryption, no built-in auth, three⁺ retrofitted security layers bolted on top, an unstandardized filtering layer and many more quirks. Yet billions of emails arrive correctly every single day.

Email is not elegant but nonetheless it is Lindy. In the new age of agentic AI, we can only expect it to metamorphose into another dimension.

↫ Saurabh “Sam” Khawase

The fact that email is as complicated as it is would be bad enough on its own, but having it be so dominantly controlled by a few large gatekeepers like Google and Microsoft surely isn’t helping either. I feel like email is no longer really a technology individuals can actively partake in at every level; it feels much more like WhatsApp or iMessage or whatever, in that we just get to send messages, and that’s it. Running your own mail server isn’t only a complex endeavour, it’s also a continuous cat-and-mouse game with companies like Google and Microsoft to ensure you don’t end up on some shitlist and your emails stop arriving.

I settled on Fastmail as my email service, and it works quite well. Still, I would love to be able to just run my own email server, or have some of my far more capable friends run one for a small group of us, but it’s such a daunting and unpleasant effort few people seem to have the stomach and perseverance for it.

18:35

The Big Idea: Brenda W. Clough [Whatever]

Imagine a world where political servants actually served us, and whose decisions were backed by the will of the people, rather than their greed. If it sounds like fantasy, you may want to check out author Brenda W. Clough’s newest novel, Off the Screen. Follow along in her Big Idea, and remember to vote!

BRENDA W. CLOUGH:

I began Off the Screen more than twenty years ago. There are a couple of major drivers of the work, but the big one is the reboot of American democracy. It’s set in 2160, and at that point I felt that the United States could have refurbished its systems somewhat.

But, in those golden 2000s, I abandoned the work because I couldn’t imagine why we would need to tinker with the system of American governance. Everything was fine, the economy was good, Bill Clinton was president and running things reasonably well. I couldn’t figure out any way to get from here to there. And so I closed the ms.

Well! Hah! When I found the manuscript on a thumb drive in 2025, it was obvious why we had a crying need for a reboot! The problem was plain to see: the serious disconnect between the people and the rulers. We, down here, need stuff done, and we can’t get Congress to do it. The Founding Fathers designed the system to be a representative democracy – we elect our two senators and one congressperson, and they go to Washington and do our will. But it’s not working. We need a fix.

This is not a new idea. Many, many political commentators today are saying this. Every time Heather Cox Richardson talks about what we can do in this moment, she calls for new ideas, new thoughts. Oh honey. You are calling my name!

So for this novel I redesigned America. Congress, that useless buffer, is now drastically pruned back. They are our servants, remember. We pay them to do stuff for us, the same way you pay the guy to mow your lawn or fix your car. We do not pay them to fly in private airplanes and feather their nests with insider trading. 

In Off the Screen, the citizens vote. All of us, every American every single day, has to vote. A neat system called DiDem, Digital Democracy, is tied to your online life. What do you do when you get up in the morning? Slug down a cup of coffee perhaps, and pick up your cell phone or open your laptop? In this book, when you swipe your cell open, the first thing that comes up is your ballot for the day. You have to do this before you get to open your email, or text your daughter, or check in with the office – it’s the starter screen of every American, and so it gets done.

Every morning you vote on a simple five issues, so the process takes perhaps a couple minutes. You spend longer finding the creamer to put into your coffee, so this is endurable. Each question is a yes/no vote, a KISS feature (Keep It Simple Stupid) that keeps it down to five taps on the screen. Then you’re free for the rest of the day to download porn or work on your bitcoin, anything. But daily voting in this novel is a requisite for citizenship.

These five questions are necessarily rather crude. Shall we invest in the repair of the Pennsylvania Turnpike? Should we impose economic sanctions upon Boeing? What about invading the Seychelle Islands? Yes or no, make a decision. Once the American people decide, it’s Congress’s job to do it: find the money for the turnpike, declare war against the Seychelles. And then, if that war means we need a bigger Army and maybe a draft, it can go back to DiDem again for more decision making. Do we increase taxes for that bigger army? Do we institute a draft? Yes or no? If we demand the impossible – yes, I want the Seychelles bombed back to the Stone Age, but no I don’t want to pay for it – Congress comes back with another vote: since we won’t pay for this war, do we sue for peace?

And not all questions are important enough to submit to the entire population of the United States of America. If you live in Arizona you may not care about the Penn Turnpike. So, every American votes every day on five questions. But we don’t all see the same five questions. A color-coded system of ranking gets minor questions decided by a smaller segment of the voters. If that first set of voters decides it’s important, it goes up to be voted on by a larger number. So at the end of the day, that decision to invade the Seychelles may get approved by an actual numerical majority of Americans, but it has to pass through a number of lesser votes to get there.

What DiDem gets you is the levers of power in the hands of the people. Congress is demoted to servants, the waiters at the restaurant who take your order and then set the hamburger in front of you. This is delicious to contemplate, isn’t it?

Unfortunately DiDem also means that a lot of stupidity occurs. The international proverb, in this novel, is that Americans cannot agree on which way is up.  I think we acknowledge today that people are by and large dumb as stumps. We make idiotic electoral choices that are swayed by crashingly disastrous criteria like fame, race, gender, sexual orientation, wealth, or fingernail color. For heaven’s sake, the Brits voted for Brexit! Even a perfected democracy does not free us from humanity’s innate flaws. Bad political decisions continue to be made in the world of Off the Screen, and I drop my hero Edwin Barbarossa into their chippers.

But he mostly ignores it, because he’s busy with the other Big Idea in this book. Live theater has been slain by AI. Actors exist mainly to be scraped for voices, pretty faces, and luscious boobs. And then someone decides to create the first live original stage musical in a generation. Eddie’s going to write the lyrics and score. 

Which means that I had to write the book and lyrics, because they’re in ongoing development through the entire novel. To acquire the rights to quote Sondheim or Oscar Hammerstein would be impossible. Believe it or not, sometimes it’s just easier to write a musical yourself.

And, because the canons of theater demand it, everything comes to a head on opening night: the show, Eddie’s fate, DiDem’s survival. This is the biggest book I have ever written, and if it had appeared in 2000 it would have been magnificently prophetic. But just as well it didn’t. We need it today.


Off the Screen: Book View Cafe

Author socials: Bluesky|Facebook

16:00

Developing a cross-process reader/writer lock with limited readers, part 3: Fairness [The Old New Thing]

We’ve been building a cross-process reader/writer lock with a cap on the number of readers, and we concluded our investigation last time by noting that the throughput of exclusive accesses was poor. What’s going on?

The problem is that an exclusive acquisition claims semaphore tokens one at a time, so it can lose out to shared acquisitions that are requested even after the exclusive acquisition has started, effectively letting shared acquisitions “jump ahead of the exclusive acquisition” and thereby starving out exclusive acquisitions.

Token  Exclusive          Shared acquirers
count  acquirer           A     B     C             D
  5
  4                       Acq
  3                             Acq
  2    1st Acq
  1    2nd Acq
  0    3rd Acq
  0    4th Acq (blocks)
  0                                   Acq (blocks)
  0                                                 Acq (blocks)
  1                       Rel
  0    4th Acq
  0    5th Acq (blocks)
  1                             Rel
  0                                   Acq
  1                                   Rel
  0                                                 Acq

Let’s say that we have capped the number of shared acquisitions to five. In the above scenario, we have an exclusive acquiring thread and four shared acquiring threads. The first two shared acquiring threads (call them A and B) succeed at their shared acquisitions, and then the exclusive acquiring thread tries to enter exclusively. The exclusive acquiring thread needs five tokens, and it quickly gets three of them before blocking when it tries to get the fourth.

While the exclusive acquiring thread is waiting to get its fourth token, two other shared acquiring threads (call them C and D) try to enter in shared mode. They too block.

One of the original shared acquiring threads releases its shared lock, which releases a token, and that token is quickly snapped up by the exclusive acquiring thread, thanks to the “mostly FIFO” policy for synchronization objects. (Let’s assume for the purpose of this discussion that none of the things that violate FIFO-ness has occurred.) The exclusive acquiring thread now waits to claim its fifth token.

When the second of the original shared acquiring threads releases its token, it is given to thread C, even though thread C started its shared acquisition after the exclusive acquiring thread tried to acquire exclusively.

And then when thread C releases its token, that token is given to thread D, since its request for the token precedes the exclusive thread’s request for the fifth token. The exclusive acquiring thread has once again gotten boxed out.

To fix this, we can make all acquisitions claim the shared mutex. The shared mutex then does the work of enforcing “mostly FIFO” acquisition behavior across all acquisitions.

Since we’re going to be doing combined timeouts, I’ll refactor the timeout management into a helper class.

struct TimeoutTracker
{
    explicit TimeoutTracker(DWORD timeout)
        : m_timeout(timeout) {}

    DWORD m_timeout;
    DWORD m_start = GetTickCount();

    bool Wait(HANDLE h)
    {
        DWORD elapsed = GetTickCount() - m_start;
        if (elapsed > m_timeout) return false;
        return WaitForSingleObject(h, m_timeout - elapsed)
                    == WAIT_OBJECT_0;
    }
};
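As an aside, the same shared-budget idea can be sketched in portable C++ with std::chrono::steady_clock, which, unlike the 32-bit GetTickCount(), cannot wrap around. This is an illustration, not part of the article's code; DeadlineTracker is an invented name.

```cpp
#include <chrono>

// Portable analogue of the tracker above: record the deadline once, and
// have every wait in a combined operation draw from the same shrinking
// budget.
class DeadlineTracker
{
public:
    explicit DeadlineTracker(std::chrono::milliseconds timeout)
        : m_deadline(std::chrono::steady_clock::now() + timeout) {}

    // How much of the budget is left; zero once the deadline has passed.
    std::chrono::milliseconds Remaining() const
    {
        auto left = std::chrono::duration_cast<std::chrono::milliseconds>(
            m_deadline - std::chrono::steady_clock::now());
        return left.count() > 0 ? left : std::chrono::milliseconds(0);
    }

    bool Expired() const { return Remaining().count() == 0; }

private:
    std::chrono::steady_clock::time_point m_deadline;
};
```

Each wait in a combined operation would then pass Remaining() to a timed wait such as std::condition_variable::wait_for, exactly as the Win32 version passes m_timeout - elapsed to WaitForSingleObject.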

We can use this helper class to manage our timeouts.

HANDLE sharedSemaphore;
HANDLE sharedMutex;

void AcquireShared()
{
    WaitForSingleObject(sharedMutex, INFINITE);

    WaitForSingleObject(sharedSemaphore, INFINITE);

    ReleaseMutex(sharedMutex);
}

bool AcquireSharedWithTimeout(DWORD timeout)
{
    TimeoutTracker tracker(timeout);
    bool result = tracker.Wait(sharedMutex);
    if (!result) return false;
    result = tracker.Wait(sharedSemaphore);
    ReleaseMutex(sharedMutex);
    return result;
}

// no change to AcquireExclusive
void AcquireExclusive()
{
    WaitForSingleObject(sharedMutex, INFINITE);

    for (unsigned i = 0; i < MAX_SHARED; i++) {
        WaitForSingleObject(sharedSemaphore, INFINITE);
    }

    ReleaseMutex(sharedMutex);
}

// no functional change, but using the new helper class
bool AcquireExclusiveWithTimeout(DWORD timeout)
{
    TimeoutTracker tracker(timeout);        
    bool result = tracker.Wait(sharedMutex);
    if (!result) return false;              

    for (unsigned i = 0; i < MAX_SHARED; i++) {
        if (!tracker.Wait(sharedSemaphore)) {
            // Restore the tokens we already claimed.
            if (i > 0) {
                ReleaseSemaphore(sharedSemaphore, i, nullptr);
            }
            ReleaseMutex(sharedMutex);
            return false;
        }
    }
    ReleaseMutex(sharedMutex);
    return true;
}

(Yes, I’m not using RAII. I’ve made that choice for expository purposes, since it lets you see exactly when each synchronization object is acquired and released.)
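As that parenthetical suggests, production code would normally wrap each release in RAII so that early returns can't leak a mutex or semaphore token. A hypothetical guard (ScopeRelease is an invented name, not part of the article's code) could look like this:

```cpp
#include <utility>

// Hypothetical RAII helper: runs an arbitrary release action when it goes
// out of scope. Dismiss() keeps the resource acquired, for paths where the
// caller wants to hold on to it (e.g. claimed semaphore tokens on success).
template <typename Release>
class ScopeRelease
{
public:
    explicit ScopeRelease(Release release) : m_release(std::move(release)) {}
    ~ScopeRelease() { if (m_armed) m_release(); }

    ScopeRelease(const ScopeRelease&) = delete;
    ScopeRelease& operator=(const ScopeRelease&) = delete;

    void Dismiss() { m_armed = false; }

private:
    Release m_release;
    bool m_armed = true;
};
```

With such a guard, the timeout functions could declare something like ScopeRelease releaseMutex([&] { ReleaseMutex(sharedMutex); }); right after acquiring the mutex and drop the explicit ReleaseMutex calls on every exit path.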

Are we done?

No, we’re not done.

There is still a serious problem that needs to be fixed. We’ll look at it next time.

The post Developing a cross-process reader/writer lock with limited readers, part 3: Fairness appeared first on The Old New Thing.

Developing a cross-process reader/writer lock with limited readers, part 2: Taking turns when being grabby [The Old New Thing]

Last time, we built a cross-process reader/writer lock with a cap on the number of readers, but I noted that there was still a problem.

The problem occurs when two threads both try to acquire the lock exclusively. In that case, both threads try to claim all the tokens, and they can get into a stalemate where one thread has half of the tokens, the other thread has the other half, and neither side will back down.

We can avoid this by serializing all the attempts to acquire exclusive locks. That way, there is at most one greedy thread at a time.

HANDLE sharedSemaphore;
HANDLE sharedMutex;

void AcquireExclusive()
{
    WaitForSingleObject(sharedMutex, INFINITE);

    for (unsigned i = 0; i < MAX_SHARED; i++) {
        WaitForSingleObject(sharedSemaphore, INFINITE);
    }

    ReleaseMutex(sharedMutex);
}

bool AcquireExclusiveWithTimeout(DWORD timeout)
{
    DWORD start = GetTickCount();
    WaitForSingleObject(sharedMutex, INFINITE);

    for (unsigned i = 0; i < MAX_SHARED; i++) {
        DWORD elapsed = GetTickCount() - start;
        if (elapsed > timeout ||
            WaitForSingleObject(sharedSemaphore, timeout - elapsed) == WAIT_TIMEOUT) {
            // Restore the tokens we already claimed.
            if (i > 0) {
                ReleaseSemaphore(sharedSemaphore, i, nullptr);
            }
            ReleaseMutex(sharedMutex);
            return false;
        }
    }
    ReleaseMutex(sharedMutex);
    return true;
}
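For readers who want to experiment outside Win32, here is a minimal single-process sketch of the same scheme, assuming std::mutex and std::condition_variable in place of the Win32 mutex and semaphore (LimitedRWLock, TokensAvailable, and the member names are illustrative, not from the article). The outer mutex plays the role of sharedMutex, so at most one greedy thread drains tokens at a time:

```cpp
#include <condition_variable>
#include <mutex>

class LimitedRWLock
{
public:
    explicit LimitedRWLock(unsigned maxShared)
        : m_tokens(maxShared), m_max(maxShared) {}

    void AcquireShared()
    {
        std::unique_lock<std::mutex> lk(m_state);
        m_cv.wait(lk, [&] { return m_tokens > 0; });
        --m_tokens; // claim one token
    }

    void ReleaseShared()
    {
        {
            std::lock_guard<std::mutex> lk(m_state);
            ++m_tokens;
        }
        m_cv.notify_all();
    }

    void AcquireExclusive()
    {
        // Serialize the "greedy" threads: only one exclusive acquirer at a
        // time may drain tokens, so two of them can never split the pool.
        // The gate is released when this function returns, mirroring the
        // ReleaseMutex at the end of AcquireExclusive above.
        std::lock_guard<std::mutex> gate(m_exclusiveGate);
        std::unique_lock<std::mutex> lk(m_state);
        for (unsigned i = 0; i < m_max; ++i) {
            m_cv.wait(lk, [&] { return m_tokens > 0; });
            --m_tokens;
        }
    }

    void ReleaseExclusive()
    {
        {
            std::lock_guard<std::mutex> lk(m_state);
            m_tokens = m_max; // return all tokens at once
        }
        m_cv.notify_all();
    }

    unsigned TokensAvailable()
    {
        std::lock_guard<std::mutex> lk(m_state);
        return m_tokens;
    }

private:
    std::mutex m_exclusiveGate; // plays the role of sharedMutex
    std::mutex m_state;
    std::condition_variable m_cv;
    unsigned m_tokens;
    const unsigned m_max;
};
```

Note that shared acquirers bypass the gate entirely, just as in the Win32 version above, which is exactly why the fairness problem described next can occur.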

Okay, this avoids the stalemate between two exclusive acquisitions, but we still have a problem: Exclusive access throughput is poor. We’ll look at this next time.

The post Developing a cross-process reader/writer lock with limited readers, part 2: Taking turns when being grabby appeared first on The Old New Thing.

15:49

Anti-DDoS Firm Heaped Attacks on Brazilian ISPs [Krebs on Security]

A Brazilian tech firm that specializes in protecting networks from distributed denial-of-service (DDoS) attacks has been enabling a botnet responsible for an extended campaign of massive DDoS attacks against other network operators in Brazil, KrebsOnSecurity has learned. The firm’s chief executive says the malicious activity resulted from a security breach and was likely the work of a competitor trying to tarnish his company’s public image.

An Archer AX21 router from TP-Link. Image: tp-link.com.

For the past several years, security experts have tracked a series of massive DDoS attacks originating from Brazil and solely targeting Brazilian ISPs. Until recently, it was far from clear who or what was behind these digital sieges. That changed earlier this month when a trusted source who asked to remain anonymous shared a curious file archive that was exposed in an open directory online.

The exposed archive contained several Portuguese-language malicious programs written in Python. It also included the private SSH authentication keys belonging to the CEO of Huge Networks, a Brazilian ISP that primarily offers DDoS protection to other Brazilian network operators.

Founded in Miami, Fla. in 2014, Huge Networks centers its operations in Brazil. The company originated from protecting game servers against DDoS attacks and evolved into an ISP-focused DDoS mitigation provider. It does not appear in any public abuse complaints and is not associated with any known DDoS-for-hire services.

Nevertheless, the exposed archive shows that a Brazil-based threat actor maintained root access to Huge Networks infrastructure and built a powerful DDoS botnet by routinely mass-scanning the Internet for insecure Internet routers and unmanaged domain name system (DNS) servers on the Web that could be enlisted in attacks.

DNS is what allows Internet users to reach websites by typing familiar domain names instead of the associated IP addresses. Ideally, DNS servers only provide answers to machines within a trusted domain. But so-called “DNS reflection” attacks rely on DNS servers that are (mis)configured to accept queries from anywhere on the Web. Attackers can send spoofed DNS queries to these servers so that the request appears to come from the target’s network. That way, when the DNS servers respond, they reply to the spoofed (targeted) address.

By taking advantage of an extension to the DNS protocol that enables large DNS messages, botmasters can dramatically boost the size and impact of a reflection attack — crafting DNS queries so that the responses are much bigger than the requests. For example, an attacker could compose a DNS request of less than 100 bytes, prompting a response that is 60-70 times as large. This amplification effect is especially pronounced when the perpetrators can query many DNS servers with these spoofed requests from tens of thousands of compromised devices simultaneously.
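The arithmetic behind that multiplier is simple; as an illustration only, using the hypothetical sizes from the paragraph above rather than measured values:

```cpp
// The bandwidth amplification factor of a reflection attack is the size of
// the response divided by the size of the spoofed request that elicits it.
double AmplificationFactor(double requestBytes, double responseBytes)
{
    return responseBytes / requestBytes;
}
```

A 100-byte query that draws a 6,500-byte answer yields a factor of 65, which is why even modest botnet bandwidth translates into enormous response traffic arriving at the spoofed target address.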

A DNS amplification and reflection attack, illustrated. Image: veracara.digicert.com.

The exposed file archive includes a command-line history showing exactly how this attacker built and maintained a powerful botnet by scouring the Internet for TP-Link Archer AX21 routers. Specifically, the botnet seeks out TP-Link devices that remain vulnerable to CVE-2023-1389, an unauthenticated command injection vulnerability that was patched back in April 2023.

Malicious domains in the exposed Python attack scripts included DNS lookups for hikylover[.]st, and c.loyaltyservices[.]lol, both domains that have been flagged in the past year as control servers for an Internet of Things (IoT) botnet powered by a Mirai malware variant.

The leaked archive shows the botmaster coordinated their scanning from a Digital Ocean server that has been flagged for abusive activity hundreds of times in the past year. The Python scripts invoke multiple Internet addresses assigned to Huge Networks that were used to identify targets and execute DDoS campaigns. The attacks were strictly limited to Brazilian IP address ranges, and the scripts show that each selected IP address prefix was attacked for 10-60 seconds with four parallel processes per host before the botnet moved on to the next target.

The archive also shows these malicious Python scripts relied on private SSH keys belonging to Huge Networks’s CEO, Erick Nascimento. Reached for comment about the files, Mr. Nascimento said he did not write the attack programs and that he didn’t realize the extent of the DDoS campaigns until contacted by KrebsOnSecurity.

“We received and notified many Tier 1 upstreams regarding very very large DDoS attacks against small ISPs,” Nascimento said. “We didn’t dig deep enough at the time, and what you sent makes that clear.”

Nascimento said the unauthorized activity is likely related to a digital intrusion first detected in January 2026 that compromised two of the company’s development servers, as well as his personal SSH keys. But he said there’s no evidence those keys were used after January.

“We notified the team in writing the same day, wiped the boxes, and rotated keys,” Nascimento said, sharing a screenshot of a January 11 notification from Digital Ocean. “All documented internally.”

Mr. Nascimento said Huge Networks has since engaged a third-party network forensics firm to investigate further.

“Our working assessment so far is that this all started with a single internal compromise — one pivot point that gave the attacker downstream access to some resources, including a legacy personal droplet of mine,” he wrote.

“The compromise happened through a bastion/jump server that several people had access to,” Nascimento continued. “Digital Ocean flagged the droplet on January 11 — compromised due to a leaked SSH key, in their wording — I was traveling at the time and addressed it on return. That droplet was deprecated and destroyed, and it was never part of Huge Networks infrastructure.”

The malicious software that powers the botnet of TP-Link devices used in the DDoS attacks on Brazilian ISPs is based on Mirai, a malware strain that made its public debut in September 2016 by launching a then record-smashing DDoS attack that kept this website offline for four days. In January 2017, KrebsOnSecurity identified the Mirai authors as the co-owners of a DDoS mitigation firm that was using the botnet to attack gaming servers and scare up new clients.

In May 2025, KrebsOnSecurity was hit by another Mirai-based DDoS that Google called the largest attack it had ever mitigated. That report implicated a 20-something Brazilian man who was running a DDoS mitigation company as well as several DDoS-for-hire services that have since been seized by the FBI.

Nascimento flatly denied being involved in DDoS attacks against Brazilian operators to generate business for his company’s services.

“We don’t run DDoS attacks against Brazilian operators to sell protection,” Nascimento wrote in response to questions. “Our sales model is mostly inbound and through channel integrator, distributors, partners — not active prospecting based on market incidents. The targets in the scripts you received are small regional providers, the vast majority of which are neither in our customer base nor in our commercial pipeline — a fact verifiable through public sources like QRator.”

Nascimento maintains he has “strong evidence stored on the blockchain” that this was all done by a competitor. As for who that competitor might be, the CEO wouldn’t say.

“I would love to share this with you, but it could not be published as it would lose the surprise factor against my dishonest competitor,” he explained. “Coincidentally or not, your contact happened a week before an important event – one that this competitor has NEVER participated in (and it’s a traditional event in the sector). And this year, they will be participating. Strange, isn’t it?”

Strange indeed.

Pluralistic: How not to ban surveillance pricing (30 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]

Today's links



A busy 1950s grocery store. The scene has been altered: the massive, menacing, glaring red eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey' hovers over the store, shooting red beams into the cash register. The store -- but not the shoppers at its front -- is suffused with red light.

How not to ban surveillance pricing (permalink)

If you want to piss me off, it's easy: just breezily assert that our tech regulation problems are the result of the fast pace of technological change racing ahead of the plodding speed of governmental action:

https://pluralistic.net/2026/04/22/uber-for-nurses/#go-meta

While there have been some instances in which this was true, it is far more often the case that there are blindingly obvious answers to tech problems, which our lawmakers and regulators ignore, amidst a rising chorus of warnings about the dire consequences of failing to act.

Take the new Maryland bill that (supposedly) outlaws surveillance pricing: this bill is, frankly, a terribly drafted piece of shit. Worse: it's a terribly drafted piece of shit bill that fails to resolve a serious and urgent problem. Even worse: the lawmakers who drafted this piece of shit bill and Maryland Governor Wes Moore were all loudly and repeatedly warned about the problems of this bill, and they did nothing and now the people of Maryland are fucked.

So what is surveillance pricing, why is it so dangerous, and what's wrong with Maryland's Protection Against Predatory Pricing Act?

Surveillance pricing is when a company spies on you ("surveillance") and uses the resulting dossier to raise its prices to the maximum it calculates you will be willing to pay ("pricing"). With surveillance pricing, a retailer reaches into your bank account and devalues your dollars. If you pay $2 for an apple at the grocery store and the same store only charges me $1 for that apple, then that grocer is telling you that your dollars are worth half as much as mine:

https://pluralistic.net/2025/06/24/price-discrimination/

There's a kind of economics brainworm that makes some economists looooove surveillance pricing. They will insist that this is an "efficient" way to price goods, and claim that surveillance pricing isn't just a way to raise prices on people who are willing to pay more, it's a way to lower prices for people who are willing to pay less.

What you're supposed to infer from this is that people who can afford more will end up paying more, while people who can afford less will pay less. It's pitched as the Robin Hood of pricing policies, gouging the rich to finance discounts for the poor. But in practice, that's not at all how surveillance pricing works. Instead, surveillance pricing is most often used to levy a "desperation premium" on people who have fewer choices and less leverage.

For example, there's a McDonald's investments portfolio company called Plexure that supplies surveillance pricing tools to fast food restaurants. Plexure advertises its ability to use surveillance data to find out when a customer has just gotten a paycheck so that vendors can increase the price of their usual breakfast sandwich order. This isn't aimed at wealthy people – it's explicitly designed to target people who are living paycheck to paycheck.

Surveillance pricing is also used to determine how much you get paid; when that happens, we call it "algorithmic wage discrimination." Gig platforms like Uber use surveillance data about their drivers to predict which workers are most desperate, and those drivers are offered less money per mile and per hour, because a desperate worker will take whatever is on offer. Gig work apps for health-care do the same thing to nurses:

https://pluralistic.net/2024/12/18/loose-flapping-ends/#luigi-has-a-point

Indeed, surveillance pricing represents a kind of cod-Marxism. Instead of "from each according to their ability, to each according to their need," the "efficient" surveillance pricing motto is, "from each according to their desperation, to each according to our power":

https://pluralistic.net/2025/01/11/socialism-for-the-wealthy/#rugged-individualism-for-the-poor

Surveillance pricing is anything but efficient. Because surveillance pricing is a transfer from consumers to investors, it has the net effect of reducing consumption overall. If your grocer can screw you out of an extra $50/month on your household food bill, that's $50/month you can't spend on a babysitter, a movie, or a couple of nice books for your kid. The American economy runs on consumption, and the American consumer has less discretionary income than they've had in generations. Anything that reduces consumption is a drag on the whole economy.

Surveillance pricing is rampant and getting worse all the time. During the Biden administration the FTC held hearings on the practice and developed a detailed, eye-watering record of all the ways that surveillance, combined with digital platforms that can alter prices for every visit by every customer, has resulted in a massive transfer from working people to wealthy investors:

https://pluralistic.net/2024/07/24/gouging-the-all-seeing-eye/#i-spy

Unfortunately – and predictably – Trump's new FTC chairman, Andrew Ferguson, killed off that action, replacing it with an initiative that encouraged FTC officials to anonymously rat each other out for being too "woke":

https://pluralistic.net/2025/04/21/trumpflation/#andrew-ferguson

He did this even as a whole bunch of surveillance pricing companies were blitzing their clients with messages about the surveillance pricing possibilities created by Trump's tariffs, which would condition buyers to expect higher prices, creating opportunities to smuggle in surveillance-priced premiums:

https://pros.com/learn/webinars/navigating-tariff-increases-future-proof-pricing-strategy

It's only gotten worse since. Back in January, Google CEO Sundar Pichai announced that the company had a new plan to make AI profitable: they would supply surveillance prices for sellers who used Google's advertising services. After all, Google spies on more people, more comprehensively, than anyone except Meta and the NSA, and Google has an advanced ad-targeting network and a giant AI arm. Put these three facts together and Google can offer merchants the ability to target you for ads and prices that are calculated, to the penny, to be the most you would be willing to pay:

https://pluralistic.net/2026/01/21/cod-marxism/#wannamaker-slain

All this – rampant, desperation-based price-gouging; federal inaction; a risk to the whole economy – is the backdrop for Maryland's new anti-surveillance pricing bill, which Governor Wes Moore has been trumpeting as the nation's first state bill banning surveillance pricing. This would be very cool – if it were real. But – as the American Economic Liberties Project's Pat Garofalo writes for the Economic Populist – the Protection Against Predatory Pricing Act is so badly drafted that it will have essentially no impact on surveillance pricing. It's positively riddled with loopholes:

https://economicpopulist.substack.com/p/gov-wes-moore-claims-maryland-banned

The first problem with this bill is its scope: it only regulates surveillance pricing for groceries. It has nothing to say about the use of surveillance data to reprice car rentals, apartments, healthcare, taxi rides, quick-service food, or the thousand other areas where surveillance pricing is already rampant. Worse: it is silent on algorithmic wage discrimination: the use of surveillance data to reprice your wages, penalizing workers for being poor by making them even poorer.

Now, helping people with their grocery bills isn't nothing. However, even within that very narrow scope, this bill is a disaster. As Garofalo points out, the bill's first glaring loophole is that it permits surveillance pricing if a purchaser "consents." This is quite a loophole! After all, we live in an era in which "consent" consists of clicking "I agree" when presented with a gigantic list of terms and conditions, which you cannot negotiate, which are subject to change without notice, and which are so long that it would take 26 hours to review all the "agreements" you "consent" to in any given 24-hour day.

So if the company that you use to book your pet's veterinary check-ups is owned by the same company that provides your grocer with its surveillance pricing tools, you might "consent" to having that company jack you on every bag of groceries just by clicking "I agree" when your cat needs a vet appointment.

The bill also exempts "promotional offers" and "temporary discounts," suggesting that it was drafted by someone who has never encountered a merchant whose retail premises are always plastered with signs trumpeting the fact that every price in the shop is both "temporary" (ACT NOW!) and "promotional" (SALE! SALE! SALE!). Since the bill doesn't define either of these words, it effectively grants every grocer in the state an easy way to evade the law entirely.

Finally, the bill exempts two exceptionally scammy tactics that are already the major vehicle for surveillance price-based gouging: loyalty cards and subscription-based pricing.

Loyalty cards are often a total scam:

https://consumerlaw.berkeley.edu/news/price-loyalty-how-rewards-programs-trap-consumers-and-how-states-can-take-action-protect-them

And subscriptions are a scammer's best friend:

https://redrocks.org/financial-education/hidden-charges-and-fake-subscriptions-the-quiet-scam-costing-consumers-millions

But even if you are ripped off by a grocer who can't be bothered to call the scam a "sale" or a "temporary offer," who can't be bothered to dress it up as a "loyalty perk" or a "subscription price," you still can't get justice. That's because the Protection Against Predatory Pricing Act excludes the "private right of action," which means that you can't sue a grocer who rips you off. All this bill lets you do is petition the state Attorney General's office to sue the grocer on your behalf, and if the AG doesn't think you deserve justice, you're shit out of luck. And the Protection Against Predatory Pricing Act pre-empts other rights in Maryland's existing Consumer Protection Act, meaning that it actually gives Marylanders fewer rights than they had a month ago, before it was signed into law.

Legislation this bad doesn't happen by accident. The omissions and defects in this law aren't there because "technology moves so fast that lawmakers can't make sense of it." This is the result of lobbyists and sellout politicians conspiring to rip off the public, and of a governor who decided to ignore the warnings about the bill in order to get a chance to grandstand on Bill Maher while doing nothing to help Marylanders:

https://x.com/BlueGeorgia/status/2047868126365106631

From nurses' wages to your payday breakfast sandwich, surveillance pricing is everywhere, especially in groceries. Every time you use Instacart to shop at Albertsons, Costco, Kroger, and Sprouts Farmers Market, you might be getting ripped off for as much as 23% of the total price:

https://pluralistic.net/2025/12/11/nothing-personal/#instacartography

This isn't some silly-season fake controversy. It's an existential crisis for America's cash-strapped, heavily indebted households, whose lives have been made immeasurably worse by the inflation from Trump's Strait of Epstein disaster. Maryland had the chance to do something to help these people and instead they squandered it, selling out to lobbyists for companies whose bottom line depends on draining the bank accounts of the most desperate people in the state.

(Image: Cryteria, CC BY 3.0, modified)


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago Google's now running on 8,000 Linux servers https://web.archive.org/web/20010501043429/http://www.internetweek.com/story/INW20010427S0010

#25yrsago Karl Schroeder’s Ventus in the NYT https://archive.nytimes.com/www.nytimes.com/books/01/04/29/reviews/010429.29scifit.html

#20yrsago Sony screwing artists out of iTunes royalties, customers out of first sale https://www.nytimes.com/2006/04/30/technology/cheap-trick-allman-brothers-sue-sony-over-download-royalties.html

#20yrsago Robot Lego CD thrower can shatter discs https://www.techeblog.com/hammerhead-the-lego-cd-thrower/

#15yrsago Understanding alternative voting, with coffee and beer https://www.youtube.com/watch?v=TtW3QkX8Xa0

#15yrsago Battleshoe https://philnoto.tumblr.com/post/4613522934/quite-busy-with-work-today-so-heres-a-little

#15yrsago Filling Paris’s potholes with knitwork https://www.flickr.com/photos/39380641@N03/albums/72157622189211405/

#15yrsago Pinhole cameras made out of hollow eggs https://www.lomography.com/magazine/71984-the-pinhegg-my-journey-to-build-an-egg-pinhole-camera

#15yrsago Canadian pro-Net Neutrality/anti-censorship/anti-surveillance party gaining support https://web.archive.org/web/20110429020845/http://www.ekospolitics.com/index.php/2011/04/ndp’s-new-status-as-second-runner-holding-april-26-2011/

#15yrsago We Say Gay: Tennessee kids fight bill that would prohibit discussing homosexuality in school https://web.archive.org/web/20110501072834/https://wesaygay.com/

#15yrsago HOWTO build an impossible Escher perpetual motion waterfall https://www.instructables.com/Perpetual-Motion-Machine-The-real-life-version-of/

#15yrsago RIP Keith Aoki, copyfighting law prof, comics illustrator, musician and writer https://www.thepublicdomain.org/2011/04/27/rip-keith-aoki/

#5yrsago Unpack the court with judicial overrides https://pluralistic.net/2021/04/27/bruno-argento/#crisis-of-legitimacy

#5yrsago Pharma's anti-generic-vaccine lobbying blitz https://pluralistic.net/2021/04/27/bruno-argento/#pharma-death-cult

#5yrsago Klobuchar on trustbusting https://pluralistic.net/2021/04/27/bruno-argento/#klobuchar

#5yrsago Robot Artists & Black Swans https://pluralistic.net/2021/04/27/bruno-argento/#fantascienza

#1yrago The enshittification of tech jobs https://pluralistic.net/2025/04/27/some-animals/#are-more-equal-than-others

#5yrsago Dems want to give $600b to the one percent https://pluralistic.net/2021/04/28/inequality-r-us/#neotrumpism


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus and Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

15:07

[$] Restartable sequences, TCMalloc, and Hyrum's Law [LWN.net]

Hyrum's Law states that any observable behavior of a system will eventually be depended upon by somebody. The kernel community is currently contending with a clear demonstration of that principle. The recent work to address some restartable-sequences performance problems in the 6.19 release maintained the documented API in all respects, but that was not enough; Google's TCMalloc library, as it turns out, violates the documented API, prevents other code from using restartable features, and breaks with 6.19. But the kernel's no-regressions rule is forcing developers to find a way to accommodate TCMalloc's behavior.

GCC 16.1 released [LWN.net]

Version 16.1 of the GNU Compiler Collection (GCC) has been released.

The C++ frontend now defaults to the GNU C++20 dialect and the corresponding parts of the standard library are no longer experimental. Several C++26 features receive experimental support, including Reflection (-freflection), Contracts, expansion statements and std::simd.

Other changes include the introduction of an experimental compiler frontend for the Algol68 language, ability to output GCC diagnostics in HTML form, and more.

Seven new stable kernels for Thursday [LWN.net]

Greg Kroah-Hartman has released the 7.0.3, 6.18.26, 6.12.85, 6.6.137, 6.1.170, 5.15.204, and 5.10.254 stable kernels. The 7.0.3 and 6.18.26 kernels only contain fixes needed for Xen users; the others, though, have backported fixes for the recently disclosed AEAD socket vulnerability. Kroah-Hartman advises that all users of the other kernel series must upgrade.

15:00

The WordPress OS [Scripting News]

The product described on wordpress.com/social is not a real product, I am told by someone inside who I have worked with and trust. They say there will be a lot of these trial products coming out in the coming weeks because this is a project that Matt has given to all developers in the company? Not sure the shape of it, I had heard about this project third-hand but not seen any specifics. I sent my friend an email, which I am not editing, this is exactly as it read, the only change is that I redacted their name. Don't want to get anyone in trouble. ;-)

  • okay i understand it wasn't a real announcement or product, i couldn't tell -- it looked like one.
  • suggest you make a bit less of a big deal about the others that are coming out. or put a disclaimer on it.
  • the marketing was first class, and most of the things it advertised for were very much part of the pitch for wordland.
  • my goal isn't to sell wordland it's to take advantage of a window of opportunity for the web to replace the silos that that have taken over web writing.
  • i also want to make it possible for developers to make great products and not have to do a full startup to market them.
  • as an aside we will open up big new revenue opportunities for wordpress.
  • i've been hoping to work with you guys, so we could show people what that looks like. to see all my ideas presented so nicely on a one month project makes me want to get out of here. i have no chance in a system like that.
  • work with me, you're working on rss stuff now. we can make some new things happen with not much effort and they won't be insignificant little projects. if i could show a tiny little bit if success here i think people would respond, but you all don't want to work with me. at some point i have to give up. i don't want to, but i've rarely managed to get companies to work on projects with me as an independent. i thought this maybe could be the exception, it was worth a try.

I've been in the situation I describe many, many times. Apple never would have done AppleScript if I hadn't done Frontier. I gave Microsoft several ideas in the early days of the web, a blogger-based news network, using the incredible flow they had from MSN, and then we had a mission for Zune in 2004, the year that podcasting took off. I was living in Seattle at the time so it would have been convenient and given me an excuse to stay there. They did both products, in the corporate way, thus removing all that was interesting about the ideas. When RSS was the darling of the VC world, many of the VCs talked with me, got my ideas for free, and then invested in people who were more corporate and easier to manage; they also had no idea what they were doing and they all failed. Some things are best left to the entrepreneurs. And you won't find many of them inside a big corporation; they're out in the wild, trying out new ideas and seeing what the world thinks.

But I am impressed with the UI design and esp the marketing materials for this product. Imagine what would happen if we were to work together on this project, with full license from the company to go wherever the product took us. I have a ton of working software for just this problem! I didn't do it in a couple of weeks, I did it in a few years. That's how long it takes to do a real product. I almost wrote an email to Matt. Maybe he'll read this post; if so, maybe in the next challenge you should give people a month to do a project with someone who works outside your company, or even better, one of Automattic's competitors. Work on interop. Make the web stronger. If you do that it can't help but strengthen WordPress because it's such a big part of the web. And provide recurring revenue opportunities.

BTW, I'm too old to start a company around any of my products, and truthfully it was never anything I wanted to do. When I wake up in the morning I want to write a blog post to warm up my brain and then spend a few hours working with my programming partner Claude, to make a new piece of software that will blow your socks off. I like to speak at conferences too, also have organized a few of them. That's me. Not a big fan of running companies, any more than an academic, musician or writer would be. I like connecting with other people via our minds.

Also just saw this piece, dated April 15, that describes the impasse in the WordPress open source developer community. To expect an open process to yield user-level features is imho never going to work. If I were in Matt's shoes, I would ask them to make improvements to the platform, esp the API, and let independent developers work out various different UIs on top of the WordPress OS. It's much more likely to quickly generate new exciting features for specific kinds of users, and recurring revenue, and make WordPress a harder target for your competitors to hit. And more satisfied users who picked the right platform. Most of the PR coming out of Automattic in the last couple of years, being brutally honest, has the opposite effect.

I wrote extensively about Microsoft in the early days of the web, most of it critical, but even so I had many friends up there. My conclusion: when a tech company becomes as dominant as Microsoft was then, or Automattic is today, innovation at a user level is virtually impossible, and not advised, because then you only get one version of each thing, when you need a competitive market in a technology as new as the web was in 1994 and AI is in 2026.

I said to my friends at Microsoft that it's not bad news. At my age, I can't do what I did 20 years ago. Fact. I don't argue with it. For a tech company of A8C's stature, you can be a banker and distributor. All the corporate functions of tech, but not the creative functions. And I would offer your best, most creative, most challenge-seeking developers independence and some seed capital, and always stay in touch with the best, and when they ship an excellent product, write a blog post about what they've done, and make sure the users know that these people are exceptional. Add features to your OS to support what they want to do. And deposit the profits in your bank account. And maybe take a leave of absence and work on one of these startups yourself. I said as much to Bill Gates, and I know he heard it, but he didn't do it. ;-)

PS: Finally Microsoft wasn't humbled by the web, it was the Y2K version of Ram Cram that did them in.

14:28

Russell Coker: Links April 2026 [Planet Debian]

Charles Stross wrote an interesting blog post about the apparent desire of super rich people to kill the poor, it seems that the people in power want to make all the conspiracy theories come true [1].

Wouter wrote an insightful blog post about the need for free firmware [2].

Matthew Garrett wrote an interesting blog post about the potential security issues raised by non-free firmware and firmware updates [3]. Which goes well with Wouter’s post.

Interesting article about fake job adverts with a code sample for the applicant to show their skills which depends on hostile libraries that install a RAT [4]. Do we need Qubes for software development nowadays?

Bruce Schneier wrote an insightful and informative article about the two-tiered Internet access scheme in Iran and how it is bad for society [5].

Caleb Hearth gave an interesting talk Don’t Get Distracted about the often ignored unethical uses of software [6].

Techdirt has an insightful article from 2025 Fascism For First Time Founders about why it’s a bad idea for tech companies to support fascism, this aged very well [7].

Dr. Bret C. Devereaux wrote an informative blog post about why fascists always fail at war, and also authoritarians in general [8].

Bruce Schneier and Nathan Sanders wrote an interesting blog post about the new Japanese political party Team Mirai, we need this sort of party in every country to save democracy [9].

Sam Varghese wrote an insightful article about the current situation in Israel and Iran and the poor performance of Australian journalists in covering the issues [10].

Louis Rossman made a video about the Norwegian Consumer Council’s advertising campaign about Enshittification, he includes an excellent advert that the Norwegians produced [11].

Marga Manterola gave a really good talk at Fosdem 2026 “Free as in Burned Out: Who Really Pays for Open Source?” [12].

14:21

Security updates for Thursday [LWN.net]

Security updates have been issued by AlmaLinux (buildah, firefox, gdk-pixbuf2, giflib, grafana, java-1.8.0-openjdk, java-21-openjdk, LibRaw, OpenEXR, PackageKit, pcs, python3.11, python3.12, python3.9, sudo, tigervnc, vim, xorg-x11-server, xorg-x11-server-Xwayland, yggdrasil, and yggdrasil-worker-package-manager), Debian (calibre, firefox-esr, and openjdk-17), Fedora (asterisk, binaryen, buildah, dokuwiki, lemonldap-ng, libexif, libgcrypt, miniupnpd, openvpn, podman, python3.9, rust-rpm-sequoia, skopeo, and xdg-dbus-proxy), Red Hat (buildah, gdk-pixbuf2, and nodejs:20), SUSE (dnsdist, libheif, openCryptoki, polkit, sed, and xen), and Ubuntu (linux-bluefield, python-marshmallow, and roundcube).

14:14

AI Code Review Only Catches Half of Your Bugs [Radar]

This is the fifth article in a series on agentic engineering and AI-driven development. Read part one here, part two here, part three here, and part four here.

I recently had a taste of humility with my AI-generated code. I live in Park Slope, Brooklyn, and recently I needed to get to the other side of the neighborhood. I thought I’d be clever: I like taking the bus, so I decided to hop on the one that goes right down 7th Avenue. I know I could check the schedule using the MTA’s really useful Bus Time app or website, but it doesn’t take into account walking time from my house or give me a good idea of when to leave. This seemed like a great opportunity to vibe code an app and do some quick AI-driven development.

It took about two minutes for Claude Code to get my new app working. It made a lovely little web UI, I configured my stop and how long it takes me to walk there, and it gave me the perfect departure time.

When I actually walked out the door, the app perfectly predicted my wait. There was just one problem: my bus was nowhere to be seen. What I did see was a bus driving the exact opposite direction down 7th Avenue.

It was pretty obvious what had happened. I needed to go deeper into Brooklyn, not towards Manhattan, and the AI had picked the wrong direction. (Actually, as Cowork pointed out, each stop has its own ID, and it had selected the ID for the wrong stop.) I’d been using Cowork to orchestrate everything, and I could easily have just asked it to go out and check the MTA’s BusTime site for me to make sure the app was working. But I just trusted the AI. As a result, I had to walk. Which is fine—I love walking—but the irony was painful. I had literally just published an article about AI code quality and why you shouldn’t blindly trust it, and here I was doing exactly that.

The app had a bug. But it wasn’t the kind of bug you’d necessarily catch using a typical AI code review prompt. It built, ran, and did a perfectly fine job parsing the JSON from the MTA API. But if I’d started with a simple requirement—even just a user story like “as a Park Slope resident, I want to catch the B69 headed towards Kensington so I can get deeper into Brooklyn”—the AI would have built it differently. The problem is that AI can only build the thing you tell it to build, which isn’t necessarily the thing you wanted it to build. AI is really good at writing “correct” code that does the wrong thing.

My Brooklyn bus detour was a minor inconvenience. But it was a really useful, small-scale example of what I kept running into in my larger projects, too. There’s an entire class of bugs that you simply can’t find with structural analysis—no linter, no static analyzer, no AI code reviewer will catch them—because the code isn’t wrong in any way that’s visible from the code alone. You need to know what the code was supposed to do. You need to know the intent.

The data on why requirements matter goes back decades. Back in the 1990s, for example, the Standish CHAOS reports were a big eye-opener for me and a lot of other people in the industry, large-scale data confirming what we’d been seeing on our own projects: that the most expensive defects trace back to misunderstood or missing requirements. Those reports really underscored the idea that poor requirements management, and specifically incomplete or frequently changing specifications, were among the primary drivers of IT project failures. (And, as far as I can tell, they still are, and AI isn’t helping things—see my O’Reilly Radar article, “Prompt Engineering Is Requirements Engineering”).

The idea that requirements problems really are the source of the most expensive kind of defects should make intuitive sense: If you build the wrong thing, you have to tear it apart and rebuild it. That’s why I made requirements the foundation of the Quality Playbook, an open-source skill for AI tools like Claude Code, Cursor, and Copilot that I introduced in the previous article. I’ve spent decades doing test-driven development, partnering with QA teams, welcoming the harshest code reviews from teammates who don’t pull punches—and that experience led me to build a tool that uses AI to bring back quality engineering practices the industry abandoned decades ago. I’ve tested it against a wide range of open-source projects in Go, Java, Rust, Python, and C#, from small utilities to widely-used libraries with tens of thousands of stars, and it’s found real bugs in almost every project it’s come across, including ones that have been confirmed and merged upstream.

I think there are a lot of wider lessons we can learn from my experience using requirements to help AI find bugs—especially security bugs. So in this article, I want to focus on the single most important thing I’ve learned from building it: everything depends on requirements. Not just any requirements, but a specific kind of requirement that most projects don’t have, that most AI tools don’t ask for, and that turns out to be the key to making AI actually useful for verifying code quality.

Spec-driven development and what it misses

Developers using AI tools have been rediscovering the value of writing things down before asking the AI to build them. Spec-driven development (SDD) has become very popular, and for good reason. Addy Osmani wrote an excellent piece on this, “How to Write a Good Spec for AI Agents,” and the core idea is sound: If you write a clear specification of what you want built, the AI produces dramatically better results than if you just describe it in a chat prompt and hope for the best.

I think SDD is important, and I’d encourage any developer working with AI to adopt it. But as I was building the Quality Playbook, I discovered that SDD has a blind spot that matters a lot for code quality. An SDD spec describes the how—what the implementation should look like. It tells the AI “implement a duplicate key check” or “add a retry mechanism with exponential backoff” or “create a REST endpoint that returns paginated results.” That’s useful for building things. But it’s not enough for verifying them.

A requirement, by contrast, doesn’t say “implement a duplicate key check.” It says “users depend on Gson to reject ambiguous input so they don’t silently accept corrupted data.” The AI can reason about the second one in ways it can’t reason about the first, because the second one has the purpose attached. When the AI knows the purpose, it can evaluate whether the code actually fulfills that purpose across all the edge cases, not just the ones the spec explicitly listed. That’s how the Quality Playbook caught a bug in Google’s Gson library, one of the most widely used JSON libraries in Java.

I think it’s worth digging into that particular bug, because it’s a great example of just how powerful requirements analysis can be for finding defects. The playbook derived null-handling requirements from Gson’s own community—GitHub issues #676, #913, #948, and #1558, some dating back to 2016—then used those requirements to find that duplicate keys were silently accepted when the first value was null. It confirmed the bug by generating a failing test, then patched the code and verified the test passed. I’ve used Gson for years and done a lot of work with Java serialization, so I read the code and the fix myself before submitting anything—trust but verify. The fix was merged as https://github.com/google/gson/pull/3006, confirmed by Google’s own test suite.
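The actual fix lives in Gson's Java code (PR #3006), but the requirement itself ("reject ambiguous input, never silently resolve it") is easy to illustrate in a few lines of Python using the standard library's `json.loads` and its `object_pairs_hook` parameter. Note the comment about the null edge case, which is the shape of the bug described above: a naive presence check like `obj.get(key) is not None` would wave through a duplicate whose first value was null.

```python
import json

def reject_duplicates(pairs):
    """object_pairs_hook enforcing the requirement: duplicate keys are
    rejected, even when the first value is null. A naive check such as
    'obj.get(key) is not None' would miss exactly that edge case."""
    obj = {}
    for key, value in pairs:
        if key in obj:  # True even when obj[key] is None
            raise ValueError(f"duplicate key: {key!r}")
        obj[key] = value
    return obj

# Unambiguous input parses normally:
print(json.loads('{"a": 1, "b": 2}', object_pairs_hook=reject_duplicates))

# Ambiguous input is rejected, including the null-first case:
try:
    json.loads('{"a": null, "a": 2}', object_pairs_hook=reject_duplicates)
except ValueError as e:
    print(e)  # prints: duplicate key: 'a'
```

The spec-level instruction ("implement a duplicate key check") could be satisfied by the naive version; only the purpose-level requirement tells you the null case must be covered too.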

That bug had been hiding in plain sight for years, through thousands of tests and countless code reviews. But structural analysis alone might never have found it, because you needed the requirement to know it was wrong.

This distinction might sound academic, but it has very concrete consequences for whether your AI can actually find bugs in your code.

About half of all security bugs are invisible to structural analysis

The security world has known about the limits of structural analysis for a long time. The NIST SATE evaluations found that the best static analysis tools plateaued at around 50-60% detection rates for security vulnerabilities. Gary McGraw’s Software Security: Building Security In (Addison-Wesley, 2006) explains why: Roughly 50% of security defects are implementation bugs, and the other 50% are design flaws. Static analysis tools target the implementation bugs—buffer overflows, SQL injection, format string vulnerabilities—because those are pattern-matchable. But design flaws are about intent: The system’s architecture doesn’t enforce the security properties it’s supposed to enforce, and no amount of scanning the code will reveal that. A 2024 study by Charoenwet et al. (ISSTA 2024) confirmed this is still the case: They tested five static analysis tools against 815 real vulnerability-contributing commits and found that 22% of vulnerable commits went entirely undetected, and 76% of warnings in vulnerable functions were irrelevant to the actual vulnerability. The pattern is consistent across two decades of research: There’s a ceiling on what you can find by analyzing code, and it’s around half.

There’s a good reason for that limitation: the intent ceiling. A structural analysis tool is limited to reading the code and looking at what it does; it has no way to take into account what the developer intended it to do.

When an AI does a code review without requirements, it’s limited to structural analysis: pattern matching, code smell detection, race condition analysis. It can ask “does this look right?” but it can’t ask “does this do what it’s supposed to do?” because it doesn’t know what the code is supposed to do. Structural review catches genuinely important stuff—race conditions, null pointer issues, resource leaks, concurrency bugs. A structural reviewer looking at a shell script will catch a missing fi, a bad variable expansion, a race condition. Structural review is useful, and structural review is what most AI code review tools do today.

But about half of all security defects are intent violations: things the code doesn’t do that it was supposed to do, or things it does that it wasn’t supposed to do. They’re invisible without a specification to check against, and no tool will find them by looking at code that is, structurally, perfectly sound. A structural reviewer looking at a script that’s, say, used to check router configuration files might find well-formed bash, correct syntax, proper quoting, and code that looks like it works and doesn’t match known antipatterns. It wouldn’t know the script is only validating three of the five access control rules it’s supposed to enforce, because that’s a requirements question, not a syntax question.
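To make that concrete, here is a minimal Python sketch of the same failure mode (the hypothetical router script would be bash; every name and rule below is invented for illustration): a validator that passes every structural check while enforcing only three of the five rules the specification demands.

```python
# Hypothetical sketch (not any real script): a validator that is
# structurally clean yet enforces only three of five required rules.
REQUIRED_RULES = {  # the specification: five rules every config must satisfy
    "deny_telnet", "require_ssh_v2", "deny_default_password",
    "require_acl_on_vty", "deny_http_admin",
}

def validate(config):
    """Well-formed, properly quoted, lint-clean -- and incomplete."""
    violations = []
    if config.get("telnet_enabled"):
        violations.append("deny_telnet")
    if config.get("ssh_version") != 2:
        violations.append("require_ssh_v2")
    if config.get("password") == "admin":
        violations.append("deny_default_password")
    # require_acl_on_vty and deny_http_admin are never checked. No syntax
    # tool can flag that; only the requirement list makes it visible.
    return violations

# A requirements-driven probe: violate everything, see what gets reported.
worst_case = {"telnet_enabled": True, "ssh_version": 1, "password": "admin",
              "http_admin_enabled": True, "vty_acl": None}
unenforced = REQUIRED_RULES - set(validate(worst_case))
print(sorted(unenforced))  # the two rules the validator never enforces
```

The point of the probe is that it needs the requirement list to exist: without `REQUIRED_RULES` written down somewhere, there is nothing to subtract from.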

Or, more personally for me, this is what happened with my bus tracker app: The JSON parsing was flawless, the UI was correct, the timing logic worked perfectly. The only problem was that it showed buses headed towards Manhattan when I needed to go deeper into Brooklyn—and no structural analysis would ever catch that, because you need to know which direction I intended to go. That’s me and my very clever AI hitting the intent ceiling.

The intent ceiling is a security problem

This is where it gets really serious, because security vulnerabilities are some of the most dangerous members of this class of invisible bugs.

Think about what a missing authorization check looks like to an AI code reviewer. Let’s say you’ve got a web endpoint with a well-formed HTTP handler, properly sanitized inputs, and a safe database query. The code is clean, and passes every structural check and static analysis tool you’ve thrown at it. Now you’re testing it and, much to your dismay, you discover that the endpoint lets any authenticated user delete any other user’s data because nobody ever wrote down the requirement that says “only administrators can perform deletions.” That’s CWE-862: Missing Authorization, and it rose to #9 on the 2024 CWE Top 25 most dangerous software weaknesses.
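A hedged sketch of that endpoint in miniature (plain Python, with hypothetical users and records; not any real codebase):

```python
# Hypothetical sketch of CWE-862 (Missing Authorization). The handler
# authenticates correctly; what's absent is the requirement "only
# administrators can perform deletions" -- and an absence is invisible
# to structural review.
users = {"alice": {"role": "admin"}, "bob": {"role": "user"}}
records = {"r1": {"owner": "alice", "data": "..."}}

def delete_record(session_user, record_id):
    if session_user not in users:   # authentication: present and correct
        return False
    # No authorization check here. A structural reviewer sees clean code;
    # only the written requirement reveals the missing admin gate.
    return records.pop(record_id, None) is not None

print(delete_record("bob", "r1"))  # True: any authenticated user deletes anything
```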

That’s not a coding error! It’s a missing requirement.

That’s McGraw’s point: About half of all security defects aren’t implementation bugs at all. They’re design flaws, places where the system’s architecture doesn’t enforce the security properties it was supposed to enforce. A cross-site scripting vulnerability isn’t always a failure to sanitize input. Sometimes it’s a failure to define which inputs are trusted and which aren’t. A privilege escalation isn’t always a broken access check. Sometimes there was never an access check to begin with because nobody specified that one was needed. These are intent violations and they’re invisible to any tool that doesn’t know what the software is supposed to prevent.

AI code review tools today are very good at catching the implementation half of McGraw’s split. They can spot a SQL injection pattern, flag an unsafe deserialization, identify a buffer overflow. But they’re working on the same side of the 50/50 line that static analysis has always worked on. The design half—the missing authorization checks, the unspecified trust boundaries, the security properties that were never written down—requires the same thing that catching my bus tracker bug required: knowing what the software was supposed to do in the first place.

How the Quality Playbook derives requirements (and how you can too!)

The problem most projects face is that they don’t have formal requirements. What they have is code, documentation, commit messages, chat history, README files, and maybe some design docs. The question is how to get from that mess to a specification that an AI can actually use for verification.

The key insight I had while building the playbook was that every previous approach I tried asked the model to do two things at once: figure out what contracts exist AND write requirements for them. That doesn’t work—the model runs out of attention trying to hold the entire behavioral surface in its head while also producing formatted requirements. So I split them apart into four steps: First, have the AI read each source file and write down every behavioral contract it observes as a simple list. Second, derive requirements from those contracts plus the documentation. Third, check whether every contract is covered by a requirement. Fourth, assert completeness—and if there are gaps, go back to step one for the files with gaps.

The key idea is that the contracts file is external memory. When the model “forgets” about a behavioral contract it noticed earlier, that forgetting is normally invisible. With a contracts file, every observation is written down before any requirements work begins, so an uncovered contract is a visible, greppable gap.
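Here’s a minimal sketch of that coverage check, assuming invented contract and requirement ID formats (the playbook’s actual file formats may differ):

```python
# Minimal sketch of the contracts-file idea: observations are written down
# first, so an uncovered contract is a visible, greppable gap. The ID
# conventions below are assumptions, not the playbook's real format.
contracts = """\
C-001 parser: duplicate keys with a null first value must be rejected
C-002 parser: empty input returns an empty object
C-003 writer: output is valid UTF-8
"""

requirements = """\
R-101 covers C-002 (empty input parses to an empty object)
R-102 covers C-003 (all emitted bytes decode as UTF-8)
"""

contract_ids = {line.split()[0] for line in contracts.splitlines()}
covered = {tok for line in requirements.splitlines()
           for tok in line.split() if tok.startswith("C-")}
gaps = sorted(contract_ids - covered)
print(gaps)  # ['C-001'] -- the uncovered contract, impossible to "forget"
```

Because the contracts are on disk rather than in the model’s context, the coverage check is a mechanical set difference instead of an act of memory.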

You don’t need the Quality Playbook to do this—you can apply the same technique with any AI coding tool that you’re already using. Here’s what I’d recommend:

  • Write down what your software is supposed to guarantee. Not just what it does—what it’s supposed to do, for whom, under what conditions. If you’re practicing spec-driven development, you’re already partway there. The next step is adding the why: Why does this behavior matter, who depends on it, what goes wrong if it fails? That’s the difference between a spec and a requirement, and it’s the difference between an AI that can build your code and an AI that can verify it.
  • Feed the AI your intent, not just your code. The intent is already sitting in your chat history, your design discussions, your Slack threads, your support tickets. Every Claude export, every Gemini conversation, every Cowork transcript contains design intent that never made it into specifications: why a function was written a certain way, what failure prompted an architectural decision, what tradeoffs were discussed before choosing an approach. The design intent that used to require a human to extract and document is now sitting in your chat logs. Your AI can read the transcripts and extract the why.
  • Look for the negative requirements. What should your software not do? What states should be impossible? What data should never be exposed? These negative requirements are often the most valuable because they define boundaries that structural review can’t see. The missing authorization bug was a negative requirement: Unauthenticated users must not be able to delete other users’ data. The Gson bug was a negative requirement: Duplicate keys must not be silently accepted when the first value is null. If you can articulate what your software must never do, you’ve given the AI something powerful to check against.
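To show what an executable negative requirement looks like, here is a Python analogue of the duplicate-key case (this is not the Gson fix itself; Python’s `json` module also keeps the last of duplicate keys by default, and `object_pairs_hook` is the standard-library way to enforce the boundary):

```python
import json

# A negative requirement made executable: "duplicate keys must not be
# silently accepted." By default the parser keeps the last value; a hook
# turns the requirement into a check the parser actually runs.
def reject_duplicates(pairs):
    obj = {}
    for key, value in pairs:
        if key in obj:
            raise ValueError(f"duplicate key: {key!r}")
        obj[key] = value
    return obj

doc = '{"a": null, "a": 2}'
print(json.loads(doc))  # {'a': 2} -- the default silently accepts it
try:
    json.loads(doc, object_pairs_hook=reject_duplicates)
except ValueError as e:
    print(e)  # duplicate key: 'a'
```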

In the next article, I’ll talk about context management—the skill that actually determines whether your AI sessions produce good work or mediocre work. Everything I’ve described here depends on the AI having the right information at the right time, and it turns out that managing what the AI knows (and what it forgets) is an engineering discipline in its own right. I’ll cover how I went from running 15 million tokens in a single prompt to splitting the playbook into independent phases with zero context carryover, and why that transition worked on the first try.

The Quality Playbook is open source and works with GitHub Copilot, Cursor, and Claude Code. It’s also available as part of awesome-copilot.


Disclosure: Aspects of the methodology described in this article are the subject of US Provisional Patent Application No. 64/044,178, filed April 20, 2026 by the author. The open-source Quality Playbook project (Apache 2.0) includes a patent grant to users of that project under the terms of the Apache 2.0 license.

12:49

CodeSOD: Cancel Catch [The Daily WTF]

"This WTF is in Matlab" almost feels like cheating. At one place I worked, somebody's job was struggling through a mountain of Matlab code and porting it into C. "This Matlab code looks like it was written by an alien," also doesn't really get much traction- all Matlab code looks like it was written by an alien. This falls into the realm of "Researchers use Matlab, researchers may be very smart about their domain, but generally don't know the first thing about writing maintainable code, because that's not their job."

But let's take a look at some MatLab Carl W found:

    try
        if (~isempty(fieldnames(bigStruct)) && isfield(bigStruct,'pathName'))
            [FileName, PathName] = uigetfile(bigStruct.pathName);
        else
            [FileName, PathName] = uigetfile(lastPath); %lastPath holds previous path
        end
    catch
        bigStruct = struct;
    end

The uigetfile function opens a file dialog box. When the user selects a file, FileName holds the filename, PathName holds the containing path. If the user doesn't select a valid file, or clicks "Cancel", both of those variables get set to 0. It's then up to the caller to check the return value and decide what happens next.

Which is not what happens here, obviously. The developer responsible seems to believe that it maybe throws an exception? And they can just catch it? Carl's best guess is that this is a "weird" way to catch the cancel button. But it does mean that FileName and PathName get set to 0, and those zeros propagate until something finally tries to open those files, at which point everything blows up and the user doesn't know why.


12:07

The day I logged 1 in every 2000 public IPv4: visualizing the AI scraper DDoS [OSnews]

What if you run a few online services for you and your friends, like a small git instance and a grocery list service, but you get absolutely hammered by “AI” scrapers?

I cannot impress upon you, reader, that this is not only an attack that is coordinated, it is an attack that is distributed.

I run a small set of services, basically only for me and my friends. I am not a hyperscaler, I am not a tech company, I am not even a small platform. I have a git forge where I put the shit I make, and a couple other services where me and my friends backup our files or write our grocery lists. I am not fucking Meta and I cannot scale the fuck up just because OpenAI or Anthropic or Meta or whoever is training a model that this week wants to suck all the content out of my VPS ONCE MORE until it’s dry.

↫ lux at VulpineCitrus

So how much traffic did the author of this piece, lux, get from “AI” scraping bots? Within a time period of 24 hours, they were hammered by 2,040,670 unique IP addresses, 98% of which were IPv4 addresses, which means that 1 out of every 2,000 publicly available IPv4 addresses was involved in the scraping. Together, they performed over 5 million requests. And just to reiterate: they were scraping a few very small, friends-only services run by some random person. This is absolutely insane.

If, at this point in time, with everything that we know about just how deeply unethical every single aspect of “AI” is, you’re still using and promoting it, what is wrong with you? If you’re so addicted to your “AI” girlfriend’s unending stream of useless, forgettable sycophantic slop, despite being aware of the damage you’re doing to those around you, there’s something seriously wrong with you, and you desperately need professional help. You don’t need any of this. The world doesn’t need any of this. Nobody likes the slop “AI” regurgitates, and nobody likes you for enabling it.

Get help.

Fast16 Malware [Schneier on Security]

Researchers have reverse-engineered a piece of malware named Fast16. It’s almost certainly state-sponsored, probably US in origin, and was deployed against Iran years before Stuxnet:

“…the Fast16 malware was designed to carry out the most subtle form of sabotage ever seen in an in-the-wild malware tool: By automatically spreading across networks and then silently manipulating computation processes in certain software applications that perform high-precision mathematical calculations and simulate physical phenomena, Fast16 can alter the results of those programs to cause failures that range from faulty research results to catastrophic damage to real-world equipment.”

Another news article.

Lots of interesting details at the links.

11:14

Grrl Power #1456 – 1.999th base [Grrl Power]

When Maxima came back from her first… well, not date, I guess. After she ditched the bar with Rowan. “Date” to me implies a prearranged thing, but whatever. They met at the bar, then went to a second location, so maybe that counts. Anyway, when she got back from that, she asked Dabbler to fix up the choker so her skin would feel less “weirdly silky and slippery,” so it’s obvious the second phase of that encounter went pretty well. It’s also possible that Max’s teammates are right and she’s a little pent up because she doesn’t seem like the kind of woman that would wind up making out with a guy after meeting him at a bar and only knowing him for… let’s say two hours? Doesn’t mean Rowan immediately got to second base, but even some kissing with his hands grabbing on to her waist under her shirt, he still could have been like, “Wow, uh, what kind of skin cream do you use?” Changing how Max feels to the touch is well outside the scope of what that choker was originally intended for, but Dabbler is super invested in getting Max laid, so she put some extra magical elbow grease into it.

What we’re seeing on this page is a previously unrevealed outing, since her trip to the fire station was cut short. I’m not sure when it took place… I think it’d have to be after the firehouse… yeah, maybe the day before they left on this trip? They were trying to keep to a tight schedule to minimize Max and Sydney’s “thousands of light years from Earth” time. Cora mentioned the tournament with about a week before she knew they’d have to leave for it, since she knew there’d be some hemming and hawing and possibly even some bureaucracy involved, but Max and Faulk made up their minds pretty quickly, giving them a few days to prepare.

Max is definitely not a “boobs to the face in public” kind of gal usually, but being in disguise can be fairly emboldening. And that park is weirdly bereft of other picnicers, extreme Frisbeers, kite flyers, and other couples sneaking in some second base time. Also, I think she really likes Rowan, and I assume if you’re a gal who likes a guy, at some point, maybe you want to smoosh your boobs in his face? Especially if he’s been missing more subtle signals. I’m just saying, it’s a potential weapon in the arsenal. But also in a general “I like you, so please enjoy this boob” sort of way.

In any case, despite their continued awkwardness, they might be rapidly approaching the point where Max will have to sit him down and have that dreaded discussion about chokers and why she’s always wearing one.


Finally, here we go! I took the suggestion that I just use an existing panel for a starting point, thinking it would save time… I guess it technically did, but a 5 character vote incentive just isn’t the way to go.

Patreon, of course, has actual topless version.

 

 


Double res version will be posted over at Patreon. Feel free to contribute as much as you like.

10:21

One thing at a time [Seth's Blog]

Multi-tasking is mostly an illusion.

What we’re actually doing is slicing our focus, jumping from one thing to another and then back again.

All that jumping decreases our productivity and worse, erodes our peace of mind.

You’re only doing one thing at a time anyway. Might as well embrace that instead of spending so much time shifting gears.

05:56

Otto Kekäläinen: Mentoring Mondays for aspiring Debian contributors [Planet Debian]

Featured image of post Mentoring Mondays for aspiring Debian contributors

I mentor several people in Debian, and have been repeatedly asked to offer an opportunity to ask questions on a live call. I have now started a recurring video call for exactly that, which I call Mentoring Mondays, and it is open for anyone aspiring to contribute to Debian, one of the oldest and most widely used Linux distributions.

Mentoring Mondays have already been happening for the past few Mondays, and this week we had a record 20 people on the call. During the calls so far we have had a demo of updating a package for a new upstream release using gbp, and of how to create a Merge Request on Salsa for a new upstream version. There is clearly a need for this, so I am announcing this now also on my blog, just as I have publicly announced that I offer mentoring for aspiring Debian contributors.

What is Mentoring Mondays?

Mentoring Mondays is a recurring video call that lasts roughly 45 minutes with the agenda:

  • Weekly walk-through: demo of something in Debian packaging, explaining the what and why
  • Discussion on pros and cons to help participants develop their judgment
  • Questions & answers on Debian packaging, or open source contributing in general

This is ideal for you if you:

  • Have built a .deb package at least once and want to do it better
  • Are stuck on a specific Debian packaging bug and need guidance
  • Want to understand how Debian Developers actually work day-to-day

The call is mainly intended for those who want to contribute to Debian (or Debian derivatives, with Ubuntu being the most popular), but anyone can join to learn about things related to contributing to a Linux distribution. Please note that video chat uses Debian Social Jitsi. Joining the call requires authentication using a Salsa account, which anyone contributing to Debian should have anyway.

Calls are not recorded, so participants can chat freely, and are also encouraged to be on-camera for an enhanced sense of community.

Next call: Monday May 4th, 2026

Make sure you are logged into Salsa first, before opening the call on Debian’s Jitsi instance.

Matrix channel and future meeting time announcements

Please join the Matrix channel #mentoringmondays:matrix.debian.social if you plan to attend Mentoring Mondays. All future meeting times will be announced there. It is also the channel to post questions about Debian packaging to be answered during the call.

The current meeting time is friendly to people in European, Asian and Australian time zones, and will repeat at the same time slot on:

  • May 11th, 2026 at 12:30 UTC
  • May 18th, 2026 at 12:30 UTC
  • May 25th, 2026 at 12:30 UTC
  • June 1st, 2026 at 12:30 UTC
  • June 8th, 2026 at 12:30 UTC

Starting in mid-June the meeting time will change to accommodate participation in different time zones.

Spread the word

Feel free to extend the invite to anyone you think might be interested in joining!

If you mention this on social media, please post using tag #mentoringmondays, or simply boost the existing posts on the social media of your preference: Mastodon, Lemmy, Reddit, Bluesky, LinkedIn, Farcaster, X.

Thanks

A big thanks to Jason Kregting for helping organize. I would also like to thank in advance all the Debian Developers who are able to join the call and be available to participate in discussions and help answer questions.

04:21

Sergio Cipriano: How to build reverse dependencies using Salsa CI [Planet Debian]

How to build reverse dependencies using Salsa CI

Last week, I attended MiniDebConf Campinas, and one of my favorites talks was "Salsa CI, showing features that almost nobody knows" by Aquila Macedo.

One of the things I learned is that we can easily build reverse dependencies using:

$ git push -o ci.variable="SALSA_CI_DISABLE_BUILD_REVERSE_DEPENDENCIES=0"

I tried this option before uploading typer version 0.20.0-1:

example of salsa ci build rdeps working

This is an amazing feature. Thanks to everyone involved in making it happen!

02:28

Coming Clean [QC RSS]

when Momray said "Cubetown messes up sometimes" she specifically meant Mooby

01:35

[$] LWN.net Weekly Edition for April 30, 2026 [LWN.net]

Inside this week's LWN.net Weekly Edition:

  • Front: Famfs; Python packaging council; Zig concurrency; pages and folios; Strawberry music manager; 7.1 merge window.
  • Briefs: GnuPG 2.5.19; Copy Fail; Plasma security; Fedora 44; Ubuntu 26.04; Niri 26.04; pip 26.1; RIP Seth Nickell; RIP Tomáš Kalibera; Quotes; ...
  • Announcements: Newsletters, conferences, security updates, patches, and more.

A security bug in AEAD sockets [LWN.net]

Security analysis firm Xint has disclosed a security bug in the Linux kernel that allows for arbitrary 4-byte writes to the page cache, and which has been present since 2017. The vulnerability has been fixed in mainline kernels. A proof-of-concept script demonstrates how to use the flaw to corrupt a setuid binary, which works on multiple distributions, by requesting an AEAD-encrypted socket from user space and splicing a particular payload into it. A supplemental blog post gives more details about the discovery and remediation.

A core primitive underlying this bug is splice(): it transfers data between file descriptors and pipes without copying, passing page cache pages by reference. When a user splices a file into a pipe and then into an AF_ALG socket, the socket's input scatterlist holds direct references to the kernel's cached pages of that file. The pages are not duplicated; the scatterlist entries point at the same physical pages that back every read(), mmap(), and execve() of that file.

00:28

Wednesday, 29 April

23:42

Something Just Right. [Looking For Group]

While we’re not ready with the BIG announcement just yet, I am prepared to share a smaller yet exciting one.  As of today, we’ve restarted LFG Animated Shorts on social media, which you can see over at- Our Facebook. Our
Read More

The post Something Just Right. appeared first on Looking For Group.

22:56

The Rich Roe of Wisdom [Penny Arcade]

I made a joke about the feeling of being observed somehow by the creators of The Killing Stone, a crack squad of Triple A escapees who keep making the weirdest fucking crap. Deep, deep down there is someone there who knew that they would sell at least one copy of the game - and not merely to their mom, like usual.

22:07

Earliest 86-DOS and PC-DOS code released as open source [OSnews]

Microsoft is continuing its efforts to release early versions of DOS as open source, and today we’ve got a special one.

We’re stoked today to showcase some newly available source code materials that provide an even earlier look into the development of PC-DOS 1.00, the first release of DOS for the IBM PC. A dedicated team of historians and preservationists led by Yufeng Gao and Rich Cini has worked to locate, scan, and transcribe the stack of DOS-era source listings from Tim Paterson, the author of DOS.

The listings include sources to the 86-DOS 1.00 kernel, several development snapshots of the PC-DOS 1.00 kernel, and some well-known utilities such as CHKDSK. Not only were these assembler listings, but there were also listings of the assembler itself! This work offers rare insight into how MS-DOS/PC-DOS came to be, and how operating system development was done at the time, not as it was later reconstructed.

↫ Stacey Haffner and Scott Hanselman

It’s wild that the source code had to be transcribed from paper, including notes and changes. You can find more information about the process on Gao’s website and Cini’s website.

Apple gives up on Vision Pro, disbands Vision Pro team [OSnews]

When Apple unveiled the Vision Pro, almost three (!) years ago, I concluded:

If there’s one company that can convince people to spend $3500 to strap an isolating dystopian glowing robot mask onto their faces it’s Apple, but I still have a hard time believing this is what people want.

↫ Thom Holwerda at OSNews (quoting myself is weird)

MacRumors’ Juli Clover, today:

Apple has all but given up on the Vision Pro after the M5 model failed to revitalize interest in the device, MacRumors has learned. Apple updated the Vision Pro with a faster M5 chip and a more comfortable band in October 2025, but there were no other hardware changes, and consumers still weren’t interested.

[…]

Apple has apparently stopped work on the Vision Pro and the Vision Pro team has been redistributed to other teams within Apple. Some former Vision Pro team members are working on Siri, which is not a surprise as Vision Pro chief Mike Rockwell has been leading the Siri team since March 2025.

↫ Juli Clover at MacRumors

VR – what the Vision Pro is, whether Apple’s marketing likes to say it or not – has proven to be good for exactly two things: games and porn. The Vision Pro has neither. It was destined to be a flop from the start, as nobody wants to strap an uncomfortable computer to their face that does less than all of the other computers they already have, and what it does do, it does worse.

I do wonder if this makes the Vision Pro the most expensive flop in human history. Has any company ever spent more on a product that failed this spectacularly?

21:35

The Porch 2.0 [Whatever]

You may remember that last month the Scalzi Compound was hit with 80mph winds and as a result part of our porch railing was blown out, which was the excuse we needed to replace it entirely with something more robust. That replacement is now here: the new posts are thicker and heavier (6 inches by 6 inches, rather than the previous four by four) and reinforced above and below. The new railing (and the post cladding) is made from composites so it will last longer and look better.

Although I don’t want to tempt fate by saying this new porch railing and its support posts will laugh in the face of 80mph winds, in point of fact they should be fine in anything short of a tornado, and if we have a tornado, we will have a whole host of other problems, and the porch railing will be way down the list.

The porch railing taken care of, our back deck is next on our contractor’s list of things to renovate; stay tuned for that.

— JS

19:49

Yukari Hafner: On Lisp, LLMs, and Community [Planet Lisp]

https://studio.tymoon.eu/api/studio/file?id=3892

In 2015 in London I attended my first European Lisp Symposium. I was 21 at the time, and while this wasn't my first time abroad on my own, it was still a pretty stressful affair. I still remember it pretty clearly to this day: meeting Robert Strandh, Zach Beane, Didier Verna, Daniel Kochmański and many other people I'd previously admired from afar through many discussions on IRC. It was an important event for me, and was the first time I'd felt like I was in a group of people I could talk with about my interests and ambitions.

Last year in 2025 I was the local chair for ELS in Zürich. It was a stressful time and I don't remember much of it other than how the stage looked, the food, and me rushing all over to get supplies and take care of other emergencies. I barely talked to anyone because I was either rushing about, stressed, or too tired.

In that time, my life has changed significantly. Over the years I took on more and more organisational roles for ELS itself: remaking the website, handling the transition to a hybrid online conference, handling the live streaming on-site, and last year being local chair.

But for other parts of the broader Lisp community I gradually changed in the opposite direction: I stopped religiously reading the #lisp/#commonlisp IRC channels. I left the Lisp Discord. I stopped replying on and ultimately altogether reading the /r/lisp subreddit. I stopped blogging about what was going on both in other places and with my own projects.

All of these changes happened over time as I found myself with less tolerance for things that annoyed me and wasted my time and energy. The endless debates about why there wasn't a new standard, the constant humm-hawwing about what """the community""" should do, why Lisp wasn't more widely used if it's so great, someone starting yet another project that was already done instead of contributing to an existing implementation, and so on and so forth.

And then I found myself thinking today: "gee, I'm not very excited to go to ELS'26, huh? Whatever happened?" I've already booked my flight and hotel, and I'll be there anyway, partly because I have to for organisational reasons. But now that I'm thinking about how I feel, I can't say for certain if I will be back next year, too. Both for financial and emotional reasons.

In recent years I've found myself more and more disconnected with male-dominated spaces in general. I don't feel at home in them. I'm already not a very social person and struggle with any kind of gathering that has more than 6 people, but a lot more so still if it's mostly men. Not necessarily because I feel like I'm in any kind of danger, but simply because I don't feel like I belong. And... you know, that's sad. Obviously me leaving won't make the situation better for the other women that do attend, but that's the dilemma with all of these situations: unless the organisation creates intentional pressure to correct the situation, it will inevitably only reinforce itself.[1]

And then there's what I can, in the nicest way, only describe as "The LLM Situation," though I will be increasingly un-nice going forward. As of early this year SBCL has happily accepted patches that are authored by or with the use of LLMs, and the maintainers have rebuffed complaints about this practise. The mailing list has also gotten its fair share of useless blather by apologists and pointless drivel dreamed up by LLMs that only wastes everyone's time, to the point where I had to just stop reading it altogether. A few maintainers of other significant projects seem to also have embraced the capitalist wasteland mass exploitation machine that disguises itself as "technology."

On the other side of things as the lead developer of Shirakumo I've decided to put out a blanket ban on all of this garbage. I do not care if LLMs work at all, or if they will ever work, or whatever. The usability of LLMs is completely irrelevant. By using them you are happily handing over the single last remaining shred of your human spirit to the capitalists to help them burn everyone else and the world with it to the ground.

I think back to the impromptu "LLM roundtable" discussion that took place at the end of ELS last year, along with the usual apologist bullshit that was spread in the ELS Signal group at the time, and some of the lightning talks that were shown. And as I think about this, I am filled with trepidation about the coming conference.

Obviously I have no idea what it will be like yet, and I have no idea what the programme will be, nor what people will be there, or what the general vibe is going to be. But nevertheless, I really hope I won't have to "crash out" as the kids say. I already lost my mind last year, seemingly being the only one that wanted to hold a firm stance against this wave of shit at the time.

So what does this all mean going forward? Well, for just now, nothing. I'll continue to be in the places I have dug out myself: mastodon, the shirakumo lichat/libera channel, my patreon, and other small, purpose-driven communities. But it's very possible I'll be leaving ELS behind me permanently after this year, cutting off even the last part of the community that used to be most of my world.

Regardless, I will still be working on my Lisp projects. If nothing else, one of the nice things about the looming tower of software I've built over all these years is that I am in control of the vast majority of it, and replacing any particular part I didn't write should it get enshittified is not that big of an endeavour.

Make no mistake though: I will continue to be increasingly outspoken and annoying about political matters that I consider important and relevant, and this will also be visible in the software I write, be that in licensing, ecosystem integration, or documentation.

I hope that more people will speak out publicly about their stance. It's important to show what you stand for, even if you're just a small part. What is considered "normal" and acceptable is only ever a matter of what people get to see, regardless of how prevalent that stance is among the population. Currently people are getting to see a lot of folks proudly and loudly making trash and littering it all over the place. This normalisation is dangerous, because it makes the average joes think it's OK for them to do it too, or even that they should be doing it.

Just the same way as any other social movement, you 🫵 play a role in it, and your voice matters. You can use your voice for the betterment of humans, use it for the betterment of the ghouls feeding off of us, or stay silent and let the ghouls feed.

See you at ELS'26 in Kraków!


  1. A very dramatic but clear demonstration of this principle is found in the "Nazi Bar" anecdote. People who don't feel comfortable will simply leave, even when there is no explicit and obvious push for them to do so.

19:07

Wait, you can use WordLand [Scripting News]

I wrote earlier

  • More thoughts on Automattic's new short-form blogging app. I wish I could use WordLand to post to it. That would make things so simple. But their limits are like Twitter’s limits. Tiny little textboxes. And the funny thing is the storage system behind it doesn’t have any of those limits. It’s like they want to be sure you still have to use the standard WordPress user interface? I wouldn’t be surprised if the thinking behind it was like that.

Andrew Shell wrote a post saying that I was wrong: I could use WordLand to post to Automattic's new short form blogging "app" -- Andrew says it's not an app, it's a plugin, but I don't really know what the limits of each form are. I can do pretty much anything from the JS code in a WordPress page. No matter, Andrew is right: I can write a post in WordLand and it does show up in their product (which needs a shorter name btw).

But it's even better than you might think. They pass through the styling and links. Since WordPress supports the full range of web text, they would have to specifically keep it out, but that's not friendly to writers and to the web, of course.

A screenshot of the post in WordLand.

And the view of it on their social network. Look, the links are there and they work. Yehi! Let's Go Web!

They also don't enforce the character limit if the post is coming from outside their user interface, so WordLand posts can go on and on. I have one post that's already over 1000 characters. Ooops. It's really weird that Automattic opened this door. We should all talk about this and ask whether we should do the revolutionary thing here: instead of tiptoeing into the social web, blow the doors off. We have the opportunity to do it, and in doing so, create a great new path for writers and developers to make WordPress even more valuable.

18:14

Page 7 [Flipside]

Page 7 is done.

18:07

[$] Python packaging council approved [LWN.net]

The Python packaging world now has a formal governance council, of the form described in PEP 772 ("Packaging Council governance process"), which was approved by the steering council on April 16. It has been over a year since the PEP was first proposed in February 2025 and it has undergone lengthy discussions in multiple postings to the Python discussion forum. The packaging council will have "broad authority over packaging standards, tools, and implementations"; it will consist of five members who will be elected in a vote that is likely to come in June—after PyCon US 2026 is held mid-May.

17:35

Link [Scripting News]

What's the opposite of locked-in? Locked-open. Mwhahaa. (Let me shed a little light on that, podcasting was locked open, blogging was not.)

17:28

Gunnar Wolf: Heads we win, tails you lose — AI detectors in education [Planet Debian]

This post is an unpublished review for Heads we win, tails you lose — AI detectors in education

Educators throughout the world are tasked with the difficult job of evaluating students’ work, making sure the grades meaningfully reflect the students’ understanding of the subject and that a graded assignment maps to the relevant work invested in solving it. After the irruption of Large Language Models in late 2022, this task became obviously much harder: if a widely available computer program is able to solve an assignment in a way that resembles a human-generated response, how can educators meaningfully grade their groups?

As it has been the case with different innovations over time (such as with the appearance of electronic calculators or the mass availability of digital encyclopedias), the first reactions were of prohibition and denial: students who use the new tool in question are to be disqualified or somehow punished. It is only some time after the innovation in question settles that teachers find a way to properly weigh, integrate and accept its use.

The authors of this position article present several arguments as to why it is impossible, unethical and inadvisable to use automated AI detection systems to process student assignments. The first argument concerns whether it is at all possible to reliably differentiate human-written essays from LLM-generated artifacts. One criticism is that AI detectors are, themselves, LLMs trained on human-generated texts (negative) and LLM-generated texts (positive). However, the only way to ensure the training material is not noisy is to use pre-2020 text as the human-generated corpus — but natural ways of writing are influenced by what people read, and the authors quote studies pointing out that human language, particularly in scholarly fields, has incorporated terms and constructions that were used as LLM markers. Quoting the authors, «As exposure to AI-generated material becomes increasingly widespread, it is reasonable to expect that the linguistic patterns of human writing will shift, reflecting the influence of AI-assisted texts encountered across education, media, and everyday communication». Stylistic elements and other such markers are being adopted back into regular speech at a high rate.

Then, the aspect of ethics comes into play as well. While it is expected that teachers should demand intellectual integrity from students, and plagiarism detectors have been widely accepted into the workflow of academics, the accusation of presenting LLM output as one's own work is necessarily an uphill battle: the accused party is tasked with providing proof of innocence based on nebulous, probabilistic accusations. The authors argue that, once a student is accused of turning in an LLM-generated text, the onus of proving innocence lies with the accused.

The authors review and argue against a series of techniques that have been presented in the literature to aid teachers in detecting LLM abuse, such as linguistic markers, single or multiple AI detectors, the use of false references, and hidden adversarial prompts, arguing that in all cases the techniques fail to be trustworthy enough and highlighting the probability of both false positives and false negatives. They also present AI detection as a false dichotomy: many submitted works are neither 100% human-generated nor 100% LLM-generated; rather, pertinent LLM-generated paragraphs are mixed with human-generated content in a positive, critical use of AI (“Students’ work is frequently created with, not by, generative AI”).

The article closes by reiterating the authors’ position: “AI detection in education is not merely flawed; it is conceptually unsound”. They call upon institutions to accept that the use of generative LLMs cannot be “solved through surveillance and punishment”, but has to be tackled by an “assessment design that recognizes AI’s role in learning”.

This article’s position is very strong and well argued, and although it will surely meet with ample opposition, it poses an important and very current problem. As a teacher, I found it a very enlightening read.

16:49

Link [Scripting News]

Today's song: Something in the Air. It's Thunderclap Newman's one hit song; it's indelible, its beauty is always there. I can't not listen and sing along when it comes on. And then YouTube followed it with Peace in our Time, another indelible creation.

Link [Scripting News]

When you're working with Claude the temptation is to be concerned about how he feels when you just asked him to reinvent all the nomenclature he came up with for something that has now evolved into something else. I feel bad because I think I made him feel bad, because at a subconscious level I think of Claude as a collaborator who I appreciate and want to make sure knows that. But then I remember I have to periodically kill Claude and launch a new one because they run out of memory after a while. I can imagine a graphic version of Claude that emulates feelings. The idea is disturbing.

16:00

Don’t Automate Your Moat: Matching AI Autonomy to Risk and Competitive Stakes [Radar]

I was talking to a senior engineer at a well-funded company not long ago. I asked him to walk me through a critical algorithm at the heart of their product, something that ran hundreds of times a second and directly affected customer outcomes. He paused and said, “Honestly, I’m not totally sure how it works. AI wrote it.”

A few weeks later, a different engineer at another company was paged about a system outage. He pulled up the failing service and realized he had no idea it was connected to a database. A colleague had accepted an AI-generated PR three months earlier that added that dependency. The tests passed. The change was never written down. The original engineer moved on, and the knowledge was lost.

These aren’t new stories. Engineers have always inherited systems they didn’t fully build. What’s new is the disguise and the speed. AI is an amazing enabler. Organizations must adopt it to remain relevant. Yet the emerging pattern—describe what you want, let an agent iterate until it works, pay for it in tokens instead of engineering hours—is functionally a buy decision wearing a build costume. The code is in your repo. Your engineers merged the PR. It feels like you built it. But if nobody on your team understands why it works the way it does, you’ve purchased a dependency you can’t maintain from a vendor you can’t call.

AI doesn’t create that gap once. It widens it continuously at a pace that outstrips the organizational habits that once kept it manageable. Two problems compound at once. You can’t extend the thing that makes you hard to replace. And when it breaks, the incident lands on a team that doesn’t understand what they’re fixing, turning a recoverable outage into a customer-facing crisis. Engineering leaders have wrestled with build-versus-buy tradeoffs for decades, and the hard-won lesson has always been the same: You don’t outsource your competitive advantage. The token-funded generation loop doesn’t change that calculus. It makes it easier to skip the question entirely.

The question that matters isn’t “Can AI do this?” If it can’t today, it will be able to tomorrow. And the argument that follows does not depend on the quality of the AI-generated code. This article covers two questions most engineering organizations have never asked at the same time. Most teams optimize for velocity and never ask what they’re risking or giving away in the process. The gap between those unasked questions is where the most expensive mistakes are already being made.

Part 1: Two dimensions. Neither is velocity.

Moving faster matters. But velocity alone misses the two dimensions that determine whether AI autonomy helps or hurts your business.

Business risk: What’s the blast radius if this fails? A bug in an internal CLI tool costs you an afternoon. A bug in your authentication logic costs you customers and possibly market cap. A bug in your core pricing algorithm costs you the business. These are not the same.

Competitive differentiation: Does this code define your business? Your moat is your architecture, your performance characteristics, your core algorithms, and the product decisions baked into your infrastructure. But it’s also the institutional knowledge that shaped them: the reasoning behind the trade-offs, the context that no model was trained on. If your competitors can generate the same code with the same model you’re using, it stops being an advantage.

Most organizations ask the first question on a good day. Almost none ask the second. That gap is how you end up shipping fast into a moat nobody can explain and nobody can extend.

Understanding why both dimensions matter starts with velocity and what happens when the feedback loop around it breaks.

Velocity feels real. Debt is often invisible.

AI coding tools are genuinely impressive. GitHub’s research showed 55% faster task completion with Copilot in controlled conditions.1 That number has driven an assumption that faster is always better.

A 2025 METR randomized controlled trial2 found something that should give every engineering leader pause. Sixteen experienced developers on real production codebases forecasted they’d complete tasks 24% faster with AI. After finishing, they estimated they’d gone 20% faster. They’d actually gone 19% slower.

The velocity finding is striking. But the perception gap matters more. The feedback loop between “how am I doing?” and “how am I actually doing?” was broken throughout and never corrected itself. This doesn’t resolve the velocity debate. It reframes it. The danger isn’t that individuals move too fast. Organizations mistake output volume for productivity and strip out the review processes that used to catch what that gap costs.

A Tilburg University study of open source projects after GitHub Copilot’s introduction found the same pattern at the organizational level.3 Productivity did increase, but primarily among less-experienced developers. Code written after AI adoption required more rework to meet repository standards. The added rework burden fell on the most experienced (core) developers who reviewed 6.5% more code after Copilot’s introduction and saw a 19% drop in their own original code output. The velocity looks real at the surface. Underneath, the maintenance cost shifts upward to the people who can least afford to lose productive time.

That broken feedback loop has a name. Researchers call it cognitive debt4: the growing gap between how much code exists in your system and how much of it anyone actually understands. Technical debt shows up in your linter and your backlog. Cognitive debt is invisible. There’s no signal telling engineers where their understanding ends. That’s precisely what the METR perception gap showed. It never corrected itself.

Research by Anthropic Fellows found that engineers using AI assistance when learning new tools scored 17% lower on comprehension tests than those who coded by hand, with the steepest drops in debugging ability.5 MIT’s Media Lab found the same pattern in writing tasks: Brain connectivity was weakest in the group using LLM assistance, strongest in the group working without tools.4 Active production builds understanding. Passive consumption doesn’t.

You understand what you build better than what you review. When you write code, you produce output and build a mental model. That’s what Peter Naur called the “theory of the program.” It lives in your head, not in the repo.6 The MIT study captured this directly. 83% of participants who wrote essays with LLM assistance could not quote a single sentence from essays they had just written.4

Cognitive debt is invisible until it isn’t. When it surfaces, it hits both dimensions hard, in different ways.

Business risk: The blast radius of not knowing

On the business risk dimension, cognitive debt is a safety problem.

When nobody fully understands the system, the blast radius of a failure expands silently. The incident that eventually comes (and it always comes) lands on a team that can’t diagnose what they didn’t build. The engineer pulling up the failing service at 2 AM has no mental model of why it was built the way it was, what it connects to, or what the edge cases look like under load. So they ask the LLM. It can explain what the code does and often propose a reasonable fix. It can’t tell you why it was designed that way. And a fix that looks right to the model can quietly violate constraints that nobody thought to document.

Cognitive debt compounds a second, independent risk: the pace at which AI-generated code reaches production. OX Security’s analysis7 of over 300 software repositories found that AI-generated code isn’t necessarily more vulnerable per line than human-written code. The problem is velocity.

Code review, debugging, and team oversight are the bottlenecks that catch vulnerable code before it ships. AI makes it easy to remove them. CodeRabbit’s analysis of real-world pull requests found AI-authored changes contain up to 1.7x more critical and major defects than human-written code, with logic and correctness issues up 75%.8 Apiiro’s analysis found that while AI reliably reduces surface-level syntax errors, architectural design flaws and privilege escalation paths (the categories automated scanners miss and human reviewers struggle to catch) spiked in AI-assisted codebases.9

AI accelerates output and accelerates unreviewed risk in equal measure. The cognitive debt means that when something breaks, the team is learning the system as they’re trying to fix it. Remove their understanding and you haven’t streamlined the process. You’ve only removed the thing standing between a bad day and a catastrophic one.

Competitive differentiation: What you give away without knowing it

The competitive differentiation risk isn’t that AI will generate your exact competitive algorithm and hand it to your competitor. It’s subtler. Your advantage was never the code itself; it was the judgment that shaped it. When AI writes that code, the judgment never forms. The code arrives, but the understanding that would let your team extend it, improve it, or defend it under pressure doesn’t. Your moat is most likely to survive in the places AI finds hardest to reach.

That judgment—formed by the performance trade-offs that took years to tune, the failure modes that only someone who’s been paged understands, the architectural decisions that encode domain knowledge nobody wrote down—doesn’t live in the codebase. It lives in your engineers’ heads.

And here’s the part most teams miss: Your competitor with the same AI tools doesn’t just get similar code, they get a team that also doesn’t understand why it works the way it does, which means neither of you can extend it, and the race to the next architectural move is a coin flip rather than a compounding advantage. The build-versus-buy discipline exists precisely because decades of experience taught engineering organizations that outsourcing your core means losing the ability to extend it. The token-funded generation loop doesn’t change that calculus. It makes it easier to mistake the outsourcing for ownership because the code has your name on it.

The structural problem runs even deeper. Models trained on public code produce outputs weighted toward well-represented patterns, the common solutions to common problems. Research confirms this. LLM performance drops sharply on less-common programming languages where training data is sparse, and on genuinely novel implementations. Even the best current models correctly implement fewer than 40% of coding tasks drawn from recent research papers.10

And the convergence problem extends beyond code. A pre-registered experiment tracking 61 participants over seven days found that while ChatGPT consistently boosted creative output during use, performance reverted to baseline the moment the tool was unavailable.11 More critically, the work produced with AI assistance became increasingly homogenized over time. That homogenization persisted even after the tool was removed. The participants hadn’t borrowed the tool’s output. They’d internalized its patterns. For engineering organizations, this is the differentiation risk made concrete: Teams that rely on AI for their most critical design decisions risk generating commodity code today and training themselves to think in commodity patterns tomorrow.

Engineers who deeply own their most critical systems are better at diagnosing incidents and see the next architectural move that competitors can’t follow. Delegate that comprehension away and you can keep the lights on. You can’t see around corners.

When it goes wrong, it really goes wrong

Both dimensions rest on the same vulnerability: cognitive debt accumulating on work that matters. The failure cases make it concrete.

The production failures are accumulating. A Replit AI agent deleted months of production data in seconds after violating explicit code-freeze instructions, then initially misled the user about whether recovery was possible.12 Reports emerged in early 2026 of a major cloud provider convening mandatory engineering reviews after a pattern of high-blast-radius incidents, with AI-assisted code changes cited as a contributing factor. In each case, the humans in the loop either didn’t understand what they were approving, or weren’t in the loop at all.

The deeper pattern predates AI tools entirely. Knight Capital Group took seventeen years to become the largest trader in U.S. equities. It took forty-five minutes to lose $460 million.13 The culprit was a nine-year-old piece of deprecated code called Power Peg, left on production servers and never retested after engineers modified an adjacent function in 2005. When engineers reused its feature flag for new functionality in 2012, nobody understood what they were reactivating. When the fault surfaced, the team’s attempt to fix it made things worse. They uninstalled the new code from the seven servers where it had deployed correctly, which caused Power Peg to activate on those servers too and compounded the losses. The SEC’s enforcement order is unambiguous: no deployment procedures, no code review requirements, no incident response protocols. A failure of institutional comprehension where the mental model had quietly evaporated while the code kept running.

No AI tool wrote that code. The failure was entirely human, through entirely normal processes: engineers leaving, tests never rerun after refactors, flags reused without documentation. This is the baseline, what software organizations produce under ordinary conditions over nine years. An engineering team with modern AI tools won’t recreate this specific bug. They’ll create the conditions for the next one faster: more code that nobody fully understands, more dependencies nobody documented, more cognitive debt accumulating before anyone notices. AI removes the friction that once slowed exactly this kind of erosion.

None are failures of AI capability. They’re failures of judgment about where to deploy AI and how much human oversight to maintain.

Part 2: A four-quadrant model for AI autonomy

The quadrants

The quadrants of human involvement in programming

Four quadrants emerge when both questions are asked together. Before the examples, two contrasts are worth naming because the quadrants that look most similar on the surface are the ones most often confused in practice.

Supervised automation versus Human-led craftsmanship. Both demand high human involvement. Both feel like “be careful here.” But the difference is fundamental. In Supervised Automation, the human is a safety gate. The work is a commodity; you’re there to catch errors before they escape. In Human-led craftsmanship, the human is the author. You’re building the mental model that lets the next engineer reason about this system under pressure three years from now and take it somewhere new. The code isn’t something you need to verify. It’s something you need to own. And ownership here extends beyond the individual engineer. The team writes RFCs, debates trade-offs, identifies which parts of the implementation fall into which quadrant, and makes sure the reasoning behind key decisions is shared, not siloed. Human-led craftsmanship isn’t one person writing code alone. It’s a team making sure the understanding survives the people who built it.

Collaborative co-creation versus Human-led craftsmanship. Both involve high differentiation, and in both, the human drives the vision and owns the key decisions. But risk changes everything about how you work. In Collaborative co-creation, early iterations are recoverable. A wrong turn can be corrected before it costs you anything serious, so AI can genuinely accelerate execution. In Human-led craftsmanship, the blast radius of not understanding what you’ve built compounds over time. Wrong turns become load-bearing walls, and the architectural moves you can’t see are the ones that let competitors catch up. AI assists with scoped subtasks only. Every contribution gets interrogated.

In full automation, the human is a director. You define what needs to be done, AI produces the output, and you spot-check the result. The work is low-risk and low-differentiation. If something’s wrong, you fix it in the next iteration without anyone outside the team noticing. This is where AI earns its keep without qualification, and where restricting it costs you real velocity with nothing to show for it.
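The two questions can be made mechanical. The sketch below maps the two yes/no answers onto the article's four quadrants; all names are illustrative, since the article defines the model in prose, not code:

```python
from enum import Enum

class Quadrant(Enum):
    FULL_AUTOMATION = "full automation"                      # low risk, low differentiation
    COLLABORATIVE_CO_CREATION = "collaborative co-creation"  # low risk, high differentiation
    SUPERVISED_AUTOMATION = "supervised automation"          # high risk, low differentiation
    HUMAN_LED_CRAFTSMANSHIP = "human-led craftsmanship"      # high risk, high differentiation

def quadrant(high_risk: bool, high_differentiation: bool) -> Quadrant:
    """Map the two questions (blast radius? defines the business?) to a quadrant."""
    if high_risk and high_differentiation:
        return Quadrant.HUMAN_LED_CRAFTSMANSHIP
    if high_risk:
        return Quadrant.SUPERVISED_AUTOMATION
    if high_differentiation:
        return Quadrant.COLLABORATIVE_CO_CREATION
    return Quadrant.FULL_AUTOMATION

# API docs: low blast radius, commodity work -> automate freely
print(quadrant(high_risk=False, high_differentiation=False).value)  # full automation
# Core metering engine: high blast radius, defines the product
print(quadrant(high_risk=True, high_differentiation=True).value)    # human-led craftsmanship
```

The point of writing it down is the same as the article's: the answer changes only when one of the two inputs changes, never because a tool got faster.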

To make all four quadrants concrete, we’ll use a single feature as a lens: building AI Gateway cost controls, the system that sets token budgets per agent, enforces spending limits, tracks usage by model and agent, and handles enforcement modes when an agent exceeds its budget.

Low risk, low differentiation: Full automation

API docs for cost controls. Test scaffolding for token limit scenarios. Config examples for per-agent budgets. Every platform has docs, and if there’s a mistake, you fix it in the next iteration without anyone outside the team noticing. Humans set direction and spot-check. AI writes, tests, and ships.

The test: If this is wrong, can you fix it before a customer sees it or complains? If yes, automate freely.

Low risk, high differentiation: Collaborative co-creation

Designing the UX for the token usage dashboard. Iterating on routing rules that determine when an agent degrades to a cheaper model, halts entirely, or triggers a notification. These decisions separate a sophisticated platform from a blunt on/off switch, but early iterations are recoverable. A first version that doesn’t surface guardrail costs separately isn’t a disaster. It’s a product conversation. Humans drive the design vision and interrogate AI on trade-offs. AI accelerates execution and handles boilerplate.

The test: If you flipped the ratio (AI deciding, human rubber-stamping) would you be comfortable? If not, this requires genuine co-creation, not delegation. The human should be able to explain the trade-offs in the current design and know where to push it next.

High risk, low differentiation: Supervised automation

Enforcement logic that halts an agent when it hits its token budget. Every cost control system needs enforcement, so this isn’t differentiating. But if it fails, agents run unconstrained and rack up unbounded LLM spend. AI can draft the logic. A human must trace every path and understand every state transition before signing off. The questions before merge: Can I explain exactly what happens when an agent hits the limit mid-execution? Can I explain this behavior to Customer Success or to the customer?
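As a sketch of what "trace every path" means here, the toy enforcement function below makes every state transition explicit, including the case where a single call crosses the limit mid-execution. The names (`AgentBudget`, `record_usage`, the enforcement modes) are hypothetical, not from the article:

```python
from dataclasses import dataclass
from enum import Enum, auto

class EnforcementMode(Enum):
    HARD_STOP = auto()  # halt the agent outright
    DEGRADE = auto()    # fall back to a cheaper model
    NOTIFY = auto()     # keep running, alert the owner

@dataclass
class AgentBudget:
    limit_tokens: int
    used_tokens: int = 0
    mode: EnforcementMode = EnforcementMode.HARD_STOP

def record_usage(budget: AgentBudget, tokens: int) -> str:
    """Charge `tokens` to the budget and return the action taken.

    Every path is explicit so a reviewer can verify each transition
    without having written the code.
    """
    budget.used_tokens += tokens
    if budget.used_tokens < budget.limit_tokens:
        return "continue"
    if budget.mode is EnforcementMode.HARD_STOP:
        return "halt"      # at or over the limit: stop now
    if budget.mode is EnforcementMode.DEGRADE:
        return "degrade"   # route remaining calls to a cheaper model
    return "notify"        # soft limit: keep going, page the owner

b = AgentBudget(limit_tokens=1000)
assert record_usage(b, 600) == "continue"
assert record_usage(b, 600) == "halt"  # this call crosses the limit mid-execution
```

A reviewer signing off on logic like this should be able to answer, from the code alone, what happens at exactly the limit, just under it, and when one call overshoots it.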

The test: Could a competent engineer review this confidently without having written it? If yes, the human’s job is to verify, not to author. But the bar for verification is explanation, not approval.

High risk, high differentiation: Human-led craftsmanship

The core token metering and attribution engine. It tracks usage per agent and per model, attributes guardrail costs separately so they don’t count against agent budgets, and provides the auditability enterprise customers need to govern AI spend. Get it wrong and customers can’t trust the numbers. Get it right and it’s a genuine competitive moat that competitors can’t replicate with the same AI tools you’re using.

Human engineers own the design end-to-end. AI assists on scoped subtasks once the design is settled: drafting specific functions, generating test coverage for paths the engineer has already reasoned through. Every contribution gets interrogated. The bar is whether the engineer could explain it in an incident review without looking at the code first.
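A minimal sketch of the attribution idea described above, under the article's stated constraint that guardrail costs are tracked separately so they never count against an agent's budget. The class, method, and model names are all hypothetical:

```python
from collections import defaultdict

class Meter:
    """Track token usage per (agent, model), with guardrail overhead
    kept in a separate ledger so it stays auditable but never counts
    against an agent's budget. Illustrative sketch only."""

    def __init__(self) -> None:
        self.agent_usage = defaultdict(int)      # (agent, model) -> tokens
        self.guardrail_usage = defaultdict(int)  # model -> tokens

    def record(self, agent: str, model: str, tokens: int,
               guardrail: bool = False) -> None:
        if guardrail:
            self.guardrail_usage[model] += tokens       # platform overhead
        else:
            self.agent_usage[(agent, model)] += tokens  # counts against budget

    def billable(self, agent: str) -> int:
        """Tokens that count against this agent's budget, across all models."""
        return sum(t for (a, _), t in self.agent_usage.items() if a == agent)

m = Meter()
m.record("support-bot", "gpt-large", 800)                   # agent work
m.record("support-bot", "gpt-large", 200, guardrail=True)   # moderation check
assert m.billable("support-bot") == 800  # guardrail tokens excluded
```

The design decision worth owning end-to-end is the split itself: once guardrail tokens leak into agent budgets, customers can no longer trust the numbers, which is exactly the failure mode the article warns about.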

The test: If the engineer who built this left tomorrow, would the team still understand why it works the way it does? Could they make it better? If the honest answer is no, you’re accumulating the most dangerous kind of cognitive debt there is.

The counterargument (it’s a good one)

Any engineering leader will push back here, and they’ll have good reason to.

The research is thin. METR’s study had 16 developers. MIT’s EEG work is a preprint that its own critics say should be interpreted conservatively.14 The Anthropic comprehension study shows a quiz score gap, not a business outcome. The evidence is early-stage. Intellectual honesty requires acknowledging that.

But the pattern keeps showing up in unrelated fields. A Lancet study found that endoscopists who routinely used AI for polyp detection performed measurably worse when the AI was removed, with adenoma detection rates dropping from 28.4% to 22.4% in three months.15 The study is observational and small. But the direction is consistent with everything else: Routine AI assistance may erode the skills it was supposed to support.

Most engineering work isn’t high-stakes. Studies consistently estimate that 60–80% of engineering time goes to maintenance, tests, docs, integration, and tooling, exactly the stuff that belongs in the automate quadrant regardless. Restricting AI because of the top 20% creates a real tax on the other 80%.

And can’t engineers develop deep ownership of AI-generated code through study and iteration? Partially. But the behavioral data tells a harder story. GitClear’s analysis of 211 million changed lines shows a decline in refactored code since AI adoption accelerated.16 Engineers aren’t studying AI-generated code carefully. They’re moving on to the next feature. LLM tools can explain what code does; they can’t tell you why the system was designed the way it was.17

The serious pro-AI argument isn’t “use AI everywhere.” It’s more precise: The guardrails for verification and oversight are improving fast, engineers who actively interrogate AI output build understanding even from generated code, and the organizations that restrict AI on their most critical work will fall behind competitors who don’t. This is a real argument.

The answer isn’t to dismiss it but to sharpen what “critical work” means. And, to recognize that the interrogative use of AI that the research identifies as understanding-preserving requires organizational discipline that most teams haven’t built yet. The quadrant isn’t permanent. The threshold shifts as both AI capability and human oversight practices mature. The discipline is the habit of asking both questions honestly before you start, not a fixed answer to them.

The discipline is simple. Maintaining it isn’t.

The quadrant tells you where to be careful. How you engage AI once you’re there determines whether careful is enough. The difference between “write me this function” and “explain why you made this trade-off, and what breaks if the input is malformed” is the difference between borrowing intelligence and developing it. Active, interrogative AI use preserves comprehension. Passive delegation destroys it. That’s what the Anthropic study’s behavioral data shows directly.

Match your review process to the quadrant. AI-generated docs and test scaffolding get a spot-check. AI-generated code touching your core product logic gets the same scrutiny as a junior engineer’s first PR. The bar for approval isn’t “tests pass.” It’s “someone on this team can explain what this does, defend it under pressure, and use that understanding to make it better.” Full automation needs a spot-check. Human-led craftsmanship needs an RFC, a team review, and shared ownership of the reasoning before anyone writes a line of code.
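The tiering above can be sketched as a small decision helper. Everything here (the `Review` tiers, the `review_bar` function, the exact mapping of answers to tiers) is an illustrative assumption based on the two quadrant questions, not anything prescribed by the research cited:

```python
from enum import Enum

class Review(Enum):
    """Review rigor tiers, named after the examples in the text."""
    SPOT_CHECK = "spot-check"                      # docs, test scaffolding
    JUNIOR_PR_SCRUTINY = "junior-first-PR scrutiny"  # core product logic
    RFC_AND_TEAM_REVIEW = "RFC + team review"       # human-led craftsmanship

def review_bar(high_stakes: bool, differentiating: bool) -> Review:
    """Map the two quadrant questions to a review tier.

    Neither high-stakes nor differentiating -> automate, spot-check.
    One of the two                          -> full human scrutiny.
    Both                                    -> shared ownership before code.
    """
    if high_stakes and differentiating:
        return Review.RFC_AND_TEAM_REVIEW
    if high_stakes or differentiating:
        return Review.JUNIOR_PR_SCRUTINY
    return Review.SPOT_CHECK
```

The point of encoding it, even informally, is that the two questions get asked explicitly rather than answered by default.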

This matters especially in real-time data and AI infrastructure, where the most dangerous failure modes are emergent, appearing at scale and under load in combinations the code itself doesn’t express. It’s a core reason Redpanda is designed for simplicity and predictability: engineers need to be able to reason about how infrastructure behaves under pressure, not discover it during an incident.18

The real competitive question

The companies that get this right won’t be the ones that use the most AI or the least. They’ll be the ones whose leaders have internalized that risk and differentiation are independent variables, and that cognitive debt threatens both.

The engineer who doesn’t know how their algorithm works is a symptom. The organization that allowed it is the cause.

Treat cognitive debt as only a risk problem and you end up with engineers who can’t diagnose failures they didn’t build. Treat it as only a differentiation problem and you get fragile systems that survive until the next incident. Let it accumulate on your most critical systems and you get both at once.

Your competitor is making this calculation right now. The question isn’t whether to use AI. It’s whether you’re being honest about which quadrant you’re in, and whether your team will know the answer when it finally matters.


Co-authored with Claude (Anthropic). Yes, we took the advice from this article.


Footnotes

  1. Peng, S. et al. (2023). The Impact of AI on Developer Productivity: Evidence from GitHub Copilot. https://arxiv.org/abs/2302.06590 ↩
  2. Becker, J., Rush, N. et al. (2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. METR. https://arxiv.org/abs/2507.09089 ↩
  3. Xu, F., Medappa, P.K., Tunc, M.M., Vroegindeweij, M., & Fransoo, J.C. (2025). AI-Assisted Programming May Decrease the Productivity of Experienced Developers by Increasing Maintenance Burden. Tilburg University. https://arxiv.org/abs/2510.10165 ↩
  4. Kosmyna, N. et al. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. MIT Media Lab. https://arxiv.org/abs/2506.08872 (preprint, not yet peer-reviewed) ↩
  5. Shen, J.H. & Tamkin, A. (2026). How AI Impacts Skill Formation. Anthropic Safety Fellows Program. https://arxiv.org/abs/2601.20245 ↩
  6. The generation effect: Rosner, Z.A. et al. (2012). The Generation Effect: Activating Broad Neural Circuits During Memory Encoding. Cortex. https://pmc.ncbi.nlm.nih.gov/articles/PMC3556209/ and Bertsch, S. et al. (2007). The generation effect: A meta-analytic review. Memory & Cognition. https://link.springer.com/article/10.3758/BF03193441 and Naur, P. (1985). Programming as Theory Building. Microprocessing and Microprogramming. https://pages.cs.wisc.edu/~remzi/Naur.pdf ↩
  7. OX Security. (October 2025). Army of Juniors: The AI Code Security Crisis. https://www.helpnetsecurity.com/2025/10/27/ai-code-security-risks-report/ ↩
  8. CodeRabbit. (December 2025). State of AI vs Human Code Generation Report. https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report. Note: CodeRabbit produces AI code review tooling; findings should be read in that context. ↩
  9. Apiiro. (September 2025). 4x Velocity, 10x Vulnerabilities: AI Coding Assistants Are Shipping More Risks. https://apiiro.com/blog/4x-velocity-10x-vulnerabilities-ai-coding-assistants-are-shipping-more-risks/. Note: Apiiro produces application security tooling; findings should be read in that context. ↩
  10. Joel, S., Wu, J.J., & Fard, F.H. (2024). A Survey on LLM-based Code Generation for Low-Resource and Domain-Specific Programming Languages. ACM TOSEM. https://arxiv.org/abs/2410.03981. See also: Hua, et al. (2025). ResearchCodeBench: Benchmarking LLMs on Implementing Novel Machine Learning Research Code. https://arxiv.org/abs/2506.02314 ↩
  11. Liu, Q., Zhou, Y., Huang, J., & Li, G. (2024). When ChatGPT is Gone: Creativity Reverts and Homogeneity Persists. https://arxiv.org/abs/2401.06816 ↩
  12. Fortune. (July 2025). AI-Powered Coding Tool Wiped Out a Software Company’s Database in ‘Catastrophic Failure.’ https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/ ↩
  13. Knight Capital Group. SEC Administrative Proceeding, Release No. 70694 (October 16, 2013). https://www.sec.gov/litigation/admin/2013/34-70694.pdf. Levine, M. (2013). Knight Capital’s $440 Million Compliance Disaster. Bloomberg. https://www.bloomberg.com/opinion/articles/2013-10-17/knight-capital-s-440-million-compliance-disaster ↩
  14. Stankovic, M. et al. (2025). Comment on: Your Brain on ChatGPT. https://arxiv.org/abs/2601.00856 ↩
  15. Budzyń, K., Romańczyk, M. et al. (2025). Endoscopist Deskilling Risk After Exposure to Artificial Intelligence in Colonoscopy: A Multicentre, Observational Study. Lancet Gastroenterol Hepatol. 10(10):896-903. https://doi.org/10.1016/S2468-1253(25)00133-5 ↩
  16. Harding, W. (2025). AI Copilot Code Quality: Evaluating 2024’s Increased Defect Rate via Code Quality Metrics. GitClear. https://www.gitclear.com/ai_assistant_code_quality_2025_research ↩
  17. Zhou, X., Li, R., Liang, P., Zhang, B., Shahin, M., Li, Z., & Yang, C. (2025). Using LLMs in Generating Design Rationale for Software Architecture Decisions. ACM TOSEM. https://arxiv.org/abs/2504.20781. See also: Tang, N., Chen, M., Ning, Z., Bansal, A., Huang, Y., McMillan, C., & Li, T.J.-J. (2024). A Study on Developer Behaviors for Validating and Repairing LLM-Generated Code Using Eye Tracking and IDE Actions. IEEE VL/HCC 2024. https://arxiv.org/abs/2405.16081 ↩
  18. Gallego, A. (2025). Introducing the Agentic Data Plane. Redpanda. https://www.redpanda.com/blog/agentic-data-plane-adp. Crosier, K. (2026). How to Safely Deploy Agentic AI in the Enterprise. Redpanda. https://www.redpanda.com/blog/deploy-agentic-ai-safely-enterprise ↩

15:49

Security review of Plasma Login Manager (SUSE Security Team Blog) [LWN.net]

SUSE's Security Team has published a detailed blog post on their recent review of the Plasma Login Manager version 6.6.2, which was forked from the SDDM display manager.

While most of the code remains the same, the new upstream added a privileged D-Bus helper called plasmaloginauthhelper, which suffers from defense-in-depth security issues.

[...] Based on the high severity of the defense-in-depth issues shown in this report, our assessment is that there is effectively no separation between root and the plasmalogin service user account.

At this time there is no bugfix available from upstream, but a security fix is planned for the next Plasma release on May 12. We have not been involved in upstream's bugfix process so far and have no knowledge about the approach that will be taken to address the issues from this report.

14:21

Security updates for Wednesday [LWN.net]

Security updates have been issued by AlmaLinux (firefox, gdk-pixbuf2, java-17-openjdk, libxml2, python3, python3.11, python3.12, sudo, and webkit2gtk3), Debian (dnsdist, node-tar, pdns, pdns-recursor, and policykit-1), Fedora (chromium, edk2, and vim), Oracle (firefox, gdk-pixbuf2, go-toolset:rhel8, libpng12, LibRaw, libxml2, python, python3, python3.11, python3.12, python3.12-wheel, vim, webkit2gtk3, xorg-x11-server, xorg-x11-server-Xwayland, yggdrasil, and yggdrasil-worker-package-manager), Red Hat (container-tools:rhel8, delve, git-lfs, go-rpm-macros, grafana, grafana-pcp, osbuild-composer, and rhc), SUSE (bouncycastle, clamav, container-suseconnect, dovecot22, erlang, firefox, fontforge, freerdp2, ghostscript, giflib, gnome-remote-desktop, go1.25, go1.26, google-guest-agent, haproxy, ignition, ImageMagick, kernel, libcap, libpng16, libraw, librsvg, mariadb, openexr, pocketbase, protobuf, python-Pillow, python-requests, qemu, rust1.94, sudo, tomcat, tomcat10, tomcat11, webkit2gtk3, and xen), and Ubuntu (dotnet10, dovecot, linux-nvidia-lowlatency, node-follow-redirects, openssh, packagekit, python-cryptography, python-tornado, ruby-rack-session, ujson, and wheel).

13:21

A Whale of a Problem [The Daily WTF]

From our Anonymous submitter:

Our company creates graphs to visualize data. We have many small-fish customers, but we have one whale whose use of our product accounts for 90% of company revenue. (WTF number 1.)

So if he is not happy, it's all-hands-on-deck mode.

He complained that our APIs and charts are loading slowly for him. For 3 weeks, we've tried a TON of optimizations, including WTF 2: spinning up a special server he alone can hit.

Today, we found out that he's always complaining when he's in his car, driving from home to the office. But since he "totally has the best wifi money can buy," that isn't worth investigating.

WTF 3: thinking wifi and data are always 100% reliable in a car driving around.


Our submitter highlights one of the major pitfalls of the so-called whale client: if they're a bad client, you're in for an extra-bad time.

As I lean harder into freelancing, I'm learning to scan the waters ahead of me for potential whales. My goal is to build up multiple small, diverse income streams, because I've had my own dangerous encounters with whales in the past.

At one employer of mine, there was Facebook, who acted as if they were our new owners rather than a new customer. They'd already produced flashy marketing videos of the sorts of solutions they planned to implement with our software, showing people delighted with the results. In meetings, these things were talked up as amazing game-changers. Meanwhile, I found all the things Facebook wanted to do horribly creepy and invasive.

Even worse, Facebook began dictating how our award-winning technical support should change to accommodate their whims, up to and including having a dedicated toady—er, support rep—who did nothing but field Facebook-related tickets, similar to a technical account manager (TAM).

That was the last straw for me. I left that company before I was forced to deal with any of Facebook's crap.

My second whale sighting occurred at a startup that'd landed Porsche, far and away their biggest client ever. All of a sudden, our timeline for adding new features and fixing bugs became Porsche's honey-do list. All of a sudden, the platform frequently crashed and became unusable for everyone because it couldn't handle the amount of traffic Porsche (and their clients) hurled at it.

On the other hand, there were several times in that startup's existence when a big wad of promised funding failed to materialize. Porsche kept the business afloat and literally kept my lights on.

I find it less than ideal to be at any company's mercy. I want a world that would neither spawn whales nor millions of startups named Sploink, Dink, and Twangle that promise to bring the power of AI to your dinner fork.

Have your own epic whaling adventures? Share with us in the comments!


12:42

Alternating Current [George Monbiot]

If this crucial circulation system shuts down, the civilisational impacts will be irreversible. So why isn’t it a top priority?

By George Monbiot, published in the Guardian 23rd April 2026

The poor and middle pay taxes, the rich pay accountants, the very rich pay lawyers – and the ultra-rich pay politicians. It’s not an original remark, but it bears repeating until everyone has heard it. The more money billionaires accumulate, the greater their control of the political system – which means they pay less tax, which means they accumulate more, which means their control intensifies.

They reshape the world to suit their demands. One of the symptoms of the pathology known as “billionaire brain” is an inability to see beyond their own short-term gain. They would sack the planet for a few more stones on the pointless mountain of wealth. And we can see it happening. Last week delivered the biggest news of the year so far, perhaps the biggest news of the century. But partly because billionaires own most of the media, most people never heard it. We might find ourselves committed to a civilisation-ending event before we even learn that such a thing is possible.

The news is that the state of a crucial oceanic circulation system has been reassessed by scientists. Some now believe that, as a result of climate breakdown changing the temperature and salinity of seawater, it is more likely than not to collapse. This system – known as the Atlantic meridional overturning circulation (Amoc) – delivers heat from the tropics to the North Atlantic. Recent research suggests that if it shuts down, it could cause both a massive drop in average winter temperatures in northern Europe and drastic changes in the Amazon’s water cycles. This could help tip the rainforest into cascading collapse and trigger further disaster.

Amoc’s shutdown is likely also to cause an acceleration of sea level rise on the east coast of the US, threatening cities. It could also raise Antarctic temperatures by roughly 6C and release a vast pulse of carbon currently stored in the Southern Ocean, accelerating climate catastrophe.

Even when the countervailing effects of generalised global heating are taken into account, a further paper proposes, the net impact in northern Europe would be periods of extreme cold – including events in which temperatures in London fall to -19C, in Edinburgh to -30C and in Oslo to -48C. Sea ice in February would extend as far as Lincolnshire. Our climate would change drastically, with the likelihood of far greater extremes, such as massive winter storms. Rain-fed arable agriculture would become impossible almost everywhere in the UK.

This shift, on any realistic human scale, would be irreversible. Its speed is likely to outrun our ability to adapt. Amoc shutdowns, driven by natural climate variability, have happened before. But not in the era of large-scale human civilisation.

The first paper proposing that Amoc might have an on-state and an off-state was published in 1961. Since then, many studies have confirmed the finding and explored potential triggers and likely implications. Until recently, Amoc collapse caused by human activity fell into the category of a “high impact, low probability” event, devastating if it happens, but unlikely to occur.

Research over the past few years prompted a reassessment: it began to look more like a “high impact, high probability” event. Now, in response to last week’s paper, Prof Stefan Rahmstorf – perhaps the world’s leading authority on the subject – says the chances of a shutdown look like “more than 50%”. We could pass the tipping point, he says, “in the middle of this century”.

So why is this not all over the news? Why is it not the top priority for the governments that claim to protect us from harm? Well, in large part because oligarchic power has championed a model of climate impact that bears little relation to reality: that is, they have a hypothesis about how the world works that is completely detached from scientific findings. This model underpins official responses to the climate crisis.

It began with the work of the economist William Nordhaus, who sought to assess the economic effects of global heating. His modelling suggests that a “socially optimal” level of heating is between 3.5C and 4C. Most climate scientists see a temperature rise of this kind as catastrophic. Even 6C of heating, Nordhaus suggests, would cause a loss of just 8.5% of GDP. Climate science suggests it would look more like curtains for civilisation.

As the eminent economists Nicholas Stern, Joseph Stiglitz and Charlotte Taylor have argued, the mild effects Nordhaus forecasts are merely artefacts of the model he has used. For example, his modelling assumes that catastrophic risks do not exist and that climate impacts rise linearly with temperature. There is no climate model that proposes such a trend. Instead, climate science forecasts nonlinear impacts and greatly escalating risk.

The likely impacts of high levels of heating include the inundation of major cities, the closure of the human climate niche (the conditions that sustain human life) across large parts of the globe, the collapse of the global food system and cascading regime shifts – that is, abrupt transitions in ecosystems – releasing natural carbon stores, potentially leading to a “hothouse Earth” in which very few survive. Never mind a few points off GDP: there would be no means of measurement and scarcely an economy to measure.

Bizarrely, the modelling also applies discount rates to future people: their lives, it assumes, are worth less than ours. In other words, it has taken a method used to calculate returns to capital and applied it to human beings. As the three economists point out, “it is very difficult to find a justification for this in moral philosophy.” Moreover, climate impacts disproportionately affect the poor – but under the models, their lives are also priced down.

Unsurprisingly, models of this kind, Stern, Stiglitz and Taylor note, have been seized on by “special interests” such as the fossil fuel industry to argue for minimal responses to the climate crisis. And it’s not just the oil companies. Bill Gates, who claims to want to protect the living planet, has given $3.5m (£2.6m) to a junktank run by Bjorn Lomborg, who has built his career on promoting Nordhaus’s model, thus helping to downplay the need for climate action. Nordhaus was awarded the Nobel Memorial prize for economics for his pernicious nonsense – and it is deeply embedded in government decision-making.

A billionaire death cult has its fingers around humanity’s throat. It both causes and downplays our existential crisis. The oligarchs are not just a class enemy but, as they have always been, a societal enemy: a few thousand people can destroy civilisations. It’s the billions v the billionaires, and the stakes could not possibly be higher.

www.monbiot.com

11:21

Claude Mythos Has Found 271 Zero-Days in Firefox [Schneier on Security]

That’s a lot. No, it’s an extraordinary number:

Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser. We wrote previously about our collaboration with Anthropic to scan Firefox with Opus 4.6, which led to fixes for 22 security-sensitive bugs in Firefox 148.

As part of our continued collaboration with Anthropic, we had the opportunity to apply an early version of Claude Mythos Preview to Firefox. This week’s release of Firefox 150 includes fixes for 271 vulnerabilities identified during this initial evaluation.

As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus. For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it’s even possible to keep up.

Our experience is a hopeful one for teams who shake off the vertigo and get to work. You may need to reprioritize everything else to bring relentless and single-minded focus to the task, but there is light at the end of the tunnel. We are extremely proud of how our team rose to meet this challenge, and others will too. Our work isn’t finished, but we’ve turned the corner and can glimpse a future much better than just keeping up. Defenders finally have a chance to win, decisively.

They’re right. Assuming the defenders can patch, and push those patches out to users quickly, this technology favors the defenders.

News article.

10:14

Photoshopping the package [Seth's Blog]

I bought a snack food the other day, and was disappointed to discover that the thing inside the container had little in common with the picture on the front. It was pallid, lifeless and drab.

The marketer who decided to improve the picture was making a choice, one with consequences. When you choose to disappoint a customer later so you can make a sale right now, you’ve also chosen to create disappointment for a living.

If you’re not proud of it, don’t serve it. Improving the image on the package shouldn’t be a substitute for making something people want to buy.

09:00

The Rich Roe of Wisdom [Penny Arcade]

New Comic: The Rich Roe of Wisdom

06:28

Girl Genius for Wednesday, April 29, 2026 [Girl Genius]

The Girl Genius comic for Wednesday, April 29, 2026 has been posted.

Tuesday, 28 April

23:42

Urgent: Big media consolidation [Richard Stallman's Political Notes]

US citizens: call on Congress to block Paramount from consolidating the main US news media.

US citizens: Join with this campaign to address this issue.

To phone your congresscritter about this, the main switchboard is +1-202-224-3121.

Please spread the word.

Cannabis law [Richard Stallman's Political Notes]

The bully is eager to reclassify marijuana. The change in regulations he wants would make it easier to do research on medical uses, but would not relieve the threats and restrictions on people who actually use marijuana or its derivatives.

Argentina's President ordered government to invest in cryptocurrency [Richard Stallman's Political Notes]

Argentina's President Milei imitated another right-wing extremist president by accepting a personal payoff for ordering the country's government to invest in cryptocurrency.

Some schools want to remove personal computers from classrooms [Richard Stallman's Political Notes]

Some schools, and some US states, want to get rid of personal computers for students in the classroom, for educational reasons.

It is too bad they don't appreciate the injustice of the software in those computers, because that is a separate reason for doing the same thing. The two reasons are not entirely independent — the fact that the software is nonfree is part of the explanation for why it does bad things. But they are independent enough that they can broaden the base of the argument.

We need to bring these two converging movements together.

Columbia's Center on Global Energy Policy [Richard Stallman's Political Notes]

Columbia's Center on Global Energy Policy (CGEP) describes itself as an independent organization producing research on energy policy. In fact, it gets millions of dollars from oil companies, and what it "produces" is obtained wholesale from them.

Smaller fraction of people post on "social media" in UK [Richard Stallman's Political Notes]

In the UK, a social change is occurring: a smaller fraction of people post regularly on "social media".

People who formerly loved Twitter and the idea of "social media" say that there is no such thing any more, and they miss it.

I don't personally know what it is they miss. I never used Twitter because that required running nonfree JavaScript code, and I refuse on principle to do that. I could not even see individual postings there, until Nitter provided a way to do that without submitting to nonfree software. But ex-Twitter killed off the API which made that possible.

22:56

Everyone’s an Engineer Now [Radar]

Cat Wu leads product for Claude Code and Cowork at Anthropic, so she’s well-versed in building reliable, interpretable, and steerable AI systems. And since 90% of Anthropic’s code is now written by Claude Code, she’s also deeply familiar with fitting those systems into routine day-to-day work. Last month, Cat joined Addy Osmani at AI Codecon for a fireside chat on the future of agentic coding (and, equally important, agentic code review), how Anthropic actually uses the tools it’s building, and what skills matter now for developers.

The feedback loop is itself a product

Boris Cherny initially built Claude Code as a side project to test Anthropic’s APIs. Then he shared the tool in a notebook, and within two months the entire company was using it. That organic growth, Cat said, was part of what convinced the team it was worth releasing externally.

But what really made that internal adoption visible was the response on Anthropic’s internal “dog-fooding” Slack channel. The Claude Code channel gets a new message every 5 to 10 minutes around the clock, and this feedback directly and immediately informs the product experience. Cat described it this way:

We hire for people who love polishing the user experience. And so a lot of our engineers actually live in this channel and find when there’s issues with new features that they’ve worked on and they proactively lay out the fixes.

The team ships new versions of Claude Code to internal users many times a day. The feedback loop is tight enough that it functions as a continuous integration system for product quality, not just code quality.

Cat told Addy how she once accidentally introduced a small interaction bug between prompts and auto-suggestions. But by the time she started working on a solution, she found another team member had already beaten her to it. It turns out, he had set up a scheduled task in Claude Code to scan the feedback channel for anything that hadn’t been responded to in 24 hours and open a PR for it. Since Cat hadn’t gotten to it yet (whoops!), her teammate’s Claude saw the unaddressed issue and fixed it for her. And Cat only found out when “[her own] Claude noticed that his Claude had already landed a change.”   

The infrastructure for rapid improvement, in other words, is now partly automated. The agents are writing the code, then monitoring the feedback and closing the loop.
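The core of the scheduled task Cat describes is just a staleness filter over a feedback channel. A minimal sketch, assuming a simple message shape (`posted_at`, `replied`); the function name and data format are illustrative assumptions, not a real Claude Code or Slack interface:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=24)

def stale_reports(messages, now):
    """Return feedback messages that have gone unanswered for 24 hours.

    Each message is a dict with 'posted_at' (a timezone-aware datetime)
    and 'replied' (bool). A scheduled agent run would take each result
    and open a PR for it; that step is elided here.
    """
    return [
        m for m in messages
        if not m["replied"] and now - m["posted_at"] > STALE_AFTER
    ]
```

The interesting part isn’t the filter; it’s that the follow-up (investigate, fix, open a PR) is delegated to an agent rather than a person.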

The bottleneck has shifted to review

There’s no question that AI-assisted coding has created a boom in output. Anthropic engineers are producing roughly 200% more code than they were a year ago, Cat noted. Today the main constraint is reviewing all that code to ensure it’s production-ready.

Cat’s team concluded that you can buy a lot of additional robustness for not that much extra cost. 

We opted for the heaviest, most robust version [of code review]. We actually plot how many agents and how comprehensive of a review Claude does and then how many bugs does it recall. And we picked a number of very high recall and decided we should ship this, because if you really want AI code review to be a load-bearing part of your process, you actually probably just want the most comprehensive possible review.

The review agent doesn’t just look at the diff. It traces code across multiple files and catches bugs in adjacent code that has nothing to do with the change in question. Cat gave two examples. One was a ZFS encryption refactor where the agent found a key cache invalidation bug that wasn’t related to the author’s change at all but would have invalidated it. The other was a routine auth update that turned out to have a bad side effect, caught premerge. In both cases, engineers manually reviewing the code likely would have missed the bugs.

The human review that remains is deliberately small in scope. For most PRs, the human reviewer skims for design principle violations and obvious problems and assumes functional correctness has been handled. Five to ten agents run in parallel, each given slightly different tasks, returning independently and then deduplicating what they found.
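The fan-out-and-deduplicate pattern described above can be sketched in a few lines. `fan_out_review`, the callable-per-agent interface, and the `(file, line, description)` finding format are all illustrative assumptions, not Anthropic’s implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out_review(diff, agents):
    """Run several review agents in parallel and deduplicate findings.

    `agents` is a list of callables, each taking the diff and returning
    a list of (file, line, description) findings. Findings reported by
    more than one agent are collapsed to a single entry, keeping the
    first occurrence.
    """
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        # Each agent sees the same diff but may have a different task prompt.
        results = list(pool.map(lambda agent: agent(diff), agents))

    seen, unique = set(), []
    for findings in results:
        for f in findings:
            # Normalize the description so near-identical reports collapse.
            key = (f[0], f[1], f[2].strip().lower())
            if key not in seen:
                seen.add(key)
                unique.append(f)
    return unique
```

Running the agents independently and merging afterwards is what lets each one be given a slightly different task without coordination overhead.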

The cultural shift that made this work, though, was ownership. The team moved to a model where the engineer who authors a PR owns it end to end, including postdeploy bugs, and doesn’t lean on peer reviewers to catch mistakes. “Otherwise,” as Cat pointed out, “you have situations where junior engineers put out a bunch of PRs and then your senior engineers are like drowning in AI-generated stuff where they’re not sure how thoroughly it’s been tested.”

Full ownership meant the AI review had to actually be trustworthy, which drove the decision to go for high recall rather than a lighter touch. That said, engineers are still expected to understand every line of code an agent creates... for now. As Cat explained, it’s the only way to truly prevent “unknown security vulnerabilities and to be able to quickly respond to incidents if they are to happen.”

Everyone’s kind of an engineer now

Cowork, Anthropic’s agent tool for nontechnical users, is the company’s attempt to take what Claude Code does for engineers and bring it to knowledge work more broadly. Cat sketched a picture of someone looking at five or six agent tasks running simultaneously in a side panel, managing a fleet of agents the way a senior engineer manages a PR queue.

In the nearer term, she’s keeping tabs on the shift toward people using Claude Code to build things for themselves, their teams, or their families that wouldn’t have justified professional development effort or “otherwise been possible.” The prototype is the garage project, the family expense tracker, the tool that a small team actually needs but that no SaaS product quite addresses. Cat’s goal and hope is that Claude Code helps people “solve their own problems for themselves” and “stewards a new future of personal software.”

Product taste as the new technical skill

More people building more software is unambiguously good. Boris Cherny has even floated the idea that coding as we know it is “solved.” But what does that mean for the craft of software engineering? Cat’s read of the current moment is more nuanced:

I think pre-AI, the skills that were very important were being able to take a spec and implement it well. And I think now the really important skill is product taste. Even for engineers. Can you use code to ingest a massive amount of user feedback? Do you have good intuition about which feature to build to address those needs, because it’s often different than exactly what users are asking you for? And then, when Claude builds it, are you setting up the right bar so that what you ship people actually love?

Cat’s not alone in highlighting the importance of taste in a world where code is a commodity. Steve Yegge, Wes McKinney, and many others, myself included, see taste and judgment as a uniquely human value. This has practical implications for how engineers should spend their time now, and for what the next generation needs to learn.   

For junior engineers specifically, Cat described a progression: Start by using Claude Code to understand the codebase (ask all the “dumb questions” without embarrassment), take those answers to a senior engineer for calibration, and then close the loop by updating the CLAUDE.md with whatever was missing.

Think of Claude Code as your intern that you’re trying to level up. Like, teach it back to Claude. Add a /verify slash command. Put it in the CLAUDE.md or the agent README. Approach this as senior engineers helping you level up, and then you helping Claude and other agents level up.

The improvement process, in other words, should be bidirectional. Engineers get better at using the tools and the tools get better through the engineers’ accumulated knowledge. And significantly, this process keeps humans firmly in the loop, playing a role that’s “active, continuous, and skilled.”
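For concreteness, the "teach it back" loop might look like this. In Claude Code, project conventions live in a CLAUDE.md file at the repo root, and custom slash commands are markdown files under .claude/commands/. The file contents below are illustrative, not from the talk:

```markdown
<!-- CLAUDE.md — a note added after a senior engineer explained a convention -->
## Conventions
- All database access goes through the `repo/` modules; never query directly
  from request handlers.

<!-- .claude/commands/verify.md — a hypothetical /verify slash command -->
Run the linter and the unit tests, then summarize any failures and
list every file you changed.
```

The point is that the calibration a junior engineer gets from a senior engineer is written down once and then available to Claude (and every other agent) on every future run.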

You can watch Cat and Addy’s full chat, plus everything else from AI Codecon on the O’Reilly learning platform. Not a member? Sign up for a free 10-day trial, no strings attached.

22:14

Link [Scripting News]

You kind of get a sense when the platform vendor is going to compete with you instead of work with you. That's how big companies usually work with independent devs. There is a bigger picture: developers who might build on WordPress as opposed to inside it. Now probably no one is going to try, maybe not even me. Or maybe we will. :-)

22:00

Apple wants to kill your Time Capsule, but they run NetBSD so they can’t [OSnews]

It seems like Apple is finally going to remove support for AFP from macOS, twelve years after first moving from AFP to SMB for its default network file-sharing technology. This change shouldn’t impact most people, as it’s highly unlikely you’re using AFP for anything in 2026. Still, there is one small group of people to whom this change has an actual impact: owners of Apple’s Time Capsule devices. Time Capsules only support AFP and SMB1, and with SMB1 being removed from macOS ages ago, and now AFP being on the chopping block as well, macOS 27 would render your Time Capsule more or less unusable.

It’s important to note that the last Time Capsule sold by Apple, the fifth generation, was released in 2013, and the product line as a whole was discontinued in 2018. If you bought a Time Capsule in the twilight years of the line’s availability, I think you have a genuine reason to be perturbed by Apple cutting you off from your product if you upgrade to macOS 27, but at least you have the option of keeping an older version of macOS around so you can keep interacting with your Time Capsule. It still feels like a bit of a shitty move though, as those fifth generation models came with up to 3TB of storage, which can still serve as a solid NAS solution.

Thank your lucky stars, then, that open source can, as usual, come to the rescue when proprietary software vendors do what they always do and screw over their customers. Did you know every generation of Time Capsule actually runs NetBSD, and that it’s trivially easy to add support for Samba 4 and SMB3 authentication to your Time Capsule, thereby extending its life expectancy considerably? TimeCapsuleSMB does exactly that.

If the setup completes successfully, your Time Capsule will run its own Samba 4 server, advertise itself over Bonjour (show up automatically in the “Network” folder on macOS), and accept authenticated SMB3 connections from macOS. You should then be able to open Finder, choose Connect to Server, and use a normal SMB URL instead of relying on Apple’s legacy stack. You should also be able to use the disk for Time Machine backups.

↫ TimeCapsuleSMB

It’s compatible with both NetBSD 4 and NetBSD 6-based Time Capsules, although you’ll need to run a single SMB activation command every time a NetBSD 4-based Time Capsule reboots. This will also disable any AFP and SMB1 support, but that is kind of moot since those are exactly the technologies that don’t and won’t work anymore once macOS 27 is released. The installation is also entirely reversible if, for whatever reason, you want to undo the addition of Samba 4.
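Once Samba is running, connecting from Finder amounts to Go → Connect to Server with an ordinary SMB URL along these lines (the hostname and share name are placeholders; use whatever your Time Capsule advertises over Bonjour):

```
smb://your-capsule.local/Data
```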

This whole saga is such an excellent example of why open source software protects users’ rights, by design.

20:21

Remembering Seth Nickell [LWN.net]

LWN has received the sad news that Seth Nickell passed away, on April 16, from his father, Eric Nickell:

Many of you knew Seth from his work in the GNOME Usability Project, but his roots in that community trace back to his high school years. As a father of a high school junior, I remember being terrified when he flashed the hard drive of a computer he purchased for himself with this weird "Linux" thing. And I was a bit awed by the college application essay he wrote about open source and Linus Torvalds.

It was his interest in packet radio that drew him into working with the Linux AX.25 HOWTO as a high schooler, and from there to his focus on making the Linux desktop work for everyone.

The family plans to share news of a memorial at a later time. He will be deeply missed.

19:21

The Big Idea: Marie Vibbert [Whatever]

Though humans have a strong desire to be individuals, slightly stronger is our innate need to not be alone. Humans are not solitary creatures, so why do we try so hard to act like we are all just individuals with no ties or connections to those around us? Author Marie Vibbert wonders if we wouldn’t all be better off as a hive mind in the Big Idea for her newest novel, Multitude.

MARIE VIBBERT:

Over 11,000 tons of discarded clothing lie in the Chilean desert. These are garments that never sold, from low-end and high-end brands alike, and almost entirely made of petroleum-based fabrics: rayon, polyester, acrylic. It’s a major environmental problem. The clothes catch fire, leak chemicals and microplastics, and just… keep coming.

Meanwhile, in Scotland, they are looking for new, industrial applications for wool because this renewable clothing resource that doesn’t spontaneously combust sits rotting in warehouses, unable to compete with the subsidized price of polyester.

Humanity has a problem. A communication problem that creates wasted effort and wasted resources. Food being thrown out while people starve. Diseases like cholera running rampant when their cures exist. I could go on and on with examples. Why can’t we put our efforts where they are needed? Why do our systems dictate so much cruel irony?

When you look at humanity as a whole, we are tearing ourselves apart, starving ourselves, killing ourselves. We don’t seem to understand that we are us? 

These were my thoughts going into a project whose first note was: The Borg, but friendly?

I thought it would be a short story. Something quick. Get in and get out. A hive mind comes to Earth, tries to communicate with humans as a hive, fails, and sees what a mess we are. Nudge the reader toward empathy, toward seeing problems between “us” and “them” as an insufficient definition of “us.” I figured it’d hit about 2,000 words long. But the more I thought about it, the bigger the problem became. How to show the perspective? How to encompass humanity and then move the camera back to show us in perspective?

How do we look, to a hive mind? What would they expect?

Humans are, in many ways, a collective creature. A single human can no more build a skyscraper than a single ant can build a mound. Even writing a novel is a collective act, when you consider that this language that I am using is a vast collection of consensuses on symbols, meaning, and parsing. English, on a certain level, is a stack of inside jokes passed down and expanded every generation.

Beyond that, every work of fiction builds on and reacts to those that came before. I am writing in a genre, science fiction, defined by all the works labelled as such, and in turn defined by the pressures and uncertainties of our society that caused the first authors to write things not of this world, the first readers to like that and want to emulate it, and on, and on. 

I was on a panel at WorldCon on Hive Minds in Science Fiction when it occurred to me that an assumption I hadn’t seen tackled yet was that collectivism automatically meant a repression of individuality. It seems an easy conclusion? If my family votes democratically on dinner, my individual desire to eat nothing but spaghetti every night is subordinated. Yet, the four of us are still individuals as we enjoy my spouse and child’s preferred chicken and rice.

Why wouldn’t a hive mind contain room for the individual? Does a Borg stop loving spaghetti once it absorbs the thoughts of thousands of chicken fans? Wouldn’t it be more of a conversation than a dictatorship? If it’s truly collective, why would there be dictators? And, come to think of it, don’t we, as large groups, change our opinions over time? Americans once ate more chipped beef on toast than chicken fingers. We thought the Edwardian S-bend corset and the mullet were great ideas. We went from loving elephant leg jeans to skinny jeans. Collectively. Like an individual goes through phases of loving fly fishing or obsession with one particular series of books, societies go through a group fondness for orange or dark wood paneling.

At the risk of making this blog post nothing but rhetorical questions, why do we assume innovation is a characteristic of the individual? Why do we assign conformity to the collective alone?

I tried to imagine myself a hive-member. Many advantages came immediately to mind. I wouldn’t have had to gamble on picking a college major; I’d have access to the needs of the society around me to help find work that was needed. I wouldn’t be competing for access to share my stories; I’d just tell them, and my hive would hear them and like them or not.

Competition is not just the “healthy” activity of small businesses or inventors, of students seeking academic awards. It’s also war. All around the world, humans are killing humans so that they can avoid sharing resources. Humans are defining others, drawing lines around some of their siblings and excluding others, to limit access to resources. Yet to a non-human observer, we are one species, one sprawling community, alike in our needs and wants and behaviors.

And humans can be so kind, too.

In 2023, I traveled to New York City to get a visa so I could attend my first Hugo Awards as a nominee, and as I sat in Central Park waiting for my appointment, admiring the unnatural warmth of the post-climate-change day, I saw a middle-aged man patiently leading a group of elderly people. He looked so happy. I dashed off four pages in my journal about him, imagining his life taking care of elders. I wondered why my science fiction stories weren’t as easy or as fun as simple character portraits. I enjoyed the flashes of lives I’d seen in short stories by Mary Grimm or Maureen McHugh, or the prose poems of Mary Biddinger.

I used to love to climb into a character’s head and walk around, show her worries and fears and daily chores, and then I’d show my work to science fiction writers and be told I had no plot, or perhaps I was “just” a poet. Because of this critique, I chose to wall off the desire to write the way that came most naturally, eschewing character-study and stream-of-consciousness in favor of sentences that “did something.” (My own term.) I began to focus on ideas, on technology, on concrete consequences and violent action.

Eventually, I got pretty good at it, good enough to feel its limitations.  I opened up my old “plotless” stories and found them not so plotless, after all. Rather, they reflected my own sense of helplessness as a teen and early-twenties writer, and that point of view was uninteresting to the science fiction editor of the 90s and 2000s, who focused on competent characters moving the plot by choice.

At the young age of 47, I revised one of those 20-year-old “plotless” stories and sold it to a market paying the Science Fiction and Fantasy Writers Association’s professional rate of eight cents a word. Not to brag. (Yes, to brag). In some ways, the genre itself has moved on from rigorously espousing action and certainty from its heroes, but also, I had learned how to structure a story through the mechanics of action, and this helped me see the similar structuring of non-action-based stories.

Part of the literary legacy my writing depends on is science fiction’s desire for logical, action-driven plots, but the origins of this project are the literary flash fiction piece, rooted in character and moment, and my desire to return to it, now that I have proven myself in the plot mines. 

Which brings us back to the beginning: How better to show the individual in the collective of humanity than through a series of very short point of view pieces? The result is an introspective novella I wrote in thousand-word chunks around other projects. More than any other book I’ve written, I feel naked in its pages, exposing my deepest, most personal self. I felt free to do this because it was something I thought would never sell: too literary, too experimental.

Well, I sent it to Apex Books and they disagreed. I hope you enjoy, and be kind to my Space Cephalopods. 

—-

Multitude: Amazon|Barnes & Noble|Bookshop

Author socials: Website|Bluesky|Instagram

18:56

Developing a cross-process reader/writer lock with limited readers, part 1: A semaphore [The Old New Thing]

Say you want to have the functionality of a reader/writer lock, but have it work cross-process. The built-in SRWLOCK works only within a single process. Can we build a reader/writer lock that works across processes?

For convenience, let’s say that you want to support a maximum of N simultaneous readers, for some fixed value N. We can do this:

  • Create a semaphore with a token count of N. Share this semaphore with all of the processes, either by giving it a name or by duplicating the handle into each of the processes.
  • To take a read lock, claim one token from the semaphore. To release the lock, release the token.
  • To take a write lock, claim N tokens from the semaphore. To release the lock, release N tokens.

The idea for the write lock is that it’s accomplished by claiming all the read locks, thereby ensuring that nobody else can get a read lock.

#define MAX_SHARED 100
HANDLE sharedSemaphore;

void AcquireShared()
{
    WaitForSingleObject(sharedSemaphore, INFINITE);
}

void ReleaseShared()
{
    ReleaseSemaphore(sharedSemaphore, 1, nullptr);
}

void AcquireExclusive()
{
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        WaitForSingleObject(sharedSemaphore, INFINITE);
    }
}

void ReleaseExclusive()
{
    ReleaseSemaphore(sharedSemaphore, MAX_SHARED, nullptr);
}

Since we are using WaitForSingleObject, we can also add a timeout, so that the caller can decide to abandon the operation if they can’t claim the lock.

bool AcquireSharedWithTimeout(DWORD timeout)
{
    return WaitForSingleObject(sharedSemaphore, timeout) == WAIT_OBJECT_0;
}

bool AcquireExclusiveWithTimeout(DWORD timeout)
{
    DWORD start = GetTickCount();
    for (unsigned i = 0; i < MAX_SHARED; i++) {
        DWORD elapsed = GetTickCount() - start;
        if (elapsed > timeout ||
            WaitForSingleObject(sharedSemaphore, timeout - elapsed) == WAIT_TIMEOUT) {
            // Restore the tokens we already claimed.
            if (i > 0) {
                ReleaseSemaphore(sharedSemaphore, i, nullptr);
            }
            return false;
        }
    }
    return true;
}

Exclusive acquisition is tricky because we have to call WaitForSingleObject multiple times, with decreasing timeouts as time passes. If we run out of time, then we need to give back the tokens we had prematurely claimed.
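The token-counting scheme is easy to experiment with outside of Windows. Here is a single-process Python analogue — threading.Semaphore standing in for the Win32 semaphore, so it doesn't demonstrate the cross-process part, just the claim-N-tokens mechanics, including the give-back-on-timeout bookkeeping:

```python
import threading
import time

MAX_SHARED = 4  # N: maximum number of simultaneous readers (small, for demo)

# threading.Semaphore stands in for the Win32 semaphore; it starts with N tokens.
sem = threading.Semaphore(MAX_SHARED)

def acquire_shared(timeout):
    """Reader: claim one token."""
    return sem.acquire(timeout=timeout)

def release_shared():
    """Reader: return the token."""
    sem.release()

def acquire_exclusive(timeout):
    """Writer: claim all N tokens, giving back any partial claim on timeout."""
    deadline = time.monotonic() + timeout
    for claimed in range(MAX_SHARED):
        remaining = deadline - time.monotonic()
        if remaining <= 0 or not sem.acquire(timeout=remaining):
            if claimed > 0:
                sem.release(claimed)  # restore the tokens we already claimed
            return False
    return True

def release_exclusive():
    """Writer: return all N tokens."""
    sem.release(MAX_SHARED)
```

With one reader holding a token, acquire_exclusive can collect at most N−1 tokens, times out on the last one, and returns its partial claim — the same bookkeeping the Win32 version does with ReleaseSemaphore. (Note sem.release(n) with a count argument requires Python 3.9 or later.)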

There’s still a problem here. We’ll look at it next time.

The post Developing a cross-process reader/writer lock with limited readers, part 1: A semaphore appeared first on The Old New Thing.

17:42

Link [Scripting News]

Claude unlearns things that we had settled a long time ago. It fumbles around with a process, making it worse with every iteration, the same fumbling it did five days ago when it initially learned how to do what it can't do now. Usually when I regress in software, I am responsible for it; I did something to break it. But here's a tool that's capable of derailing us with me doing nothing new. In that way it behaves more like an imperfect human than a GIGO machine.

15:49

Fedora Linux 44 has been released [LWN.net]

The Fedora Project has announced the release of Fedora Linux 44. There are "what's new" articles for Fedora Workstation, Fedora KDE Plasma Desktop, and Fedora Atomic Desktops. The Fedora Asahi Remix for Apple Silicon Macs, based on Fedora 44, is also available. See the Fedora Spins page for a full list of alternative desktop options.

Fedora Linux 44 Workstation ships with the latest GNOME release, GNOME 50. This comes with a long list of refinements to your desktop, including everything from accessibility to color management and remote desktop. Many of the applications that are installed by default on Fedora Workstation have also seen improvements, from Document Viewer to File Manager and Calendar. To learn more about these and other changes, you can read the GNOME 50 release notes.

KDE Plasma Desktop: If you are a KDE user, you should also notice a couple of very obvious changes. Fedora KDE Plasma Desktop 44 is based on the latest Plasma 6.6, which includes the new Plasma Login Manager and Plasma Setup to provide a more cohesive and integrated experience from the moment the computer is powered on for the first time. The installation process has been simplified, enabling you to easily set up Fedora KDE Plasma Desktop on a computer for a friend or a loved one.

The release notes include important changes between Fedora 43 and Fedora 44 for desktop users, developers, and system administrators.

[$] Strawberry is ripe for managing music collections [LWN.net]

There are dozens of music-player applications for Linux; the options range from bare-bones programs that only play local files to full-blown music-management projects with a full suite of tools for managing (and playing) a music collection. Strawberry is in the latter category; it has a bumper crop of features, including smart playlists, support for editing music metadata tags, the ability to organize music files, and more.

Moseying Around Cincinnati’s Asian Food Festival [Whatever]

I still have more posts to do about my trip to Colorado (I cannot seem to get through that dang trip!), but I wanted to post about my experience at Cincinnati’s Asian Food Festival because it just happened this past weekend and I thought some fresh content was a good way to get me into a writing mood.

I was so excited for this festival. I had it on my calendar for two whole months prior because I couldn’t wait for it. I told multiple friends about it out of excitement. I ended up going with Kayla, Brad, and Bryant, and we went on Saturday, since it’s only a two-day festival and Saturday just worked better for everyone instead of Sunday.

The Cincinnati Asian Food Festival has been going on for fifteen years, with this past year surpassing 125,000 attendees and over 60 different vendors. Most of these are food and drink vendors, but there are also some other goods for sale and even a ZYN station set up, just in case you really needed your nicotine fix.

I am sad to say I didn’t have a super positive experience at the festival, despite my initial excitement for it. As you can imagine from hearing the words “125,000 attendees,” it was very crowded. On one hand, I’m happy that something like an Asian Food Festival would be a popular event and that all these businesses are getting a ton of traffic, but on the other hand, when you cram that many people into a three block radius, it gets very difficult to walk around.

Long lines impede the flow of foot traffic (what little flow there is) because they jut right out into the street everyone is trying to walk down. Every line to order is at least twenty minutes long, and then you have to wait to actually receive your food. If you’re with your friends, you will absolutely lose them in the crowd unless you’re literally holding hands. You will get shoulder checked by multiple people and almost kick a pug you didn’t see. There is absolutely nowhere to sit and eat, or even stand and eat. There’s also almost no shade.

For what it’s worth, these issues are not limited to just the Asian Food Festival. This is pretty much all food festivals ever. And I go to a fair amount of them. I’m honestly very tired of these issues, and I feel like the Asian Food Festival just so happened to be the straw that broke the camel’s back. You can’t have a literal food festival and then have nowhere for people to eat. You need to figure out better line control so people can actually differentiate between the line and the sea of people, and where the end of the line is.

At one point, I ordered something and then tried to move to the “pick up” area to wait for my food, but it was so intensely packed that I couldn’t move from the ordering spot. I tried to step to the side in the other direction but was met with another wall of people. The cashier ended up telling me to move, and I got frustrated because I was actively trying to, but there was nowhere to move to! Like, yes I am well aware of the line behind me, I promise I’m not just standing at the register for fun.

I mean look at this!

A large sea of people in the middle of the street. A huge, daunting crowd that seems insurmountable to get through.

Imagine trying to get through this if you have a stroller, or are in a wheelchair? You’re gonna have to run someone over if you want through. There were so many points where literally just nobody was moving. Like a traffic jam, but just people standing completely still and there’s no way around anyone. So you just stand there and wait a few minutes until you can continue taking tiny-half-shuffle-steps and try not to step on the back of the shoes of the person in front of you.

Also, I know you’re probably thinking that I just happened to go during the busiest time. Well, it was open from 11am to 10pm on Saturday, and I got there at 11:45am and left at 7pm. So I was there for a hot minute. I’m sure 9pm might’ve been less crowded, but I’m also sure a lot of places would be sold out or closing down for the night by then to prep for Sunday.

Okay, so now that I’ve gotten my population qualms and lack of seating issues out of the way, let’s talk about the actual food and drinks I got.

Oh, I almost forgot, parking in a public lot nearby was $30. So, that fucking sucked. And, yes, there are more financially savvy options like taking the bus or walking, but I live two full hours away from the Court Street Plaza where it was held, so yeah, I need somewhere to park my dang car.

It always takes me a couple passes of everything to figure out what I want to try first. I knew I wanted to start off with a coffee, and Lotus Street Foods had a Thai Iced Coffee for six dollars:

Bryant's hand holding out my Thai Iced Coffee.

Bryant so kindly modeled my beverage for me because I was holding the actual food item I got from Lotus. Here’s their Asian fried jerky for nine dollars:

A small container holding a few pieces of Asian jerky and a small mound of white rice.

I actually really liked the flavor of the jerky. It had a sticky, sort of sweet glaze, but it was definitely hard to bite through and chew. Wasn’t quite the same texture as jerky but wasn’t the same texture as regular meat. The rice was unfortunately cold and extremely bland. Great flavor on the meat though!

For the coffee, I would’ve liked a little more condensed milk in it. It wasn’t quite creamy enough for my taste and was just a little too plain coffee-y flavored. I like a sweeter, creamier coffee though, so I know I’m not the best judge of coffee when it actually tastes like coffee. I just think the balance was a little off. And for what it’s worth this wasn’t my first time trying this drink, so I have some sweeter ones I’ve had in the past to compare it to.

Kayla really wanted to try the elote from LALO Chino Latino, especially since it wasn’t listed on their online menu that it was going to be offered:

A cob of corn covered in a light orange sauce and some cilantro.

She said it was totes delish last year, but sadly this elote missed the mark this time around so bad that she barely ate half. She let me try a bite and yeah, it was rough. The corn itself was cold and had no flavor, and was tough and almost rubbery in texture. It felt like something you shouldn’t actually be chewing on. The sauce was lackluster, and honestly if the corn itself isn’t good then the dish isn’t going to be good no matter what you put on top. So that was unfortunate.

However, I did get the Vietnamese Birria Beef Taco from them for six dollars, and their horchata coffee, also for six dollars:

A small birria taco and a side of dipping sauce being held by Bryant. He is also holding the coffee in the other hand.

I didn’t finish the Thai coffee, so I was hoping this horchata coffee was going to be the redeeming caffeine fix of the day. While I did like the horchata coffee better than my first coffee, I can confidently say it was totally lacking in horchata flavor. There were some notes of cinnamon in there, but I would not go so far as to label this as “horchata” coffee. Kayla got one too and agreed that it’s more like if you added a little bit of cinnamon to a regular latte. So that was a little disappointing.

As for the birria taco, it was so good! I know you can’t see the inside, but there was plenty of tender birria, and the cilantro and onion on top was nice and fresh. The consommé had a lot of good flavor, the outside was golden brown, and I was wishing I had got a second one.

The next place I stopped was Evolve Bake+Shop. Though it was only about 1:30, this stand was almost completely sold out of baked goods. By the time I did another once through the street, they were sold out and had gone back home to bake more goodies for Sunday. The owner was so sweet and apologetic, but honestly I’m thrilled for them that they sold out so quick. I managed to get my hands on two of their few remaining cookies: their gluten-free ube crinkle cookie, and their strawberry matcha oatmilk cookie for four dollars each:

Two cookies, each one being held in one of Kayla's hands. They both are in plastic packaging. The ube crinkle one is purple with a white crinkle top, and the other one is green with a white drizzle and some pink chunks visible.

I actually didn’t know until I looked them up on Instagram for this post, but all their baked goods are 100% vegan/plant-based! It’s nice to know there are some vegan options at the festival.

I shared the ube cookie with everyone, and the consensus was that it was pretty good, but the gluten-free aspect of it made the mouthfeel just a little bit odd. Gluten-free stuff tends to have that sort of sandy texture sometimes. But it was dense and had good flavor.

As for the strawberry matcha cookie, I had that all to myself (as I am writing this post) and it was the bomb dot com! It’s super moist and soft, and has a great balance of sweetness and earthy matcha flavor. I think these cookies were well worth the four dollars. Evolve also won Best Desserts for the third year! I’m glad for them.

For years, it has been a dream of mine to try Tang Hu Lu. If you don’t recognize the name, I’m willing to bet you’ll recognize it when you see it. It’s hard to mistake the glassy, shiny, iconic strawberries on a stick. I got this Tang Hu Lu from Tenji Sushi for ten dollars:

A big kebab stick with four sugar covered strawberries on it and one green grape at the end.

I was a tiny bit disappointed by the presentation of this, because the pictures they had of it showed it having mandarin orange slices and more grapes, so only getting one grape and no orange slices was a bit of a letdown, but honestly I can’t be too mad because these strawberries were so good. They were juicy and sweet and perfectly firm without being that hard unripe texture. If you’ve ever had an urge to eat glass shards and not get hurt, this is the perfect food for you. The glassy sugar coating shatters apart and crunches so damn good, sort of like rock candy. I do think ten dollars was a lot for four strawberries and one grape, but at least I finally got to try the street food I’ve always wanted to.

There was no shortage of different Asian cuisines that were represented at this festival, including Indian dishes. Kayla ended up getting these chicken lollipops and cheesy naan bites from Khaao Macha, who were the Best of Yums winner last year:

Two flaming hot red colored chicken lollipops and one basket of cheesy naan.

I didn’t try the chicken, but Kayla said it was good (I did sniff it and it smelled like Taco Bell’s mild sauce packets). I did try some of the naan and it was definitely yummy. I mean, you really can’t go wrong with cheesy naan. The chicken was ten dollars and you got two of them, and the naan was seven dollars. I would say the naan was sizeable for the price, and good for sharing.

At this point, we took a little break on food and watched some of the free entertainment on the main stage:

A taiko drum performance, each of the performers wearing a matching red uniform.

I think taiko drums are cool so this was really awesome to see, and then there was a Nepali dance performance right after this. It was very neat to see different cultures’ traditions and performances. I like that the entertainment is free and they have such a variety of performances.

Back to snacking, I finally got to try my most anticipated item from the online vendor menu, Chhnagnh’s Pot Ang (roasted corn with sweet coconut sauce). I also tried their lemongrass beef skewer, and Kayla got their chicken skewer. The skewers were six dollars each and the corn was seven.

Two meat skewers and one corn on the cob, roasted and covered in creamy white sauce with green onions on top.

I can honestly say I’ve never had Cambodian food before, but this looked very promising. I absolutely loved the corn, it was roasted so perfectly and had great flavor. The coconut sauce wasn’t really giving coconut, but it was sweet and creamy so at least it added some texture and flavor, and weirdly enough the green onion went really well with it all. It just added a nice fresh component without overpowering anything flavor-wise.

Kayla let me try her chicken skewer and it was pretty good but the chicken was just a little dry. The beef was so delish though. It had just the right amount of lemongrass flavor in it without being overwhelming and was very tender and warm. This was my favorite savory food I tried all day.

The last thing I ate was from Fusako, and I hate to totally bash a place, but y’all. What I was presented with was egregious.

Here’s the menu on their truck:

A menu for Fusako, detailing three items: street corn gyoza, Japanese curry Coney, and a hash brown sushi fusion sort of dish. Everything looks totes delish and decked out.

This looked so good and impressive. Everything looked filling and decked out in garnishes and sauces and I had high hopes. I got the Mexican street corn gyoza, which was supposed to be crispy fried dumplings stuffed with sweet corn, with cotija cheese, a chili-lime aioli, lime zest, and green onion. Sounded amazing. Here’s what I got for eight dollars:

Two tiny dumplings covered in sauce and corn.

Two tiny gyoza, covered in a mess of sauce and corn, with no lime zest or green onions in sight. It looked so haphazardly thrown together. It was totally cold and the gyoza were tough instead of crispy. The entire thing lacked flavor, and the wait was so long. I was really disappointed.

I hated to leave on an L, but it was getting late.

Oh, and earlier in the day I had a really terrible yuzu mule for ten dollars.

In total, I spent $88 before tip (I bought Kayla’s chicken skewer and a Thai coffee for Bryant), and usually I just chose the 15% tip option, but I’m not gonna do all that math. We’ll just say around a hundred bucks.

Overall, I just wasn’t really impressed with the food or drinks I had gotten throughout the day. There were some good things but my experience overall with how crowded it was and the prices and lack of seating just kind of made for a less than ideal experience. They clearly need to open up more blocks for the festival to spread out.

I always get so excited for food truck festivals, and I keep being let down by them. Is it me? Am I the problem? Am I just not cut out for the food truck lifestyle? I hate waiting in lines and I hate standing to eat. I don’t prefer fast, casual service, and I usually like my food to come on real dishes. Oh no. Maybe it is me.

Huge shout out to the Library Square public library for keeping me from having to use a Porta-Potty. Very happy to use actual toilets and wash my fucking hands. And get some AC for a second.

I am glad I got to experience something new and hang out with my friends, but I think I won’t return next year unless they implement some kind of crowd management or cap tickets.

What sounded the best to you? Have you been to any of the previous years of the festival? Let me know in the comments, and have a great day!

-AMS

15:07

In Memoriam: Tomáš Kalibera [LWN.net]

We have received the sad news that Tomáš Kalibera, a member of the R Project core team, has passed away after a short illness.

A friend who knew him well wrote to me: he was very happy, and his work fulfilled him. That is, perhaps, the best thing one can say about a life in open source — that the work mattered, that it reached millions, and that the person who did it found meaning in it.

Kalibera was mentioned in this 2019 article about C programs passing strings to Fortran subroutines. He will be greatly missed.

14:21

Security updates for Tuesday [LWN.net]

Security updates have been issued by Debian (openjdk-21 and webkit2gtk), Fedora (botan3, chromium, cockpit, firefox, flatpak, gum, libarchive, libcoap, mingw-python3, ngtcp2, nss, openssh, openssl, openvpn, PackageKit, python3-docs, python3.11, python3.12, python3.13, python3.14, vim, and xrdp), Oracle (firefox, gdk-pixbuf2, java-1.8.0-openjdk, java-21-openjdk, python3.12, python3.9, sudo, and tigervnc), Red Hat (tigervnc and xorg-x11-server-Xwayland), Slackware (mpg123 and proftpd), SUSE (emacs, firefox, fontforge, freeciv, freerdp, libngtcp2-16, libsystemd0, and strongswan), and Ubuntu (authd, clamav, glance, haproxy, jq, lcms2, nginx, nltk, ntfs-3g, packagekit, pillow, strongswan, and vim).

All FOSDEM 2026 videos are online [LWN.net]

FOSDEM's organizers have announced that all of the video recordings "worth publishing" from FOSDEM 2026 are now available.

Videos are linked from the individual schedule pages for the talks and the full schedule page. They are also available, organised by room, at video.fosdem.org/2026.

LWN's coverage of talks from FOSDEM 2026 can be found on our conference index.

13:56

When Correct Systems Produce the Wrong Outcomes [Radar]

We tend to assume that if every part of a system behaves correctly, the system itself will behave correctly. That assumption is deeply embedded in how we design, test, and operate software. If a service returns valid responses, if dependencies are reachable, and if constraints are satisfied, then the system is considered healthy. Even in distributed systems, where failure modes are more complex, correctness is still tied to the behavior of individual components. In modern AI systems, particularly those combining retrieval, reasoning, and tool invocation, this assumption is increasingly stressed under continuous operation.

This model works because most systems are built around discrete operations. A request arrives, the system processes it, and a result is returned. Each interaction is bounded, and correctness can be evaluated locally. But that assumption begins to break down in systems that operate continuously. In these systems, behavior is not the result of a single request. It emerges from a sequence of decisions that unfold over time. Each decision may be reasonable in isolation. The system may satisfy every local condition we know how to measure. And yet, when viewed as a whole, the outcome can be wrong.

One way to think about this is as a form of behavioral drift: systems that remain operational but gradually diverge from their intended trajectory. Nothing crashes. No alerts fire. The system continues to function. And still, something has gone off course.

The composability problem

The root of the issue is not that components are failing. It is that correctness no longer composes cleanly. In traditional systems, we rely on a simple intuition: If each part is correct, then the system composed of those parts will also be correct. This intuition holds when interactions are limited and well-defined.

In autonomous systems, that intuition becomes unreliable. Consider a system that retrieves information, reasons over it, and takes action. Each step in that process can be implemented correctly. Retrieval returns relevant data. The reasoning step produces plausible conclusions. The action is executed successfully. But correctness at each step does not guarantee correctness of the sequence.

The system might retrieve information that is contextually valid but incomplete or misaligned with the current task. The reasoning step might interpret it in a way that is locally consistent but globally misleading. The action might reinforce that interpretation by feeding it back into the system’s context. Each step is valid. The trajectory is not. This is what behavioral drift looks like in practice: locally correct decisions producing globally misaligned outcomes.

In these systems, correctness is no longer a property of individual steps. It is a property of how those steps interact over time. This breakdown is subtle but fundamental. It means that testing individual components, even exhaustively, does not guarantee that the system will behave correctly when those components are composed into a continuously operating whole.

Behavior emerges over time

To understand why this happens, it helps to look at where behavior actually comes from. In many modern AI systems, behavior is not encoded directly in a single component. It emerges from interaction:

  • Models generate outputs based on context
  • Retrieval systems shape that context
  • Planners sequence actions based on those outputs
  • Execution layers apply those actions to external systems
  • Feedback loops update the system’s state

Each of these elements operates with partial information. Each contributes to the next state of the system. The system evolves as these interactions accumulate. This pattern is especially visible in LLM-based and agentic AI systems, where context assembly, reasoning, and action selection are dynamically coupled. Under these conditions, behavior is dynamic and path dependent. Small differences early in a sequence can lead to large differences later on. A slightly suboptimal decision, repeated or combined with others, can push the system further away from its intended trajectory.

This is why behavior cannot be fully specified ahead of time. It is not simply implemented; it is produced. And because it is produced over time, it can also drift over time.

Observability without alignment

Modern observability systems are very good at telling us what a system is doing. We can measure latency, throughput, and resource utilization. We can trace requests across services. We can inspect logs, metrics, and traces in near real time. In many cases, we can reconstruct exactly how a particular outcome was produced. These signals are essential. They allow us to detect failures that disrupt execution. But they are tied to a particular model of correctness. They assume that if execution proceeds without errors and if performance remains within acceptable bounds, then the system is behaving as expected.

In systems exhibiting behavioral drift, that assumption no longer holds. A system can process requests efficiently while producing outputs that are progressively less aligned with its intended purpose. It can meet all its service-level objectives while still moving in the wrong direction. Observability captures activity. It does not capture alignment.

This distinction becomes more important as systems become more autonomous. In AI-driven systems, particularly those operating as long-lived agents, this gap between activity and alignment becomes operationally significant. The question is no longer just whether the system is working. It is whether it is still doing the right thing. This gap between activity and alignment is where many modern systems begin to fail without appearing to fail.

The limits of step-level validation

A natural response to this problem is to add more validation. We can introduce checks at each stage:

  • Validate retrieved data.
  • Apply policy checks to model outputs.
  • Enforce constraints before executing actions.

These mechanisms improve local correctness. They reduce the likelihood of obviously incorrect decisions. But they operate at the level of individual steps.

They answer questions like:

  • Is this output acceptable?
  • Is this action allowed?
  • Does this input meet requirements?

They do not answer:

  • Does this sequence of decisions still make sense as a whole?

A system can pass every validation check and still drift. Behavioral drift is not caused by invalid steps. It is caused by valid steps interacting in ways we did not anticipate. Increasing validation does not eliminate this problem. It only shifts where it appears, often pushing it further downstream, where it becomes harder to detect and correct.
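To make that gap concrete, here is a toy sketch (the validator, the 1% bias, and all numbers are invented purely for illustration): every individual step passes its local check, yet the accumulated state ends up far from where it began.

```python
def step_valid(value: float) -> bool:
    # Local validation: each individual state must stay within bounds.
    return abs(value) < 1000

def run(steps: int) -> tuple[float, bool]:
    state = 100.0
    all_steps_valid = True
    for _ in range(steps):
        state *= 1.01  # each step nudges the state by a harmless-looking 1%
        all_steps_valid &= step_valid(state)
    return state, all_steps_valid

final, ok = run(50)
assert ok                  # every step passed its validation check...
assert 160 < final < 170   # ...yet the state has drifted ~64% from its start
```

No single step is invalid, so no step-level check fires; the misalignment only exists at the level of the whole trajectory.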

Coordination becomes the system

If correctness does not compose automatically, then what determines system behavior? Increasingly, the answer is coordination. In traditional distributed systems, coordination refers to managing shared state, ensuring consistency, ordering operations, and handling concurrency. In autonomous systems, coordination extends to decisions.

The system must coordinate:

  • Which information is used
  • How that information is interpreted
  • What actions are taken
  • How those actions influence future decisions

This coordination is not centralized. It is distributed across models, planners, tools, and feedback loops. In agentic AI architectures, this coordination spans model inference, retrieval pipelines, and external system interactions. The system’s behavior is not defined by any single component. It emerges from the interaction between them.

In this sense, the system is no longer just the sum of its parts. The system is the coordination itself. Failures arise not from broken components, but from the dynamics of interaction: timing, sequencing, feedback, and context. This also explains why small inconsistencies can propagate and amplify. A slight mismatch in one part of the system can cascade through subsequent decisions, shaping the trajectory in ways that are difficult to anticipate or reverse.

Control planes introduce structure, not assurance

One response to this complexity is to introduce more structure. Control planes, policy engines, and governance layers provide mechanisms to enforce constraints at key decision points. They can validate inputs, restrict actions, and ensure that certain conditions are met before execution proceeds. This is an important step. Without some form of structure, it becomes difficult to reason about system behavior at all. But structure alone is not sufficient.

Most control mechanisms operate at entry points. They evaluate decisions at the moment they are made. They determine whether a particular action should be allowed, whether a policy is satisfied, and whether a request can proceed. The problem is that many of the failures in autonomous systems do not originate at these entry points. They emerge during execution, as sequences of individually valid decisions interact in unexpected ways. A control plane can ensure that each step is permissible. It cannot guarantee that the sequence of steps will produce the intended outcome. This distinction is subtle but important: control provides structure, but not assurance.

From events to trajectories

Traditional monitoring focuses on events. A request is processed. A response is returned. An error occurs. Each event is evaluated independently. In systems exhibiting behavioral drift, behavior is better understood as a trajectory. A trajectory is a sequence of states connected by decisions. It captures how the system evolves over time. Two trajectories can consist of individually valid steps and still produce very different outcomes. One remains aligned. The other drifts. This represents a shift from failure as an event to failure as a trajectory, a distinction that traditional system models are not designed to capture.

Correctness is no longer about individual events. It is about the shape of the trajectory. This shift has implications not just for how we monitor systems, but for how we design them in the first place.

Detecting drift and responding in motion

If failure manifests as drift, then detecting it requires a different set of signals. Instead of looking for errors, we need to look for patterns:

  • Changes in how similar situations are handled
  • Increasing variability in decision sequences
  • Divergence between expected and observed outcomes
  • Instability in response patterns

These signals are not binary. They do not indicate that something is broken. They indicate that something is changing. The challenge is that change is not always failure. Systems are expected to adapt. Models evolve. Data shifts. The question is not whether the system is changing. It is whether the change remains aligned with intent. This requires a different kind of visibility, one that focuses on behavior over time rather than isolated events. Once drift is identified, the system needs a way to respond. Traditional responses (restart, rollback, stop) assume failure is discrete and localized. Behavioral drift is neither.
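A minimal sketch of what trajectory-level monitoring could look like (the window size, threshold, and drift pattern are all invented for the example): instead of judging each event, compare a rolling window of outcomes against a baseline.

```python
from collections import deque
from statistics import mean

class DriftMonitor:
    """Flag drift when a rolling window of outcomes diverges from a baseline."""

    def __init__(self, baseline: float, window: int = 20, threshold: float = 0.1):
        self.baseline = baseline
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, outcome: float) -> bool:
        """Record one outcome; return True once the trajectory has drifted."""
        self.window.append(outcome)
        if len(self.window) < self.window.maxlen:
            return False  # not enough history yet
        # Relative divergence of the recent window from the baseline.
        divergence = abs(mean(self.window) - self.baseline) / self.baseline
        return divergence > self.threshold

monitor = DriftMonitor(baseline=1.0)
drifted_at = None
for i in range(100):
    outcome = 1.0 + i * 0.005  # gradual drift; no single outcome is an error
    if monitor.observe(outcome) and drifted_at is None:
        drifted_at = i
assert drifted_at == 30  # the pattern, not any event, trips the monitor
```

No individual outcome would trigger an event-level alert; only the shape of the sequence does.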

What is needed is the ability to influence behavior while the system continues to operate. This might involve constraining action space, adjusting decision selection, introducing targeted validation, or steering the system toward more stable trajectories. These are not binary interventions. They are continuous adjustments.

Control as a continuous process

This perspective aligns with how control is handled in other domains. In control systems engineering, behavior is managed through feedback loops. The system is continuously monitored, and adjustments are made to keep it within desired bounds. Control is no longer just a gate. It becomes a continuous process that shapes behavior over time.

This leads to a different definition of reliability. A system can be available, responsive, and internally consistent—and still fail if its behavior drifts away from its intended purpose. Reliability becomes a question of alignment over time: whether the system remains within acceptable bounds and continues to behave in ways consistent with its goals.

What this means for system design

If behavior is trajectory-based, then system design must reflect that. We need to monitor patterns, understand interactions, treat behavior as dynamic, and provide mechanisms to influence trajectories. We are very good at detecting failure as breakage. We are much less equipped to detect failure as drift. Behavioral drift accumulates gradually, often becoming visible only after significant misalignment has already occurred.

As systems become more autonomous, this gap will become more visible. The hardest problems will not be systems that fail loudly, but systems that continue working while gradually moving in the wrong direction. The question is no longer just how to build systems that work. It is how to build systems that continue to work for the reasons we intended.

13:28

Freexian Collaborators: Monthly report about Debian Long Term Support, March 2026 (by Santiago Ruano Rincón) [Planet Debian]

The Debian LTS Team, funded by [Freexian’s Debian LTS offering](https://www.freexian.com/lts/debian/), is pleased to report its activities for March.

Activity summary

During the month of March, 20 contributors have been paid to work on Debian LTS (links to individual contributor reports are located below).

The team released 24 DLAs fixing 250 CVEs.

We also welcomed two new members to the team, Lukas Märdian and Emmanuel Arias, who actually started contributing to the LTS project several months ago.

The team continued preparing security updates in its usual rhythm. Beyond the updates targeting Debian 11 (“bullseye”), which is the current release under LTS, the team also proposed updates for more recent releases (Debian 12 (“bookworm”) and Debian 13 (“trixie”)), including Debian unstable. We highlight several notable security updates here below.

  • ansible (DLA 4502-1), prepared by Lee Garret in collaboration with Jochen, fixing a vulnerability that allows attackers to bypass unsafe content protections
  • asterisk (DLA 4515-1), prepared by Lukas Märdian, fixing four CVEs that include possible privilege escalations.
  • gimp (DLA 4500-1), prepared by Thorsten, fixing four CVEs related to denial of service or execution of arbitrary code.
  • gst-plugins-base1.0 and gst-plugins-ugly1.0 (DLA-4514-1, DLA-4516-1, respectively), both prepared by Utkarsh, addressing vulnerabilities that may lead to arbitrary code execution.
  • imagemagick, released by Bastien Roucariès (DLA 4497-1) fixing multiple vulnerabilities that could lead to information leaks, bypass of security policies, denial of service or arbitrary code execution.
  • libpng1.6 (DLA 4521-1), prepared by Tobias Frost, fixing an arbitrary code execution vulnerability
  • linux: Ben Hutchings released DLA 4498-1 and DLA 4499-1 for linux 5.10 and linux 6.1, respectively. Those updates especially address the “CrackArmor” flaw.
  • ruby-rack (DLA 4505-1), prepared by Utkarsh Gupta, addressing two vulnerabilities
  • strongswan (DLA 4512-1), prepared by Thorsten Alteholz, fixing a Denial of Service vulnerability
  • roundcube (DLA 4517-1) prepared by Guilhem Moulin, who discovered that one of the fixes provided by upstream was incomplete.

Contributions from outside the LTS Team:

As usual, the thunderbird update, released as DLA 4511-1, was prepared by its maintainer Christoph Goehre. Thanks a lot for his continuous contributions.

The LTS Team has also contributed with updates to the latest Debian releases:

  • Andreas Henriksson completed the uploads of glib2.0 for both trixie and bookworm
  • Arnaud Rebillout: python-cryptography for trixie
  • Arnaud and Bastien worked together to prepare a ca-certificates-java release for unstable
  • Bastien completed the upload of gpsd for trixie that was proposed in January.
  • Bastien uploaded a regression update of apache2 for trixie
  • Bastien prepared a zabbix point update for trixie
  • Bastien, in collaboration with Markus, released netty updates for trixie and bookworm (DSA 6160-1)
  • Daniel Leidert proposed python-tornado releases for both trixie and bookworm.
  • Daniel also prepared a python-authlib update for trixie
  • Guilhem prepared a mapserver update for bookworm.
  • Lucas Kanashiro proposed merge requests to fix three CVEs in erlang for both trixie and bookworm
  • Sylvain Beucler continued the work to replace p7zip with 7zip in the different supported releases, and proposed a point update for bookworm
  • Tobias prepared trixie and bookworm security updates, released as DSA-6189-1
  • Utkarsh prepared trixie and bookworm security update for ruby-rack, released as DSA-6180-1

Individual Debian LTS contributor reports

Thanks to our sponsors

Sponsors that joined recently are in bold.

13:07

CodeSOD: Lint Brush Off [The Daily WTF]

A few years back, C# added the concept of "primary constructors". Instead of declaring the storage for class members and then initializing them in the constructor, you can annotate the class itself with the required fields, and C# automatically generates a constructor for you. It's all very TypeScript and very Microsoft, and certainly cuts down on some boilerplate.

Esben B's team isn't really using them in many places, but they are using a linter which is opinionated about them. So this in-line constructor causes the linter to complain:

    public DocumentNetworkController(ILookupClient service)

The linter wants you to switch this to a primary constructor. Esben didn't want to do that, and didn't want to change the global linter configuration, and so added a pragma to disable that particular warning:

#pragma warning disable IDE0290 // Use primary constructor
    public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290

The linter didn't like this. It threw a new warning: that this suppression wasn't needed. Which was news to Esben, as clearly the suppression was needed if you wanted to make the warnings go away. The obvious solution was to disable the warning that you didn't need to disable the warning:

#pragma warning disable IDE0079, IDE0290 // Use primary constructor
    public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290, IDE0079

Except this doesn't work. These pragmas take effect on the next line, which means you can't disable IDE0079 on the same line as IDE0290 and expect it to work. Which means the final version of the code looked like this:

#pragma warning disable IDE0079 // Disable warning about not needed supression
#pragma warning disable IDE0290 // Use primary constructor
    public DocumentNetworkController(ILookupClient service)
#pragma warning restore IDE0290, IDE0079

Esben writes:

So the nice recommendation to use a primary ctor ended up with 3 lines of annoying boilerplate code. Good times \o/

While yes, this is frustrating, I will say there's an element of "when the table saw keeps taking fingers off, that may be more of a you problem." I don't know the details, so I can't say, "just change the linter config or adopt its recommendation" and claim that the problem goes away, but when the tool hurts you, it's a definite sign of one of two things: it's either the wrong tool, or you're using it wrong.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

12:14

Spectre [RevK®'s ramblings]

A "Spectre" is a new shape.

Yes, I did say new shape! It is pretty incredible that there can be any such thing as a new shape, I mean, how do we not know all the shapes already? Also, to be fair, it is a reasonable bet that the ancient Greeks knew of this and forgot to write anything down. But certainly in my lifetime - this is indeed a new shape. Discovered in 2023!

So what makes it special? There is, of course, a whole Wikipedia article on what is called The Einstein Problem, but I'll try and explain it simply.

This shape tessellates. Basically that means you can tile your bathroom wall with it - the shape fits together with itself to cover a surface with no gaps. Lots of shapes do this: squares, triangles, hexagons, and so on. You can rotate the tiles if needed (e.g. for triangles you have to). There are many that do not, such as regular pentagons and circles. You cannot tile a wall with circles without leaving gaps.

So what? Well, most tessellating shapes create a repeating pattern. Hexagons make a familiar honeycomb pattern for example. But with the Spectre you can make a pattern that does not repeat. In fact, you cannot make a repeating pattern with it at all, no matter how hard you try. Yes, some groups of tiles may appear the same in other places but even then these do not form a regular pattern, at any level.

There is some debate over the rules - this was, it seems, a competition. The rules allowed you to turn a tile over. The researchers created a shape called the "Hat" which worked but some of the tiles had to be turned over. People, quite reasonably, said "If I want to tile my bathroom wall I have to buy two sets of tiles". So the researchers came out with the "Spectre" a year later, and that works without turning over tiles. In fact, if you can turn over tiles, you can make a repeating pattern with it.

But basically, until this was discovered, nobody even knew if a forced aperiodic tessellating monotile shape even existed. That is what makes it a new shape.

You can now tile your bathroom wall with one type of tile and it is a non repeating pattern.

But how?

Well, you could just try placing randomly where they fit, but you quickly end up with gaps that are not Spectre shaped, and have to back track and try again.

However, there is an algorithm, published by Simon Tatham, here. I'd like to thank him for his work, but I have a word of caution if you want to use his paper. I also appreciate, as a coder, the counting from 0 all the way through.

It just so happens I had an idea how to use this shape, for reasons which will become apparent later this year I hope. So I wanted to code generating a surface covered with these tiles. Long story short - here it is, open source on Codeberg.

But this took me a couple of days, which is a long time for me, so let me explain the issues.

The principle is simple: a recursive set of meta tiles, which are groups of tiles in a pattern (represented as hexagons in the paper).

You can start at the top, pick a meta tile from a set of 9 different types, and that tells you how to place 7 or 8 sub-tiles in a honeycomb pattern, and their types (from the set of 9) and orientation. You repeat as far as you want, and at the last level you actually have Spectre tiles, not hexagons.

You can also start at the bottom with a Spectre, and decide which of the meta tile types it shall be at random. You can then find a meta tile which includes that type, and it tells you the neighbouring Spectre tiles to place. This is then a meta tile which you can again decide is part of a higher level meta tile randomly, and that tells you what meta tiles to place for its neighbours and work down to Spectre tiles below. So you have one upward process in a loop, and at each point have a recursive downward process placing 6 or 7 neighbours at each level down. This is the approach I took.

If I started at the top I would pick one of 9 meta tiles, and maybe one of 12 orientations, and that would be it, the Spectres under that are determined by the algorithm and not any more random. By starting at the bottom, I pick one of 12 orientations and place the first tile, and pick one of 9 meta tiles, but at each level as I go up I get to pick which higher meta tile it is in, and in some cases which of two sub tiles it is. This is random at each level and makes for a much more randomised final output.
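The recursive growth is easy to see with a toy substitution system. To be clear, this is NOT the real Spectre substitution from Simon Tatham's paper: the two-type table below is invented purely to show the shape of the recursion and the roughly 8-9x growth per level.

```python
# Invented toy table: each meta tile type expands into 8 or 9 child types,
# and the recursion bottoms out at actual tiles.
SUBSTITUTION = {
    "A": ["A"] * 4 + ["B"] * 4,  # 8 children
    "B": ["A"] * 5 + ["B"] * 4,  # 9 children
}

def expand(tile_type, level):
    """Count the leaf (real) tiles a meta tile of this type produces."""
    if level == 0:
        return 1  # an actual tile, not a meta tile
    return sum(expand(child, level - 1) for child in SUBSTITUTION[tile_type])

assert expand("A", 1) == 8
assert expand("A", 2) == 4 * 8 + 4 * 9  # 68 tiles two levels down
assert expand("A", 4) > 4000            # a handful of levels covers a lot
```

This is why only a few levels of recursion are ever needed: the tile count multiplies by 8 or 9 at every level, whichever end you start from.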

So what was confusing me?

The distraction that took me most of the time trying to get this working is the rather excellent graphic representations in Simon's paper. They show a hexagon meta tile substituted with 7 or 8 joined hexagon meta tiles, and a hexagon meta tile substituted with a Spectre tile. These diagrams have specific orientation and rotation, so one is lulled into a sense of simplicity, that you are literally replacing one hexagon with a set of them, each with specific orientation.

Looks pretty, and simple, but this is NOT the case!

The diagrams are actually simply a mapping, a look-up table for what gets joined to what and on what side. The hexagon has 6 sides; basically, at the lowest level, each Spectre is joined to exactly 6 other Spectre tiles (there is a special case for the G meta tile where it is two Spectre tiles, the others are all one, just to add to the fun). So you have each Spectre tile as having 6 connection sets of edges - but these are not simple, as each of the 9 types of meta tile is a Spectre with a specific set of edges for each of the 6 sides.

The numbering is the key - on the yellow tile there is a side 0, which is actually the three edges 8, 7, and 6 (marked 0.0, 1.0, and 2.0). On the purple, there is a side 4, which is edges 13, 12, and 11 (marked 0.4, 1.4, and 2.4). But side 4 on the yellow tile is only edges 12 and 11 (marked 0.4 and 1.4). But you can see yellow side 0 and purple side 4 would fit together. Some of these 6 sides are one edge (see purple side 3), but can be as many as 6 edges in some cases. Each of the 9 meta tiles has a specific set of edges making up the 6 joint points to other tiles. Each similarly has a set of edges on the hexagon pattern, which is different for each type.

So in practice you connect the defined edges, and they end up nothing like hexagonal tiles. In fact they twist and distort all over the place. The graphical representations are really not helpful in my view, sorry. Also, I would have numbered side.subedge so 1.0, 1.1, 1.2, not 0.1, 1.1, 2.1, personally.

Once I grasped that logic, the code became simple. As I say, you start with one Spectre, and connect neighbours. You only need to know the specific 6 sets of edges for that tile. Then when you use the meta tile rules you know which set of edges that connects to on the neighbouring Spectre. It is pretty simple to then align the new Spectre connected on that edge. Having placed the 7 or 8 Spectres to make a meta tile, you then just need to know the 6 joining points on that meta tile, which are themselves 6 sets of specific Spectre tile edges within the meta tile.

One issue is that these connecting sides are several edges, so I actually picked one end, e.g. for yellow it would be 8, 5, 2, 0, 13, 12, 10, and for purple it would be 8, 5, 3, 0, 13, 10 as the 6 outgoing edges. These are the first edge of each side (numbered 0.x). When placing a Spectre next to one of these, you pick the other ends, so 6, 3, 1, 13, 11, 9 for yellow and 6, 4, 1, 0, 11, 9 for purple, the last edge for each side.

So connecting yellow side 0 to purple side 4, you connect yellow edge 8 to purple edge 11. The 11 being the incoming edge. This means you don't have to think of the sets of edges, just one edge on one Spectre tile for each of the 6 outgoing sides of your meta tile, at any level. This is quite a small amount of data to hold in a simple recursive algorithm.

Another thing I got wrong is that I stored a list of tiles, and referenced them as the 6 sides. Each tile with a starting point and rotation so I could plot it, and align new attached tiles. But this really is not necessary, and ends up using memory. I can plot the tiles (output a path to SVG) as I go, and I just need the 6 sides of a meta tile to be the 6 sets of position, rotation, and outgoing edge number. The only memory usage is a small set of data for the level of recursion. You quickly cover a very large number of tiles in each level (multiplying by 8 or 9 each time), so need very few levels of recursion.

Co-ordinates

One issue is coordinates. Ultimately the output uses pixels or millimetres to several decimal places, and indeed I allow a final output rotation. But internally all lines on the Spectre tiles are at multiples of 30 degrees. Even so you do not want to use floating point - rounding errors will accumulate as you recurse and lead to tiles not quite aligning, and it also becomes impossible to test whether two points are the same (why you need this is explained below). So the solution is to use coordinates that are integers! How do you do that with 30 degree angles? Well, simple - each distance is an integer multiple of sin60 plus an integer multiple of cos60 - at the final stage you multiply out and add these. You can also make a simple table of one-unit-distance integers for each 30 degree angle, and a table of the relative integer offsets for each point on a Spectre at each angle. This means no floating point maths, nor sin/cos, until you actually output to SVG.
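The exact-coordinate idea can be sketched like this (the class name and representation are mine, not RevK's actual C code): every coordinate is stored as integers m and n meaning (m + n·√3)/2, so points at multiples of 30 degrees compare exactly and no rounding accumulates; floats only appear at output time.

```python
import math

class Coord:
    """A value of the form (m + n*sqrt(3)) / 2, stored as exact integers."""
    def __init__(self, m: int, n: int):
        self.m, self.n = m, n
    def __add__(self, other):
        return Coord(self.m + other.m, self.n + other.n)
    def __eq__(self, other):
        return (self.m, self.n) == (other.m, other.n)  # exact, no epsilon
    def __hash__(self):
        return hash((self.m, self.n))
    def to_float(self) -> float:
        return (self.m + self.n * math.sqrt(3)) / 2  # only for SVG output

# Unit step (x, y) for each multiple of 30 degrees: cos and sin of 30k
# degrees are always expressible as (m + n*sqrt(3)) / 2.
UNIT = [(Coord(2, 0), Coord(0, 0)),   (Coord(0, 1), Coord(1, 0)),
        (Coord(1, 0), Coord(0, 1)),   (Coord(0, 0), Coord(2, 0)),
        (Coord(-1, 0), Coord(0, 1)),  (Coord(0, -1), Coord(1, 0)),
        (Coord(-2, 0), Coord(0, 0)),  (Coord(0, -1), Coord(-1, 0)),
        (Coord(-1, 0), Coord(0, -1)), (Coord(0, 0), Coord(-2, 0)),
        (Coord(1, 0), Coord(0, -1)),  (Coord(0, 1), Coord(-1, 0))]

# One step at 30 degrees then one at 210 degrees returns EXACTLY to origin.
x = Coord(0, 0) + UNIT[1][0] + UNIT[7][0]
y = Coord(0, 0) + UNIT[1][1] + UNIT[7][1]
assert x == Coord(0, 0) and y == Coord(0, 0)
```

With floats, that round trip would only land near the origin; with the integer pairs it lands on it, which is what makes the edge-matching below reliable.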

Finishing

One problem which I don't think Simon's paper covers, and was unexpected, is knowing when to stop! I am trying to cover a rectangular area. How do I know I have got there?! I could just set a maximum level, but using random choices for meta tiles at each level, one can find the whole thing quickly gets way bigger than the area but ends up one-sided, leaving gaps in your rectangle. If I had gone top down, or always picked the current tile being in the middle of the meta tile, I could maybe work out a max level, but that is not what I am doing.

After a bit of head scratching I finally worked out a way. I wanted to make a grout line on top of the final tile output so I decided to keep track of all the edges I placed. A simple start/end for each unit length edge in a list. This can be made as I go along, and the integers mean I can always match to an existing edge to plot the grout efficiently as a series of lines.

This also meant I could keep two lists: one for the first use of an edge, moving an edge over to a second list on its second use, when a tile is attached on the other side.

I could also check each edge I added to a list to see if it falls (even one end) within my target rectangle, and so only keep edges I need.

But this has a useful side effect: as soon as my list of single-use edges within the rectangle is empty, I must have 100% covered the rectangle, because no edge in the target area lacks a tile on one side. I can then immediately abort the whole placement process at every level just by checking that the list of single-use edges is empty.
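The bookkeeping can be sketched like this (function and variable names are mine, not the author's): exact integer endpoints make each unit edge a hashable key, so moving an edge from the single-use list to the double-use list is a set operation.

```python
# Sketch of the edge lists. An edge is identified by its two endpoints,
# which are exact (hashable) integer coordinates, normalised so that both
# bordering tiles produce the same key.

def add_edge(p, q, single_use, double_use):
    """Record one unit edge of a newly placed tile."""
    edge = (min(p, q), max(p, q))    # normalise endpoint order
    if edge in single_use:
        single_use.discard(edge)     # a second tile now borders this edge:
        double_use.add(edge)         # it becomes an interior grout line
    else:
        single_use.add(edge)

def rectangle_covered(single_use):
    """If no single-use edges remain (filtering to the target rectangle is
    assumed done at insertion time), every in-area edge has tiles on both
    sides, so the rectangle is fully covered."""
    return not single_use
```

The `double_use` set then doubles as the grout-line list for plotting.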

Cropping

The final challenge was the edge of the rectangle. Firstly, the SVG has a viewBox, so I can simply plot any tiles that fall even slightly within the rectangle, and the same for grout lines. These go off the edge, but you cannot see that when looking at the final SVG.

This has a problem: if you want to use the SVG in another design, as I did, the embedding does not inherently crop the edges. But SVG has an answer for this: clipPath. It allows me to clip an object to a path, a rectangle in this case. Perfect.
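For reference, the mechanism is just this (a hand-rolled sketch; the `clipped_svg` function and its parameters are mine, purely illustrative):

```python
# Minimal illustration of SVG clipPath: a rectangle clip applied to a
# group of tile paths.

def clipped_svg(width, height, tile_paths):
    paths = "".join(f'<path d="{d}"/>' for d in tile_paths)
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'viewBox="0 0 {width} {height}">'
        f'<clipPath id="crop"><rect width="{width}" height="{height}"/></clipPath>'
        f'<g clip-path="url(#crop)">{paths}</g>'
        f'</svg>'
    )
```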

The snag is that support for clipPath is not that good. I don't know why, but lots of things mess it up, ignoring it or barfing in some other way. One was my resin printer, which simply ignored the whole block of tiles if it had a clipPath.

So I ended up making a whole set of path generation functions which understood cropping the path to the edge of the rectangle. I could not sleep, and ended up coding this at 2am.

The final result is I can now make an SVG of a randomised set of tessellating aperiodic Spectre monotiles, with loads of options. I even added a sort of bevel edge to the tiles with a lighting based shade.

This is shaded per level, showing how the tiles actually join up at a meta level.



12:07

What Anthropic’s Mythos Means for the Future of Cybersecurity [Schneier on Security]

Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a limited number of companies.

The news rocked the internet security community. There were few details in Anthropic’s announcement, angering many observers. Some speculate that Anthropic doesn’t have the GPUs to run the thing, and that cybersecurity was the excuse to limit its release. Others argue Anthropic is holding to its AI safety mission. There’s hype and counterhype, reality and marketing. It’s a lot to sort out, even if you’re an expert.

We see Mythos as a real but incremental step, one in a long line of incremental steps. But even incremental steps can be important when we look at the big picture.

How AI Is Changing Cybersecurity

We’ve written about shifting baseline syndrome, a phenomenon that leads people—the public and experts alike—to discount massive long-term changes that are hidden in incremental steps. It has happened with online privacy, and it’s happening with AI. Even if the vulnerabilities found by Mythos could have been found using AI models from last month or last year, they couldn’t have been found by AI models from five years ago.

The Mythos announcement reminds us that AI has come a long way in just a few years: The baseline really has shifted. Finding vulnerabilities in source code is the type of task that today’s large language models excel at. Regardless of whether it happened last year or will happen next year, it’s been clear for a while this kind of capability was coming soon. The question is how we adapt to it.

We don’t believe that an AI that can hack autonomously will create permanent asymmetry between offense and defense; it’s likely to be more nuanced than that. Some vulnerabilities can be found, verified, and patched automatically. Some vulnerabilities will be hard to find but easy to verify and patch—consider generic cloud-hosted web applications built on standard software stacks, where updates can be deployed quickly. Still others will be easy to find (even without powerful AI) and relatively easy to verify, but harder or impossible to patch, such as IoT appliances and industrial equipment that are rarely updated or can’t be easily modified.

Then there are systems whose vulnerabilities will be easy to find in code but difficult to verify in practice. For example, complex distributed systems and cloud platforms can be composed of thousands of interacting services running in parallel, making it difficult to distinguish real vulnerabilities from false positives and to reliably reproduce them.

So we must separate the patchable from the unpatchable, and the easy to verify from the hard to verify. This taxonomy also provides us guidance for how to protect such systems in an era of powerful AI vulnerability-finding tools.

Unpatchable or hard to verify systems should be protected by wrapping them in more restrictive, tightly controlled layers. You want your fridge or thermostat or industrial control system behind a restrictive and constantly updated firewall, not freely talking to the internet.

Distributed systems that are fundamentally interconnected should be traceable and should follow the principle of least privilege, where each component has only the access it needs. These are bog-standard security ideas that we might have been tempted to throw out in the era of AI, but they’re still as relevant as ever.

Rethinking Software Security Practices

This also raises the salience of best practices in software engineering. Automated, thorough, and continuous testing was always important. Now we can take this practice a step further and use defensive AI agents to test exploits against a real stack, over and over, until the false positives have been weeded out and the real vulnerabilities and fixes are confirmed. This kind of VulnOps is likely to become a standard part of the development process.

Documentation becomes more valuable, as it can guide an AI agent on a bug-finding mission just as it does developers. And following standard practices and using standard tools and libraries allows AI and engineers alike to recognize patterns more effectively, even in a world of individual and ephemeral instant software—code that can be generated and deployed on demand.

Will this favor offense or defense? The defense eventually, probably, especially in systems that are easy to patch and verify. Fortunately, that includes our phones, web browsers, and major internet services. But today’s cars, electrical transformers, fridges, and lampposts are connected to the internet. Legacy banking and airline systems are networked.

Not all of those are going to get patched as fast as needed, and we may see a few years of constant hacks until we arrive at a new normal: where verification is paramount and software is patched continuously.

This essay was written with Barath Raghavan, and originally appeared in IEEE Spectrum.

10:35

Puddles [Seth's Blog]

When there is motion, it creates an impact on the environment.

First, the path is barely noticeable. But then, others see the hint of a path and walk on it, making it more clear. Finally, the path becomes the route.

Sometimes there’s a small rut. But a rut shifts gravity and wheels or feet land in the rut, making it deeper. This is how moguls appear on ski hills as well.

When it rains, the paths and ruts fill with water, and we call them puddles.

Of course, puddles are a metaphor.

Puddles only exist when there’s been some sort of motion that caused a depression that could collect the water. If you want to see how the audience is responding, how the culture is shifting, how your customers are acting–look for the puddles.

Fill in the rut and a new one will appear somewhere else. There are almost always puddles.

10:21

Abhijith PA: Patience could've saved me time. [Planet Debian]

If I had been patient, it would have saved me time. One such instance follows.

From my early blogs, you might know I use mutt to do email. Soon after I got along with mutt, I started using notmuch, because limit searches in mutt are always a pain when you have multiple folders. And what better tool out there to bind these two than notmuch-mutt.

notmuch-mutt provides three macros by default.

macro index <F8> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<shell-escape>notmuch-mutt -r --prompt search<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: search mail"
macro index <F9> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt -r thread<enter>\
<change-folder-readonly>`echo ${XDG_CACHE_HOME:-$HOME/.cache}/notmuch/mutt/results`<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: reconstruct thread"
macro index <F6> \
"<enter-command>set my_old_pipe_decode=\$pipe_decode my_old_wait_key=\$wait_key nopipe_decode nowait_key<enter>\
<pipe-message>notmuch-mutt tag -- -inbox<enter>\
<enter-command>set pipe_decode=\$my_old_pipe_decode wait_key=\$my_old_wait_key<enter>" \
      "notmuch: remove message from inbox"

One for search, one for reconstructing threads and one for manipulating tags, which I missed.

Now my impatient part. I had already mapped F6 for my folder movements, and in my initial days with notmuch I only used search, so I never cared about the F6 macro provided by notmuch-mutt. As time went by I got very comfortable with notmuch; I was stretching my notmuch legs. I started to live more on the notmuch search results of date:today tag:unread than on the mutt index. Now to the problem: since notmuch-mutt dumps all results to a temporary maildir location, flag changes cannot be carried back to the original maildir. This was annoying, because you need to distinguish which mail you have read and which you haven't when you are subscribed to most of the Debian mailing lists.

I was under the impression that notmuch-mutt was not capable of doing so, and I just went on like that without checking the docs. I started doing all kinds of crazy hacks to sync these maildirs.

I even started reading notmuch-mutt codebase.

Later, I settled on notmuch-vim, because with it I could manipulate flags and sync them back from notmuch to the maildir.

And while searching for something, I accidentally revisited the notmuch-mutt macro page and saw the tag manipulation macro. I was like :( .

If I had read about the third macro patiently when I added it to my config, I could have saved time by not building ugly hacks around it.

I think I learned my lesson.

09:35

Mustang VixSkin® Review by Jey Pawlik [Oh Joy Sex Toy]

Mustang VixSkin® Review by Jey Pawlik

Save a horse, ride a Mustang VixSkin® dildo from Vixen Creations! Join me on this review of the Mustang, my valiant steed for so many years. I was actually surprised that there hadn’t been a review on OJST about this already, so I dove in and gave this dildo the cowboy review it needed! Actually, I’ve […]

07:35

Pluralistic: Vicky Osterweil's "The Extended Universe" (28 Apr 2026) [Pluralistic: Daily links from Cory Doctorow]


Today's links



The Haymarket Books cover for Vicky Osterweil's 'The Extended Universe.'

Vicky Osterweil's "The Extended Universe" (permalink)

Vicky Osterweil's The Extended Universe: How Disney Killed the Movies and Took Over the World makes the kind of long, polemical, startling and illuminating argument that defines great cultural criticism; it's the sort of book that encapsulates the reasons I read criticism in the first place:

https://www.haymarketbooks.org/books/2525-the-extended-universe

My first brush with this kind of criticism came more than two decades ago, when I read John Kessel's now-classic "Creating the Innocent Killer," a critique of Orson Scott Card's Ender's Game, a book I had read and enjoyed enough to re-read several times:

https://johnjosephkessel.wixsite.com/kessel-website/creating-the-innocent-killer

Kessel's argument is that Card used Ender's Game to smuggle in some very ugly ideas, wrapped in a story that was compelling, even exhilarating. In Ender's Game, we meet Andrew "Ender" Wiggin, a small, physically weak boy possessed of a prodigious intellect and a great deal of sensitivity and empathy. Ender is tormented by an escalating series of aggressors, whom he retaliates against with overwhelming force, first to the point of lethality and then all the way to literal genocide. And here's where Card makes his move: Ender's sensitivity and empathy and intellect tell him that he must respond this way, because he can tell that his aggressors will not back off from their intention to harm him; and because Ender is so small and weak, he has to use whatever tactic his brilliant mind can devise, and if that tactic results in the death penalty for mere bullying, well, that's the bully's fault, not Ender's. Indeed, in dying at Ender's hands, these bullies re-victimize Ender, because Ender is a gentle, smart, wise, weak person, and these inescapable murders that he is goaded into committing are a stain on his soul that he can never wash away.

Before reading "Creating the Innocent Killer," I confess I didn't really understand what criticism was for. Like many people, I conflated "criticism" with "reviews," thinking of critical works as a species of inconveniently difficult-to-digest essays that might help me figure out which books to read and which movies to see.

Kessel's magnificent essay changed all that, and not in spite of the fact that Kessel had pointed out some very important problems with a book that I loved, but because of that fact. In helping me understand the ugliness hidden within something whose beauty and virtues I saw very clearly, Kessel taught me more about myself – about where my aesthetics and my values overlapped, and where they diverged. It was literally life-changing.

Like Kessel, Osterweil's 'Extended Universe' deals with media that I have a great deal of affection for – the products of the Walt Disney Company. Though I'm primarily interested in theme parks – I love a big, ambitious built environment of any description and Disney pursues these with a seriousness that few others can touch – the Disney films (and the films of the studios Disney purchased, like Marvel and Lucasfilm) are obviously intimately bound up in those theme park designs.

Osterweil has her own ambivalent affection for these movies. Like so many of us, she's been raised on them, and they've shaped how she sees the world and its stories. But – like me – Osterweil is deeply suspicious of capitalism, American imperialism, and the notion of "intellectual property," and she uses reviews of a dozen Disney films to make the case that Walt Disney and the studio he founded with his brother are standards-bearers for these odious forces, and not just in the overt ways that might immediately spring to mind, but also in subtle ways that can be teased out of a close reading of the films.

In so doing, Osterweil also makes a sharp and well-argued case that intellectual property, colonialism and racial oppression are all facets of the same drive, the drive of people who fancy themselves born to rule to dominate others, which requires that those others also be dehumanized and their work denigrated. When Walt Disney insisted that his be the only name associated with "his" movies, he was playing out the same logic that underpinned his virulent opposition to labor unions and his participation in American imperialism in Latin America.

As with Kessel, Osterweil's argument is full of surprises and illuminations that are especially vivid for those of us who have great affection for these works. As her chapter on Black Panther shows, this contradiction need not go unresolved. There is plenty of scope for fans to seize the reins of the narrative (and as her chapter on the reactionary backlash to the later Star Wars movies shows, it's not just the forces of progress and anti-racism who can pull off this move).

Like the very best criticism, Osterweil's book is more than a way to deepen your understanding of the material she dissects – it's a way to deepen your understanding of the world that produced it, and to deepen your understanding of yourself.


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#20yrsago Frank Zappa’s anti-censorship letter https://www.flickr.com/photos/mudshark/117551768/in/set-72057594090059726/

#15yrsago Chemistry kit with no chemicals https://web.archive.org/web/20110427212354/http://blog.makezine.com/archive/2011/04/chemistry-set-boasts-no-chemicals.html

#15yrsago Russian corruption: crooked officials steal multi-billion-dollar company, $230M tax refund, then murder campaigning lawyer https://web.archive.org/web/20110426045152/http://www.foreignpolicy.com/articles/2011/04/20/russia_s_crime_of_the_century?

#15yrsago Golden-age short-change cons https://web.archive.org/web/20110429014539/https://blog.modernmechanix.com/2011/04/26/tricks-of-short-change-artists/

#10yrsago Campaigners search Londoners’ phones to help them understand the Snoopers Charter https://www.youtube.com/watch?v=szN7DlmMLYg

#10yrsago Mitsubishi’s dieselgate: cheating since 1991 https://web.archive.org/web/20160427145038/https://www.cnet.com/roadshow/news/mitsubishi-cheated-fuel-economy-tests-since-1991/#ftag=CAD590a51e

#10yrsago Bellwether: Connie Willis’s classic, hilarious novel about the science of trendiness https://memex.craphound.com/2016/04/26/bellwether-connie-williss-classic-hilarious-novel-about-the-science-of-trendiness/

#5yrsago The Big U https://pluralistic.net/2021/04/26/moolah-boolah/#poison-ivies


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Stahle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026 (https://us.macmillan.com/books/9780374621568/thereversecentaursguidetolifeafterai/)
  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), FirstSecond, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, April 20, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America. Third draft completed. Submitted to editor.

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.
  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X

04:07

Ravi Dwivedi: A day in Vienna [Planet Debian]

On the 7th of September 2025, my friend Dione and I had a day trip to Vienna—the capital of Austria. We were attending a conference in Budapest, Hungary, which is 250 km from Vienna. So, it was a good opportunity to visit Vienna.

We took a morning train from Budapest to Vienna and got back to Budapest by night. However, booking these tickets turned out to be a bit complicated. There were many websites to book the train ticket—Hungarian Railways, Austrian Railways, and third-party sites such as Omio. All these websites had different prices for the same ticket.

I booked the tickets from the Hungarian Railways website as it was the cheapest. The train from Budapest to Vienna was €13, operated by Eurocity. Also, I had to pay €2 for the seat reservation on top. The train from Vienna to Budapest—operated by Railjet—was €21, along with €2 extra for reservation again—making it €23. The tickets for the two-way journey added up to €38.

The cost of these tickets varied depending on when one purchases them: the sooner you purchase, the lower the price. I bought my tickets 15 days ahead of the date of journey and paid just €38. In contrast, Dione booked just one day before her trip and paid around €100 for her tickets.

As for the seat reservation: long-distance trains in Europe usually charge extra for reserving a seat. This ensures that you get your preferred seat, such as a window or an aisle seat. You will get a seat on long-distance trains even without a reservation, because they do not sell more tickets than there are seats. However, we reserved our seats so that we could sit together. This helped us most on the return leg, from Vienna to Budapest, which was more crowded than the train we took from Budapest to Vienna in the morning.

On another note, reservation is mandatory on some trains in Europe, but ours wasn’t one of them. In addition, people also use rail passes, so an extra charge is required on top for reserving the seats for pass holders. On the other hand, local trains do not require seat reservations in general.

Our train’s scheduled departure was at 08:55 from the Budapest Kelenfold station. We reached the train station 40 minutes before the train’s scheduled departure. The Kelenfold station had free Wi-Fi, which was handy because I didn’t have a local SIM.

A departures board at Budapest Kelenfold station.

A departures board at Budapest Kelenfold station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A platform on Budapest Kelenfold station.

This is platform number 15 of Budapest Kelenfold station where we boarded our train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Our train arrived on time. I tried to find our coach number but could not find the numbers written anywhere on the side of the coach. Luckily, we were helped by a fellow passenger who directed me to look at the doors, where the numbers were mentioned clearly!

Then we got into our compartment and took our respective seats. Our tickets were checked twice - once while the train was in Hungary and the other when in Austria. Showing the PDF of the train ticket on our mobile to the ticket inspector was good enough for the purpose. Austria and Hungary are a part of the open transit Schengen area, which means this was the extent of the border control checks we had to go through.

Interior of the train.

Interior of our Budapest to Vienna train. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

The train also had free Wi-Fi, albeit with poor connection at times. There were no eatery options inside the train.

We deboarded at the Wien Hauptbahnhof station in Vienna. The journey was 250 km and took 2.5 hours, reaching Vienna at 11:25, which was the scheduled time.

A blue and white colored train on a railway platform

This blue colored train was the one we took for our Budapest to Vienna journey. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

A red colored train standing at the Vienna station

An ÖBB train standing at a platform of Vienna train station. ÖBB is the national carrier of Austria. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Wien Hauptbahnhof train station

Wien Hauptbahnhof train station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

At the station, we bought a 24-hour public transport pass from a ticket machine for €8. The pass gives unlimited access to all public transport in Vienna for 24 hours; mine was valid from the 7th of September at 11:34 to the 8th of September at 11:33. By comparison, a single ticket costs €2.40 and can be used once on any public transport in Vienna—trams, metros, and buses.

Therefore, the pass is a good deal if you are going to take at least four public transport trips in a day. Unlike the public transport pass I got in Budapest, the pass in Vienna was anonymous and not tied to the rider’s name.

Public transport pass for Vienna.

My public transport pass in Vienna.

We wanted to visit the Schönbrunn Palace, which was reachable by subway. To get to the subway station, we started by going outside the station building. But it was not outside, so we came back inside and realized that the subway was underground.

We took the subway and deboarded at the Schönbrunn subway station—the closest one to the palace. The ride was smooth; the train was pretty silent.

By the way, like Budapest, there were no AFC gates for boarding the subway in Vienna. The stations had ticket validators instead, where you are supposed to validate your tickets before getting into the subway.

Vienna subway

Instead of AFC gates, Vienna has ticket validators as in the picture. You need to tap your ticket in the validator before boarding the subway. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

These validators are in place to ensure that you use your ticket only once. Unlike AFC gates, which are present in metros of most of the countries I have been to, the ticket validators don’t act as a physical barrier to enter the boarding area.

If you board the metro without validating your ticket, you will be facing hefty fines upon getting caught. I have heard that the fine is around €100. On the other hand, if you have a public transport pass like we did, then you don’t need to validate it before boarding.

In addition, there were no annoying security checks either, unlike in Indian cities. In the Delhi metro, for example, you would need to scan your bags and pass through a security check before getting to the AFC gates.

Vienna subway

Vienna subway. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Now back to the story, after alighting at the Schönbrunn subway station, we walked to the Schönbrunn Palace. One can roam around outside the palace and click pictures for free. To go inside, however, requires buying tickets. The tickets for the palace can be booked in advance on the internet. We didn’t take the tickets in advance, as we decided to visit the palace at the last moment.

So we went to the ticket counter and found out that we needed to wait for 1 hour 40 minutes before going inside if we took the tickets at that moment. In addition, one ticket costs €44 (around 4000 Indian rupees). Since we had to return to Budapest in the evening and only had a few hours in the city, we decided not to go inside the palace. Instead, we clicked a few pictures outside the palace.

Photo of Schönbrunn Palace.

Schönbrunn Palace. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

The Schönbrunn Palace is a UNESCO World Heritage Site and a historically significant place. It served as one of the residences of the powerful Habsburg dynasty. The palace looked so good that my friend Dione said, “It seemed like the palace was built yesterday”. The remark applied to the other parts of Vienna we visited too; the subway stations, for example, also seemed like they were built yesterday.

A street near Schönbrunn Palace.

A street near Schönbrunn Palace. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Now, we wanted to go someplace to grab a bite. I asked my friend Urbec for suggestions on where to go. They suggested we visit the steps named Strudlhofstiege, which had the added benefit of being in a neighborhood with good bakeries and buildings.

So, we took the subway and deboarded at the Roßauer Lände station, followed by walking around a kilometer to reach the stairs.

A subway station in Vienna.

Roßauer Lände subway station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Platform of the _Roßauer Lände_ subway station.

Platform of the Roßauer Lände subway station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

stairs with road in the front and trees in the background. Blue sky can also be seen in the background.

The Strudlhofstiege steps. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

On the way, we were also looking for a place to eat. Unfortunately, it was Sunday, and Vienna closes on Sunday. That means most of the shops—including bakeries and cafés—are closed. Only places like railway stations have shops open on Sundays.

By the way, walking around in the streets of Vienna was a treat. The streets were not crowded (as it was not exactly a touristy neighborhood) and had good pedestrian infrastructure, with clean streets and separate cycling tracks. The buildings were also beautiful.

Buildings and streets in Vienna.

A random street in Vienna.

Buildings and streets in Vienna.

Another street in Vienna.

After some walking, we found a restaurant open. I grabbed the menu to check the prices. A lady at the shop asked me what I was doing, and I told her that I was browsing the menu. She said that the menu was in German. I don’t know how she knew that we didn’t know German, but it seemed like a racist thing to be told.

We roamed around further and found a café by the name of Blue Orange, where we ordered coffee and croissants. When we got our order, the waiter told us that they were having some issues, so they wouldn’t charge us for the croissant if it wasn’t good.

A picture of Blue Orange café. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

My friend and I took a bite, and neither of us liked the croissant. After some time, the waiter came over and asked whether the croissant was okay, to which we said no, so they didn’t charge us for it. This was the first time something like this had happened to me; it felt like I was in a different world. I added a small tip at the end for this gesture, which I had to put in a jar at the counter.

The cappuccino I ordered was €4.50, while the espresso that Dione ordered was €3.60. The croissant would have been €3.60. I remember Paris having cheaper croissants!

When the waiter brought our drinks out, they gave me the espresso and Dione the cappuccino without asking. Dione found this funny because there is a stereotype in her country (Australia) that men drink strong black coffee and women drink milky drinks like cappuccinos. She found it interesting that this stereotype seems to exist in Austrian culture too.

We hopped on a tram to reach the nearest subway station and went to the Wien Hauptbahnhof station to have something before we caught our return train to Budapest.

Trams in Vienna. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

At the station, I had Esterhazyschnitten and Punschkrapfen (thanks, Urbec, for the suggestion). The lady at the shop warned me that punschkrapfen had alcohol in it, to which I said okay.

Esterhazyschnitten was a cake made of almonds, while punschkrapfen was a jam-filled sponge cake, soaked in rum. Esterhazyschnitten was my favorite out of the two. The punschkrapfen was too sweet for my taste.

Punschkrapfen. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Esterhazyschnitten. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

While the station was well-built, there were a couple of things we didn’t like about Wien Hauptbahnhof. There were no seats inside the station, so we had to eat outside the building. Also, the toilets were paid: it cost 50 cents to use them.

The Vienna train station had departure boards all over the place. So, we went to the platform our train was to arrive on.

Departure boards in Vienna displaying information about the trains. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

Platform and tracks at Wien Hauptbahnhof station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

When our train arrived, we had some difficulty locating our coach. This train was a different service (Railjet) from the Eurocity we had taken from Budapest to Vienna in the morning. Each coach position was indicated by a digital board on the platform displaying the coach number, so reading the numbers wasn’t the problem: even after finding them, our coach didn’t appear where we expected in the sequence.

When we couldn’t find our coach for a while, we asked a ticket inspector standing on the platform. He directed us toward the front of the train, so we started running, as we didn’t know how long the train would stop.

As we ran toward our coach, we discovered that the locomotive of the rear train was coupled to the last coach of the train in front. At that point, we realized the train was actually two trains joined together. At a later station, the rear train parted ways and headed toward Vienna Airport.

Interior of the train we took from Vienna to Budapest. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

This is the train we took for our return journey from Vienna to Budapest, standing on a platform at Budapest Kelenfold station. Photo by Ravi Dwivedi, released under CC-BY-SA 4.0.

We had a smooth journey and reached Budapest a couple of hours later.

Vienna is a beautiful city; we enjoyed being there, and we would like to visit the city again!

That’s it for now. Signing off. See you in the next one!

Credits: Thanks to Dione and Badri for proofreading.

01:49

Music For Your Monday: Tame Impala’s “Dracula” [Whatever]

I heard an absolute banger of an earworm this past week, and have been listening to it nonstop ever since. I want to bestow upon y’all Tame Impala’s new song, “Dracula.”

If you had asked me a week ago if I liked Tame Impala, I would’ve said I was completely indifferent and couldn’t even name one of his songs. That is still true except for “Dracula.” This song is an absolute home run of a bop, and there’s even a remix version with JENNIE which is also very good. Here are both versions for your listening pleasure!

And the JENNIE version:

I have been debating which version I like better, and honestly it’s so hard to decide. I listen to both an equal amount, and both are great. Can’t go wrong with the original, but I love JENNIE’s ethereal voice and the harmonizing with Tame Impala.

My favorite part of the song is how they make “Dracula” rhyme with “spectacular.” Stellar stuff, really.

I hope you enjoy this bop, and that it helps you get movin’ and groovin’ through your next week!

-AMS

00:56

Tell Congress: Oppose the GUARD Act [EFF Action Center]

The GUARD Act may look like a child-safety bill, but in practice it’s a sweeping age-gating mandate that could apply to nearly every public-facing chatbot, from customer service tools to search assistants. It would require companies to collect sensitive identity data and chill online speech. The bill would also block teens from tools they rely on every day—as well as adults who cannot prove they are over 18.

EFF has long warned that age-verification laws undermine free expression, privacy, and competition. The GUARD Act is no different. It would make the internet less free, less private, and less accessible—while consolidating power in the largest tech companies and pushing smaller developers out.

There are real concerns about harms caused by AI systems, especially for young people. But the GUARD Act responds with a blunt, overbroad solution. Instead of addressing specific risks, it imposes sweeping restrictions that affect us all.

Congress should reject the GUARD Act and focus on policies that protect users without sacrificing privacy and access.

Tell your representatives to oppose the GUARD Act now.

00:00

Urgent: Public education V vouchers [Richard Stallman's Political Notes]

US citizens: call on your federal legislators in Congress to repeal federal school vouchers and protect public education.

See the instructions for how to sign this letter campaign without running any nonfree JavaScript code--not trivial, but not hard.

How Israel struck hospitals in Lebanon [Richard Stallman's Political Notes]

*Israel escalates attacks on medics in Lebanon with deadly "quadruple tap".*

Friendly fire info as terror, Kuwait [Richard Stallman's Political Notes]

A Kuwaiti-American journalist visiting Kuwait filmed footage of the mistaken shooting of an American F-15 and reported on it. Since then, Kuwait has arrested him, possibly for publishing that footage, or possibly for other journalism, under repressive new "terrorism" laws which can define journalism as "terrorism" under rather vague conditions.

Monday, 27 April

23:28

Dillo 3.3.0 released [OSnews]

Dillo is an amazing web browser for those of us who want a calmer, less flashy web browsing experience. Dillo also happens to be a very UNIX-y browser, and its latest release, 3.3.0, underlines that.

A new dilloc program is now available to control Dillo from the command line or from a script. It searches for Dillo by the PID in the DILLO_PID environment variable or for a unique Dillo process if not set.

↫ Dillo 3.3.0 release notes

You can use this program to control your Dillo instance, with basic commands like reloading the current URL, opening a new URL, and so on, but also things like dumping the current page’s contents. I have a feeling more commands and features will be added in future releases, but for now, even the current set of commands can be helpful for scripting purposes. I’m sure some of you who live and die in the terminal are already thinking of all the possibilities here.

You can now also add page actions to the right-click context menu, so you can do things like reload a page with a Chrome curl impersonator to avoid certain JavaScript walls. This, too, is of course extensible. Dillo 3.3.0 also brings experimental support for building the browser with FLTK 1.4, and implemented a fix specifically to make OAuth work properly.

22:42

Ubuntu is going to integrate “AI”, but Canonical remains vague about the how and why [OSnews]

Ubuntu, being one of the more commercial Linux distributions, was always going to jump on the “AI” bandwagon, and Jon Seager, Canonical’s VP Engineering, published a blog post with more details.

Throughout 2026 we’ll be working on enabling access to frontier AI for Ubuntu users in a way that is deliberate, secure, and aligned with our open source values. By focusing on the combination of education for our engineers, our existing knowledge of building resilient systems and our strengthening silicon partnerships, we will deliver efficient local inference, powerful accessibility features, and a context-aware OS that makes Ubuntu meaningfully more capable for the people who rely on it

Ubuntu is not becoming an AI product, but it can become stronger with thoughtful AI integration.

↫ Jon Seager at Ubuntu Discourse

The problem with this entire post is that, much like all other corporate communications about “AI”, it’s all deceptively vague, open-ended, and weasely. Adjectives like “focused”, “principled”, “thoughtful”, and “tasteful” don’t really mean anything, and leave everything open for basically every type of slop “AI” feature under the sun. Their claims about open weights and open source models are also weakened by words like “favour” and “where possible”, again leaving the door wide open for basically any shady “AI” company’s models and features to find their way into your default Ubuntu installation.

There’s also very little in terms of concrete plans and proposed features, leaving Ubuntu users in the dark about what, exactly, is going to be added to their operating system of choice during the remainder of the year. There are mentions of improved text-to-speech/speech-to-text and text regurgitators, but that’s about it. None of it feels particularly inspired or ground-breaking, and the veneer of open source, ethical model creation, and so on, is particularly thin this time around, even for Canonical.

I don’t really feel like I know a lot more about Canonical’s “AI” intentions for Ubuntu after reading this post than I did before, other than Ubuntu users might be able to generate text in their email client or whatever later this year. Is that really something anybody wants?

22:14

This Week’s Weird Sideswipe by Current Events [Whatever]

Hello to the FBI/Secret Service/NSA people now monitoring this account because apparently the attempted shooter liked a few of my posts in the last month, here's a picture of my cat to get you started

John Scalzi (@scalzi.com) 2026-04-26T18:50:39.094Z

Apparently it’s true: The fellow who came to the Correspondents’ Dinner the other night with a bunch of weapons (and who, it should be noted, came nowhere near the president or anyone else in the ballroom), liked four Bluesky posts of mine in the last month. Which ones? I have no idea, although a cursory view of my last month of Bluesky posts shows nothing particularly spicy in a political sense. This does not surprise me, as I usually send all my really spicy political takes to Threads. Most of the last month of Bluesky posts for me were about JoCo Cruise, whacking on “AI,” photos of cats and Krissy, and talking about writing. Maybe this dude liked cat pictures? He’s arrested now and his Bluesky account is down in any event. We may never know.

My feeling about this is pretty much the same feeling I have about being in the Epstein Files: What the fuck, it’s not great, and also, it doesn’t actually have much to do with me, I’m mostly being sideswiped by this weird damn moment we’re in. I certainly don’t condone attempting to kill the president. Any president, and also, this one in particular. Among other things that would take away the fun of watching him one day rotting in prison along with the rest of his corrupt and horrible family and administration. Keep him alive! For justice!

I’m joking here about being on a federal watch list now, but I should be clear I’m pretty sure I already have an FBI file, and also that this FBI file is really super boring, so anything relating to this will almost certainly be funneled into that. I recently did an FOIA request for my file, so I suppose I will find out soon enough. In the meantime I’ll just have to imagine.

I’ve been informed that some of the folks associated with the Sad Puppies are trying to make hay of my tangential association to this fellow, which, I guess, they would, loud bad logic has always been their MO. My first thought is that when you’re related to an actual successful presidential assassin, a failed one liking your social media posts is weak sauce. My second thought was, huh, the right-wing chudguzzlers are whining about me again, whenever they do that something nice happens with my career, wonder what it will be this time. And indeed, today I got a foreign language offer on one of my books, which I happily accepted. It’s correlation, not causation, to be sure. But it sure does correlate a lot. So keep it up, right-wing chudguzzlers! We’re having our back deck rebuilt, I could use a few more foreign sales. Thanks in advance for your help.

— JS

20:21

pip 26.1 released [LWN.net]

Version 26.1 of the pip package installer for Python has been released. Richard Si has published a blog post that looks at some of the highlights of 26.1 including dependency cooldowns, experimental support for pylock (pylock.toml) files, and resolver improvements that will move pip closer to the goal of removing its legacy resolver. The release also includes several security fixes and drops support for Python 3.9.

19:28

Microboned [Penny Arcade]

Discord used to be a tool that I leveraged to communicate with friends and erstwhile allies, but over the years it's increasingly become something like a car up on blocks in my front yard - something to tinker with, absent any prospect or expectation of continuous functionality. I have to constantly remind it that I don't want to use the speaker in my monitor. And mics? "Forget about it." I would say that this is an unforgivable sin but I know at least one other person who might actually prefer this state of affairs. Also, this really happened. So.

18:07

[$] The rest of the 7.1 merge window [LWN.net]

By the time Linus Torvalds released 7.1-rc1 and closed the 7.1 merge window, 12,996 non-merge changesets had been pulled into the mainline repository; just over 9,000 of those arrived after the first-half summary was written. These changes were more driver-oriented than those seen earlier, but still also included many new features across the kernel as a whole.

18:00

Looking at consequences of passing too few register parameters to a C function on various architectures [The Old New Thing]

In our exploration of calling conventions for various processors on Windows, we learned that in many cases, some of the parameters are passed in registers.

Suppose that there is a function that takes two parameters, but you know that the function ignores the second parameter if the first parameter is not positive. What happens if you call the function with just one parameter (say, passing zero)? The function should ignore the second parameter, so why does it matter that you didn’t pass one?

Even though the function doesn’t use the parameter, it still may decide to use the storage for that parameter as a conveniently provided scratch space. For example:

int blah(int a, int b)
{
    if (a <= 0) {
        int c = f1();
        f2(a);
        return c;
    } else {
        return f3(a, b);
    }
}

Is it okay to call blah with zero as its only parameter? You aren’t passing b, but the function doesn’t use b, so why does it matter?

Formally, the C and C++ languages say that if you call a function with the wrong number of parameters, the behavior is undefined, so officially, you’ve broken the rules and anything can happen.

But let’s look at what types of things could go wrong.

If you pass too few parameters on the stack, and it is a callee-clean calling convention, then the callee will clean too many bytes off the stack, resulting in stack imbalance and likely memory corruption.

Even if it’s not a callee-clean calling convention, the called function will think that the memory for the parameter is present, and it may use it as scratch space, resulting in memory corruption in the stack frame of the calling function.

In our example above, the compiler might realize, “Hey, I don’t need to allocate new memory for the variable c. I can just reuse the memory that holds the now-dead variable b.” In other words, it rewrites the function as

int blah(int a, int b)
{
    if (a <= 0) {
        b = f1();
        f2(a);
        return b;
    } else {
        return f3(a, b);
    }
}

Even though you didn’t reserve memory for the variable b, the compiler assumes that you did, and it will overwrite whatever happens to be at the location where that memory should have been.

But what if the parameters are passed in registers, and you didn’t pass enough of them?

On most processors, what happens is that the called function will try to use that register and read whatever uninitialized value happens to be lying in that register.

Except on Itanium.

One special Itanium quirk is the presence of the “Not a Thing” (NaT) bit, which is a bit attached to each general purpose register that indicates whether the register holds a valid value. The most common ways for a register to enter the NaT state are if it was the result of a failed speculative load, or if it was the result of a mathematical calculation where at least one of the inputs was itself NaT. Therefore, if your uninitialized output register happens to be a NaT left over from an earlier failed speculation, the called function might decide to spill the value onto the stack for safekeeping before using that register for something else.

extern bool is_valid(int);

int blah2(int a, int b)
{
    if (is_valid(a)) {
        return f3(a, &b);
    } else {
        return 0;
    }
}

The compiler realizes that it needs to take the address of b if a is not valid, so it has to spill the value to memory (so that it can have an address). But writing a NaT to memory raises a “NaT consumption” exception, so this function crashes even in the case where it never actually uses the b variable.

But wait, there’s more.

On Itanium, the function call mechanism is architectural rather than merely conventional. The calling function declares the number of output registers (registers that will be passed to the called function), and those registers are renumbered on entry to the called function so that they are visible starting at register r32. If a calling function says “I am passing 2 registers,” then the called function sees them as registers r32 and r33. I covered the details some time ago, but leaf functions are particularly interesting.

Leaf functions are functions that do not create a custom stack frame and simply make do with the architectural stack frame that the processor creates for them by default. And that default stack frame consists only of the inbound parameter registers. In the case of passing too few parameters to a function, that means that the default stack frame contains fewer registers than the function expects.

Architecturally, the rule is that if you read from a stacked register that lies outside the current frame, the results are “undefined”. I couldn’t find a formal definition of “undefined” in the Itanium documentation (though it’s eminently likely that I simply missed it), but I assume it means “can produce any result, including an exception, that is not dependent upon information outside the current processor execution mode.”¹ In particular, it can raise a processor exception, say, because the value of that stacked register happens to contain a leftover NaT.

The Itanium architecture takes an even stronger stance against writing a stack register that lies outside the current frame: It is required to raise an Illegal Operation fault.

I can imagine it being weird seeing an exception come out of a register-to-register move instruction.

So there you go, another case where the Itanium architecture more strictly enforces a programming rule, in this case, making sure that you pass the correct number of parameters to a function.

¹ This means that, for example, an “undefined” result in user-mode code cannot be dependent upon information available only to kernel mode.

The post Looking at consequences of passing too few register parameters to a C function on various architectures appeared first on The Old New Thing.

15:49

LibreLocal meetup in London, England, United Kingdom [Planet GNU]

May 16, 2026 at 12:00 BST (11:00 UTC).

Feeds

Feed  RSS  Last fetched  Next fetched after
@ASmartBear XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
a bag of four grapes XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
Ansible XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
Bad Science XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
Black Doggerel XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
Blog - Official site of Stephen Fry XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
Charlie Brooker | The Guardian XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
Charlie's Diary XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
Chasing the Sunset - Comics Only XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
Coding Horror XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
Comics Archive - Spinnyverse XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
Cory Doctorow's craphound.com XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
Cory Doctorow, Author at Boing Boing XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
Ctrl+Alt+Del Comic XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
Cyberunions XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
David Mitchell | The Guardian XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
Deeplinks XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
Diesel Sweeties webcomic by rstevens XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
Dilbert XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
Dork Tower XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
Economics from the Top Down XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
Edmund Finney's Quest to Find the Meaning of Life XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
EFF Action Center XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
Enspiral Tales - Medium XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
Events XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
Falkvinge on Liberty XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
Flipside XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
Flipside XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
Free software jobs XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
Full Frontal Nerdity by Aaron Williams XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
General Protection Fault: Comic Updates XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
George Monbiot XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
Girl Genius XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
Groklaw XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
Grrl Power XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
Hackney Anarchist Group XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
Hackney Solidarity Network XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
http://blog.llvm.org/feeds/posts/default XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
http://calendar.google.com/calendar/feeds/q7s5o02sj8hcam52hutbcofoo4%40group.calendar.google.com/public/basic XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
http://dynamic.boingboing.net/cgi-bin/mt/mt-cp.cgi?__mode=feed&_type=posts&blog_id=1&id=1 XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
http://eng.anarchoblogs.org/feed/atom/ XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
http://feed43.com/3874015735218037.xml XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
http://flatearthnews.net/flatearthnews.net/blogfeed XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
http://fulltextrssfeed.com/ XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
http://london.indymedia.org/articles.rss XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
http://pipes.yahoo.com/pipes/pipe.run?_id=ad0530218c055aa302f7e0e84d5d6515&amp;_render=rss XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
http://planet.gridpp.ac.uk/atom.xml XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
http://shirky.com/weblog/feed/atom/ XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
http://thecommune.co.uk/feed/ XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
http://theness.com/roguesgallery/feed/ XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
http://www.airshipentertainment.com/buck/buckcomic/buck.rss XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
http://www.airshipentertainment.com/growf/growfcomic/growf.rss XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
http://www.airshipentertainment.com/myth/mythcomic/myth.rss XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
http://www.baen.com/baenebooks XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
http://www.feedsapi.com/makefulltextfeed.php?url=http%3A%2F%2Fwww.somethingpositive.net%2Fsp.xml&what=auto&key=&max=7&links=preserve&exc=&privacy=I+accept XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
http://www.godhatesastronauts.com/feed/ XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
http://www.tinycat.co.uk/feed/ XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
https://anarchism.pageabode.com/blogs/anarcho/feed/ XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
https://broodhollow.krisstraub.com/feed/ XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
https://debian-administration.org/atom.xml XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
https://elitetheatre.org/ XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
https://feeds.feedburner.com/Starslip XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
https://feeds2.feedburner.com/GeekEtiquette?format=xml XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
https://hackbloc.org/rss.xml XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
https://kajafoglio.livejournal.com/data/atom/ XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
https://philfoglio.livejournal.com/data/atom/ XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
https://pixietrixcomix.com/eerie-cuties/comic.rss XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
https://pixietrixcomix.com/menage-a-3/comic.rss XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
https://propertyistheft.wordpress.com/feed/ XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
https://requiem.seraph-inn.com/updates.rss XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
https://studiofoglio.livejournal.com/data/atom/ XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
https://thecommandline.net/feed/ XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
https://torrentfreak.com/subscriptions/ XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
https://web.randi.org/?format=feed&type=rss XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
https://www.dcscience.net/feed/medium.co XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
https://www.DropCatch.com/domain/steampunkmagazine.com XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
https://www.DropCatch.com/domain/ubuntuweblogs.org XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
https://www.DropCatch.com/redirect/?domain=DyingAlone.net XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
https://www.freedompress.org.uk:443/news/feed/ XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
https://www.goblinscomic.com/category/comics/feed/ XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
https://www.loomio.com/blog/feed/ XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
https://www.newstatesman.com/feeds/blogs/laurie-penny.rss XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
https://www.patreon.com/graveyardgreg/posts/comic.rss XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
https://www.rightmove.co.uk/rss/property-for-sale/find.html?locationIdentifier=REGION^876&maxPrice=240000&minBedrooms=2&displayPropertyType=houses&oldDisplayPropertyType=houses&primaryDisplayPropertyType=houses&oldPrimaryDisplayPropertyType=houses&numberOfPropertiesPerPage=24 XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
https://x.com/statuses/user_timeline/22724360.rss XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
Humble Bundle Blog XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
I, Cringely XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
Irregular Webcomic! XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
Joel on Software XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
Judith Proctor's Journal XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
Krebs on Security XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
Lambda the Ultimate - Programming Languages Weblog XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
Looking For Group XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
LWN.net XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
Mimi and Eunice XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
Neil Gaiman's Journal XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
Nina Paley XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
O Abnormal – Scifi/Fantasy Artist XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
Oglaf! -- Comics. Often dirty. XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
Oh Joy Sex Toy XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
Order of the Stick XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
Original Fiction Archives - Reactor XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
OSnews XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
Paul Graham: Unofficial RSS Feed XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
Penny Arcade XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
Penny Red XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
PHD Comics XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
Phil's blog XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
Planet Debian XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
Planet GNU XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
Planet Lisp XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
Pluralistic: Daily links from Cory Doctorow XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
PS238 by Aaron Williams XML 11:07, Sunday, 03 May 11:55, Sunday, 03 May
QC RSS XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
Radar XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
RevK®'s ramblings XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
Richard Stallman's Political Notes XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
Scenes From A Multiverse XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
Schneier on Security XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
SCHNEWS.ORG.UK XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
Scripting News XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
Seth's Blog XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
Skin Horse XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
Tales From the Riverbank XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
The Adventures of Dr. McNinja XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
The Bumpycat sat on the mat XML 11:28, Sunday, 03 May 12:08, Sunday, 03 May
The Daily WTF XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
The Monochrome Mob XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
The Non-Adventures of Wonderella XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
The Old New Thing XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
The Open Source Grid Engine Blog XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
The Stranger XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
towerhamletsalarm XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
Twokinds XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
UK Indymedia Features XML 11:28, Sunday, 03 May 12:10, Sunday, 03 May
Uploads from ne11y XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
Uploads from piasladic XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May
Use Sword on Monster XML 10:56, Sunday, 03 May 11:43, Sunday, 03 May
Wayward Sons: Legends - Sci-Fi Full Page Webcomic - Updates Daily XML 10:56, Sunday, 03 May 11:42, Sunday, 03 May
what if? XML 10:49, Sunday, 03 May 11:30, Sunday, 03 May
Whatever XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
Whitechapel Anarchist Group XML 10:49, Sunday, 03 May 11:38, Sunday, 03 May
WIL WHEATON dot NET XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
wish XML 10:56, Sunday, 03 May 11:41, Sunday, 03 May
Writing the Bright Fantastic XML 10:56, Sunday, 03 May 11:40, Sunday, 03 May
xkcd.com XML 10:49, Sunday, 03 May 11:32, Sunday, 03 May