Wednesday, 21 August



Link [Scripting News]

Last night I read on Twitter that Ethan Zuckerman had resigned as part of a developing scandal with the Media Lab and the notorious child molester Jeffrey Epstein. My first reaction was this is a mistake. Wait until more is known about what happened. I think it would be more courageous to stay in his position, and help the organization deal with the crisis. And he is staying through the academic year, according to his post. So I guess he's there for another year? A lot will change in that year. It was a very dramatic few hours last night. Twitter feasts on those moments. But there are people's lives, education and careers involved. Slow down and figure out what happened.

Link [Scripting News]

Background: Zuckerman is a former colleague at Berkman Center in the early 2000s. His work and mine were related to blogging as a civic act, both our projects were very successful. I also know Joi Ito from Silicon Valley and he was a frequent contributor at BloggerCon. I visited both of them at the Media Lab in 2016. I wrote up my thoughts from that visit (of course) in a blog post.

Link [Scripting News]

I don't know maybe it's just me, but I think "proclaims himself the second coming of God" might be grounds for impeachment.

Link [Scripting News]

Reading this story, for me, is what it must have felt like to be a dinosaur having its bones discovered by a curious archaeologist, before there was archaeology.


Anthropodermic bibliopegy: the grotesque history of books bound in human skin [Cory Doctorow – Boing Boing]

On the Under the Knife show, Dr Lindsey Fitzharris elucidates the weird history of "anthropodermic bibliopegy," the practice of binding books in human skin, including the doctor who bound case histories in the skins of his dead patients, and the murderer who asked that, after his execution, his biography be bound in his skin and presented to the lawman who caught him. Other common ways to procure human skins for the practice included grave-robbing (Andrea wrote about the Burke and Hare editions back in 2016). (Thanks, Allen)


Link [Scripting News]

An update to the software that runs this site: sometimes when you'd click on the Sign on button in the upper left corner of the home page you'd get taken to a bogus page. Then you'd try it again and it'd work. Well, I think I've got it now so that it always takes you to the right place.


The world's largest occult library has a public online archive [Cory Doctorow – Boing Boing]

Amsterdam's Bibliotheca Philosophica Hermetica (AKA "The Ritman Library") houses more than 25,000 occult texts, covering "Hermetics, Rosicrucians, Theosophy, alchemy, mysticism, Gnosis and Western Esotericism, Sufism, Kabbalah, Anthroposophy, Catharism, Freemasonry, Manichaeism, Judaica, the Grail, Esotericism, and comparative religion."

The library has begun to scan and post its core collection to an online archive called The Hermetically Open Archive. The project was underwritten by Dan Brown in thanks for the library's contributions to his books "The Lost Symbol" and "Inferno" (the library houses the first illustrated edition of Dante's "Divine Comedy," from 1472).

Though the scans are all in the public domain, the library uses Javascript tricks to try to block scraping; according to Maika at Haute Macabre, there are plans to enable downloading in the future.

Haute Macabre has assembled a kind of highlight reel of the collection, which has some gorgeous illustrated texts in it.

Hermetically Open [Ritman Library]

Bury Us Beneath Occult Books: The Ritman Library Digitized [Maika/Haute Macabre]

My MMT Podcast appearance, part 2: monopoly, money, and the power of narrative [Cory Doctorow – Boing Boing]

Last week, the Modern Monetary Theory Podcast ran part 1 of my interview with co-host Christian Reilly; they've just published the second and final half of our chat (MP3), where we talk about the link between corruption and monopoly, how to pitch monetary theory to people who want to abolish money altogether, and how stories shape the future.

If you're new to MMT, here's my brief summary of its underlying premises: "Governments spend money into existence and tax it out of existence, and government deficit spending is only inflationary if it's bidding against the private sector for goods or services, which means that the government could guarantee every unemployed person a job (say, working on the Green New Deal), and which also means that every unemployed person and every unfilled social services role is a political choice, not an economic necessity."

Debullshitifying Trump's get-out-of-jail statement for Medicaid scammer Ted Suhl [Cory Doctorow – Boing Boing]

Ted Suhl was serving his third year of a seven-year sentence for bribery and Medicaid fraud when Trump commuted his sentence and sprung him, at the request of Mike Huckabee; the White House released a statement explaining Trump's reasoning for the commutation, and, as Propublica documents, it is full of "half-truths and omissions" about Trump's new grifter pal. Propublica did the hard work of annotating the Trump statement to remove the bullshit and tell the true tale.

"The Stab": a forgotten nearly-was Haunted Mansion changing portrait [Cory Doctorow – Boing Boing]

The queue area at the Haunted Mansion at Disneyland features a row of changing portraits wherein paintings of everyday scenes are revealed as sinister and haunted (originally the effect was done with crossfading slide-projectors; now it's done with an amazing, crisp electroluminescent effect).

There were a lot of potential gags designed for the hallway (for example, the miser who spontaneously combusts!), and while the paintings the Imagineers settled on are part of the best queue in theme-park history, I can't help but wish a few of those nearly-was gags had made it into the ride.

One example is "The Stab," based on a well-known Currier and Ives print, which Imagineer Marc Davis reimagined as a murder scene. As the Long Forgotten Blog writes: "So it's another example of Marc riffing off of a known image, in this case wickedly reading murderous intent in this dear lady's so seemingly innocent eyes. You know, her face does seem to me to have an utter blankness about it, despite the Mona Lisa smile, that allows the viewer to imagine virtually any thought lurking behind it."

Long Forgotten also mentions that my friend (and sometime Boing Boing contributor, and former Imagineering colleague) Chris Merritt is just wrapping up an astounding, two-volume history of Marc Davis that comes out on Labor Day. Chris was Davis's protege, and the rarities, never-seen sketches, and insider dope he has on Davis are absolutely mind-blowing. I've pre-ordered my copy: it's a $105, slipcased, two-volume hardcover set.

The thing that probably inspired Davis, however, is the ambiguity of the object the woman is holding in her right hand. It's probably a folded fan, but it could be a knife, right? And she's sorta holding it where he can't see it, right? All of a sudden the picture is funny, if you have a macabre sense of humor.

Nine Years [Long Forgotten]


Joey Hess: releasing two haskell libraries in one day: libmodbus and git-lfs [Planet Debian]

The first library is a libmodbus binding in haskell.

There are a couple of other haskell modbus libraries, but none that support serial communication out of the box. I've been using a python library to talk to my solar charge controller, but it is not great at dealing with the slightly flakey interface. The libmodbus C library has features that make it more robust, and it also supports fast batched reads.

So a haskell interface to it seemed worth starting while I was doing laundry, and then for some reason it seemed worth writing a whole bunch more FFIs that I may never use, so it covers libmodbus fairly extensively. 660 lines of code all told.

Writing a good binding to a C library has art to it. I've seen ones that are so close you feel you're writing C and not haskell. On the other hand, some are so far removed from the underlying library that its documentation does not carry over at all.

I tried to strike a balance. Same function names so the extensive libmodbus documentation is easy to refer to while using it, but plenty of haskell data types so you won't mix up the parity with the stop bits.

And while it uses a mutable vector under the hood as the buffer for the FFI interface, so it can be just as fast as the C library, I also made functions for reading stuff like registers and coils be polymorphic so easier data types can be used at the expense of a bit of extra allocation.

The big win in this haskell binding is that you can leverage all the nice haskell libraries for dealing with binary data to parse the modbus data, rather than the ad-hoc integer and float conversion stuff from the C library.

For example, the Epever solar charge controller has its own slightly nonstandard way to represent 16 bit and 32 bit floats. Using the binary library to parse its registers in applicative style came out quite nice:

data Epever = Epever
    { pv_array_voltage :: Float
    , pv_array_current :: Float
    , pv_array_power :: Float
    , battery_voltage :: Float
    } deriving (Show)

getEpever :: Get Epever
getEpever = Epever
    <$> epeverfloat  -- register 0x3100
    <*> epeverfloat  -- register 0x3101
    <*> epeverfloat2 -- register 0x3102 (low) and 0x3103 (high)
    <*> epeverfloat  -- register 0x3104
  where
    epeverfloat = decimals 2 <$> getWord16host
    epeverfloat2 = do
        l <- getWord16host
        h <- getWord16host
        return (decimals 2 (fromIntegral l + fromIntegral h * 2^16 :: Integer))
    decimals n v = fromIntegral v / (10^n)

The second library is a git-lfs implementation in pure Haskell.

Emphasis on the pure -- there is not a scrap of IO code in this library, just 400+ lines of data types, parsing, and serialization.

I wrote it a couple weeks ago so git-annex can store files in a git-lfs remote. I've also used it as a git-lfs server, mostly while exploring interesting edge cases of git-lfs.

This work was sponsored by Jake Vosloo on Patreon.


Today in GPF History for Wednesday, August 21, 2019 [General Protection Fault: The Comic Strip]

As Trish, Patty, and Dex leave the Renaissance fair, they run into an unexpected familiar face...


Link [Scripting News]

Braintrust query: Is there a way in the Chrome debugger to set a breakpoint when the value of a global changes? myGlobals.val is initialized when the app starts, but when it's used it has a different value. I want to break at the line that changes its value.


The SuperH-3, part 13: Misaligned data, and converting between signed vs unsigned values [The Old New Thing]

When going through compiler-generated assembly language, there are some patterns you’ll see over and over again. Note that the code you see may not look exactly like this due to compiler instruction scheduling. In particular, the sequences for misaligned memory access may bring additional registers into play in order to avoid register dependencies.

First is the unsigned memory access. Bytes and words loaded from memory are sign-extended by default. If you want to load an unsigned value, you need to perform an explicit zero-extension.

    ; load unsigned byte from address in r0
    MOV.B   @r0, r1         ; loads sign-extended byte
    EXTU.B  r1, r1          ; zero-extend the byte to a longword

    ; load unsigned word from address in r0
    MOV.W   @r0, r1         ; loads sign-extended word
    EXTU.W  r1, r1          ; zero-extend the word to a longword

Next up is misaligned data. The SH-3 does not support unaligned memory access. Not only that, but the kernel doesn’t even emulate unaligned memory access. If you access memory from a misaligned address, you take an access violation and your process crashes. So don’t mess up!

There are no special instructions for accessing misaligned data. You are on your own to take individual bytes and combine them into the desired final value, or to take the starting value and decompose it into bytes.

    ; store 16-bit value in r1 to possibly unaligned address in r0
    ; destroys r1
    ;                           r1      @r0
    ;                         xxxxAABB  xx xx
    MOV.B   r1, @r0         ; xxxxAABB  BB xx
    SHLR8   r1              ; 00xxxxAA  BB xx
    MOV.B   r1, @(1, r0)    ; 00xxxxAA  BB AA

    ; store 32-bit value in r1 to possibly unaligned address in r0
    ; destroys r1
    ;                           r1      @r0
    ;                         AABBCCDD  xx xx xx xx
    MOV.B   r1, @r0         ; AABBCCDD  DD xx xx xx
    SHLR8   r1              ; 00AABBCC  DD xx xx xx
    MOV.B   r1, @(1, r0)    ; 00AABBCC  DD CC xx xx
    SHLR8   r1              ; 0000AABB  DD CC xx xx
    MOV.B   r1, @(2, r0)    ; 0000AABB  DD CC BB xx
    SHLR8   r1              ; 000000AA  DD CC BB xx
    MOV.B   r1, @(3, r0)    ; 000000AA  DD CC BB AA

    ; read 16-bit value from possibly unaligned address in r0
    ;                           r1      r2        @r0
    ;                         xxxxxxxx  xxxxxxxx  BB AA
    MOV.B   @(1, r0), r1    ; SSSSSSAA  xxxxxxxx
    SHLL8   r1              ; SSSSAA00  xxxxxxxx
    MOV.B   @r0, r2         ; SSSSAA00  SSSSSSBB
    EXTU.B  r2, r2          ; SSSSAA00  000000BB
    OR      r1, r2          ; SSSSAA00  SSSSAABB
                            ; r2 contains signed 16-bit value
    EXTU.W  r2, r2          ; SSSSAA00  0000AABB
                            ; r2 contains unsigned 16-bit value

    ; read 32-bit value from possibly unaligned address in r0
    ;                           r1      r2        @r0
    ;                         xxxxxxxx  xxxxxxxx  DD CC BB AA
    MOV.B   @(3, r0), r1    ; SSSSSSAA  xxxxxxxx
    SHLL8   r1              ; SSSSAA00  xxxxxxxx
    MOV.B   @(2, r0), r2    ; SSSSAA00  SSSSSSBB
    EXTU.B  r2, r2          ; SSSSAA00  000000BB
    OR      r2, r1          ; SSSSAABB  000000BB
    SHLL8   r1              ; SSAABB00  000000BB
    MOV.B   @(1, r0), r2    ; SSAABB00  SSSSSSCC
    EXTU.B  r2, r2          ; SSAABB00  000000CC
    OR      r2, r1          ; SSAABBCC  000000CC
    SHLL8   r1              ; AABBCC00  000000CC
    MOV.B   @r0, r2         ; AABBCC00  SSSSSSDD
    EXTU.B  r2, r2          ; AABBCC00  000000DD
    OR      r1, r2          ; AABBCC00  AABBCCDD

Less often, you will see code that sign-extends a 32-bit value to a 64-bit value.

    ; sign-extend 32-bit value in r0 to 64-bit value in r1:r0
    MOV     r0, r1          ; copy value to r1
    SHLL    r1              ; T contains high bit of value
    SUBC    r1, r1          ; if T=0, then r1 = 00000000
                            ; if T=1, then r1 = FFFFFFFF

If you happen to have the value 0 lying around in a register, you could accomplish the task in two instructions:

    ; sign-extend 32-bit value in r0 to 64-bit value in r1:r0
    ; assumes that r2 already contains the value zero
    CMP/GT  r0, r2          ; T = (0 > r0)
                            ; in other words, T=0 if r0 is positive or zero
                            ;                 T=1 if r0 is negative
    SUBC    r1, r1          ; if T=0, then r1 = 00000000
                            ; if T=1, then r1 = FFFFFFFF

That is just code golf on my part. I haven’t seen the compiler use this trick, or the next one.

    ; sign-extend 32-bit value in r0 to 64-bit value in r1:r0
    ; preserves flags
    ROTCL   r0              ; rotate r0 left, copying high bit into T
                            ; and saving old T in low bit of r0
    SUBC    r1, r1          ; if T=0, then r1 = 00000000, T stays 0
                            ; if T=1, then r1 = FFFFFFFF, T stays 1
    ROTCR   r0              ; rotate r0 right to restore original value
                            ; and recover original value of T

In general, you’ll see that SH-3 assembly code is somewhat verbose, even more so because compiler technology back in this time period was not as advanced as it is today, but you have to realize that each of these instructions is only half the size of the instructions of its RISC-style contemporaries, so even though you plowed through 2000 instructions, that’s only 4KB of code.

Okay, next time, we’re returning to reality and looking at function call patterns.


The post The SuperH-3, part 13: Misaligned data, and converting between signed vs unsigned values appeared first on The Old New Thing.


ESP32 progress [RevK®'s rants]

I am, of course, reinventing wheels, yet again with my move to ESP32 for IoT and access control stuff.

It is going well - I am actually on holiday in Greece, but whilst my wife and daughter sit on sun beds, I have been messing with code. Yes, I took a laptop, programming cable, some ESP32 boards, and even a hand held oscilloscope on holiday with me. Sorry. I spent 4 hours on the plane reading the data sheet on my iPad!

My plan is to get my ESP32 development environment and toolkit working like I had on the ESP8266, and build up my "Solar System" alarm and door control module code.

In some ways this is a nice project - it is often good to rework and redesign something for the second time as you have learned lessons and can do things better. It is also nice that my code effectively has a specification to work to. The previous code was being specified, designed, and coded all at once. But now I have a system which works with the ESP8266 modules and should work just the same with the ESP32 modules.


I am making good progress. I have the build environment on Mac and linux working. The instructions were very clear. I am actually ssh'd to my linux box to develop now. I have the basic support system which allows me to connect to the WiFi, and MQTT server, and do over the air s/w updates, and store settings in non volatile storage. These are all based on the ESP-IDF tools, which seem quite good.

I have hit a few snags - for example I cannot do certificate pinning on the TLS using the SHA1 hash of the certificate, it seems. I have to include the PEM format certificate itself. This seems needlessly complicated, but not a big change.

I have a snag with MQTT that it seems not to pass on messages with zero length data for some reason. I may have to work that out and do a pull request to the library.

I also have to restructure the design in some cases - for example, previously I did the settings in a block of memory that was flashed. Now I am using the provided NVS library for settings. It seems good as it does not have to re-flash a block on every change - it incrementally updates a log of settings in flash as it goes. Quite sensible really. However, it seems to lack a means to enumerate the settings that are there - I have to ask for a setting by name. As a result I have changed my code structure to register settings from the application with my library. This has meant a redesign, but as always it makes it much better in the long run.

I should be able to sort the basic GPIO (inputs and outputs) really easily, and hence use almost all of my existing door control code with that. It looks like I can compile my linux version of DESFire stuff for NFC too, so no need for an ESP specific version (it just needs ESP specific AES calls).

There will be some challenges with the PN532 library and VL53L0X library I expect. But they are not rocket science, thankfully. The Honeywell Galaxy RS485 bus should just work if I can work out the timer interrupt logic. Sadly each of these is simple in theory but getting my head around the specific way of working for the ESP32 takes an unknown amount of time. Interrupts were a challenge on the ESP8266!

PCB design

This had me kicking myself the night before we left on holiday. I have made up several ESP32 based PCBs: a general break out board, a door controller board, and several Galaxy keypad boards. I had checked I could load code on the door controller. It worked.

But I realised, on reading more of the data sheet, that 6 of the pins on the bottom of the module (ESP32-WROOM-32) are connected to the internal flash and so unusable. Why even have them come out of the module then? They are literally in the way of tracking the usable pins out of the chip. I had the same issue on the ESP-12F and taped over the pads. The ESP-12S sensibly does not have them, but then has a GND pad you have to tape over if you have tracks under it. The ESP32-WROOM-32 has both unusable pins on the end and a GND pad in the way. Arrrg! Even worse, it is only 6 pins of the 10 at the end that you cannot use, so it is not actually easy to tape over just the 6 and still sensibly solder to the others. I had read a web site on "safe" GPIO pins, but that documented the bare ESP32 chip, not the ESP32-WROOM-32 module, and on the bare chip pins 6-11 are "safe" to use as GPIO. There are other pins that are iffy - i.e. they need to be high or low at boot. Those I knew about.

The end result is that the keypad boards I made would simply not work! So I was left with three boards in total to take on holiday. The door controller would not "work" for the GPIO pins, but they are not connected to anything that would stop the chip actually working. So usable for developing code. I have managed to trash one board already with a flash / secure boot mix up!

The first thing I did when I got here was re-work my designs on the basis of taping all 10 connectors on the end of the chip. If I had not, it would have been bugging me all holiday. I had to be a bit creative in some cases, but I managed to redesign both boards. I'll mill them when I get back (I don't think my wife would have let me bring the milling machine on holiday for some reason).

Door controller PCB


Security with any IoT stuff is important, and reading the data sheets on the ESP32 and using the build environment, I can see that Espressif have done their homework. Nothing is 100% secure, but they are really trying hard here - which is good news as a lot of IoT stuff is totally crap!

The chip has a secure boot mode. This means the first stage bootloader in ROM checks the signature on the bootloader in flash before running it. The second stage bootloader checks the signature on the application. Even an OTA (over the air) s/w update checks the signature. This means you cannot load wrong code onto the chip!

It also supports encrypted flash. This means that I can have the code, and all my settings (like login details to the MQTT server, and WiFi) securely stored in the flash. Even pulling it off the board won't get you the data.

It has a set of eFuses, which are built in to the chip. They can only be "blown", so are bits you cannot unset once set. These include bits that stop you reading the eFuses, but they are wired internally in the chip hardware allowing it to encrypt and decrypt the flash. You can even set a fuse to disable the JTAG debug port. This means you can really lock it down so that you cannot get credentials out of the chip or the flash. It might not stop someone de-capping the chip package and using a microscope to read the fuses, but it will stop most conceivable attacks.

They have built in AES, SHA, BIGNUM (RSA) and TRNG hardware as well, and the API makes using https and TLS simple and encouraged.

In short, security seems to have been done very well, and thought about at the outset in the hardware and software design. This is almost unheard of in the IoT world, which is generally pretty shit at security. Well done Espressif.


Once I have this starting point - code and PCBs - and it is working, then (apart from updating my existing door controllers), I have to think of other uses for these chips. They are packed with stuff - including a dedicated separate ultra low power processor. The opportunities for very low power devices are interesting. Even just the bluetooth looks fun - I could make it so that door entry using a key fob needs to be able to see my phone in range, for example. I am sure there are a lot of opportunities!

They have some interesting direct AP-less WiFi modes in the API as well, and some low rate long range (e.g. 1km) WiFi functions too. There is even a MESH WiFi system. This all makes for some very interesting possibilities to make systems that do not rely on infrastructure such as access points.

I still have not made my environmental monitoring device yet, so that may well be the next project after this.


Link [Scripting News]

GEICO has an incredible spot running now. A group of office workers gather around a mobile phone playing a GEICO app with a virtual version of the gecko, on the backdrop of the actual desk. Then the real gecko shows up, waving his hand and saying in his British accent "Hey I'm real." The humans in the office laugh. "He thinks he's real." It's funny on three levels which makes it even funnier.

If Trump were a reality show [Scripting News]

If Trump were an actual reality show instead of a fake one, a group of expat neo-Nazi Danes would form a government-in-exile (offices in Trump Tower in NYC) and would do a deal with the US to sell Greenland for very little money. Remember, they are the reality show version, so they need the exposure. Win-win. Their Instagram influencer channel goes crazy, and orders for Danish jack boots go viral.

Trump declares war on the fake (in reality TV world, but actual in real world) government of Denmark. US war ships blockade Copenhagen. There is a crisis in the UN Security Council (the reality TV version). Interesting confluence, at the exact same moment the real UN Security Council is meeting about how to re-exert its dominance. Meanwhile reality TV governments-in-exile form on Instagram for the UK, China, Russia and the Philippines. Ratings soar.

Trump announces his Christmas Special will take place at the Tivoli Gardens amusement park in Copenhagen.


[$] Making containers safer []

On day one of the Linux Security Summit North America (LSS-NA), Stéphane Graber and Christian Brauner gave a presentation on the current state and the future of container security. They both work for Canonical on the LXD project; Graber is the project lead and Brauner is the maintainer. They looked at the different kernel mechanisms that can be used to make containers more secure and provided some recommendations based on what they have learned along the way.

Where to catch me at Burning Man! [Cory Doctorow – Boing Boing]

This is my last day at my desk until Labor Day: tomorrow, we're driving to Burning Man to get our annual dirtrave fix! If you're heading to the playa, here's three places and times you can find me:

1. The Liminal Labs Couch Chat, Weds, 12 noon, at Camp Liminal Labs (8:15 and Center Camp). Liminal Labs is my camp (celebrating its 21st year!), and every year, we put on a public lecture. This year, I'm hosting Andrew "bunnie" Huang (previously), the legendary hardware hacker and entrepreneur whom EFF is representing in a lawsuit to overturn part of the DMCA to make it legal to bypass DRM.

Seating is always limited at these things (our living room is big, but it's not that big!) so come by early!

2. Center Camp Speaker Series, Weds, 3PM, at Center Camp Cafe. I'm doing a solo talk on the Center Camp stage again, about Big Tech, competition, and corruption.

3. Palenque Norte at Camp Soft Landings, Fri, 1PM, 8:30 and E. I'll be talking about "Surveillance Capitalism" and our "epistemological crisis," and how we can choose to fix Big Tech, or the internet, but not both.

I hope you have an amazing burn -- we always do! This year I'm taking a break from working in the cafe pulling shots in favor of my first-ever Greeter shift, which I'm really looking forward to.

While we're on the subject, there's still time to sign up for the Liminal Labs Assassination Game!



The Big Idea: Susan Forest [Whatever]

In today’s Big Idea, Susan Forest is an author with a mission — a real-world mission that reveals itself in a fantastical way in her novel Bursts of Fire.


I love traditional fantasy. I love the magic and the quest. I love the perils, the monsters, and the politics. So when I set out to write Bursts of Fire, I wanted to create a clever, light-hearted heist romp fantasy.

Ever have a story grow?


Ideas beget ideas prolifically, and the story became an epic political and family saga. When I spoke with Laksa Media about publishing my novel(s), the topic of the central driving idea came up. Laksa Media’s mission is to bring social issues to light for examination and discussion through fiction. When Publisher Lucas K. Law asked me what social cause my series addressed, I looked at him for a moment and blinked.

But I immediately knew the answer to his question. Addictions. Because the subconscious plays a role in story creation, even when the conscious mind is unaware of its influence, the subtext was there all along. It only took his question to make me see it—The Big Idea.

Why addictions? For me, the answer is clear. There is no family that has not been touched by addictions, including mine.

Take James (not his real name), a former boyfriend.

I met James when I was a newly-single mom breaking into my first semi-professional acting role. James was a pro, and if not handsome, he was charismatic, highly respected, and talented. He was also an alcoholic. And he was attracted to me—of course, at a time when I was feeling most unattractive and vulnerable.

James only stayed in my life for about six months, and those six months were wild. I was cautioned about him early on, but he made me feel like the only woman on earth. He also sent me away in tears. I knew he would never become a permanent part of my life, but I think the defining break came for me when my sister-in-law, a psychologist, warned me never to leave him alone with my children.

The idea of addictions has always fascinated and terrified me, which may be one reason why my subconscious had built this theme into the story when I wasn’t looking. But the topic dovetails perfectly with Laksa Media’s mission; it’s an issue of mental health hidden in shame, and needs to be brought into the open. Through Laksa’s editorial input, deeper research, and further drafts, Bursts of Fire—and the entire Addicted to Heaven series—deepened in complexity and richness.

Alcohol and drug addiction are little-understood forms of mental illness; for centuries, the stigma of the illness has colored research and treatment, leading to medical and health myths. Heredity, environment, social structures, economics, and politics all appear to have roles in the development and persistence of the malady.

Those with genetic conditions, such as untreated ADHD (which runs in my family), are statistically more likely to succumb. So are people with early trauma. Exposure to substances, particularly at a young age, seems to be a factor, and people diagnosed with addictions have measurable changes in their brain structure and chemistry.

Cultures that pair alcohol with masculinity, or that have endured cultural trauma such as having children removed en masse from families, have higher rates of illness. Drug policy has been racially and economically influenced, from the British opium wars against China to the differing criminal penalties for cocaine and crack.

But, much evidence is correlational, and causality is complex.

Treatment has ranged from exorcism, imprisonment, and forms of chemical intervention, to abstinence through willpower or religion. Harm reduction has shown promise, but it remains highly controversial. And medical care models built on for-profit frameworks are not necessarily grounded in the best scientific research.

Hope for better treatments comes from our willingness to see addictions as chronic illnesses like diabetes, which can be treated through lifestyle and dietary adjustments or through medication. If addiction can be separated from its clinging stigma, perhaps we can create a similar range of individualized therapies with greater effectiveness.

The bottom line is we simply don’t have all the answers. But a seven-book series gives me the scope to explore a wide range of issues dealing with substance use.

The first book of the series, Bursts of Fire, releases three high-born, magical sisters into a world of abrupt change, to rely on their wits and unravel the mystery of a mad king’s inexplicable attack on their home. It abounds with breathless excitement, but it also deals with the first tastes of addictive spells.

Book Two, Flights of Marigolds, takes the adventure and politics higher with a rebel defeat and pursuit of a McGuffin, but it also takes the social issues deeper by examining the question of enabling and co-dependence. Later books in the series will introduce a pair of mischievous con men and a clever cat burglar and also address factors that make some individuals and groups more at risk for substance abuse than others.

The series considers not only adverse consequences, but also the joys and delights of substance use. Drugs and alcohol have been used throughout history and across cultures for celebration, rituals, social affiliation, pleasure, inspiration, and spiritual connection. Carl Hart, a professor of neuroscience and psychology at Columbia University, notes, “If the vast majority of people who use any drug do not become addicted, you can’t blame the drug for addiction. The reality is, the overwhelming amount of drug use that occurs in a society is positive; not only positive, but life enhancing.”

So, yes: Bursts of Fire (and its ensuing sequels) is many things: a political epic, a family saga, a heist romp, a magical fantasy. It is also a garden of exploration for big ideas. Bring on the delights of fantasy novels. Bring on the thieves, kings, and magic users. Bring on the hidden social issues. Big ideas can grow from stories of rollicking adventure. And one can also have thoughtful content within a book or a series that is fun to read.


Bursts of Fire: Amazon|Barnes & Noble|Indiebound|Powell’s

Read an excerpt. Visit the author’s site. Follow her on Twitter.



Seonag and the Seawolves [Original Fiction –]

A clan storyteller unfolds the tale of Seonag and the wolves, and the wolves and the waves.



I know you’ve heard the story of An Duine Aonarach, who one day walked into the sea and never returned. And likely you have at least heard of Seonag as well, who did the same thing but to less collective memory.

It’s been a long time since I’ve told a story, a ghràidh. It’s been a long time since I was our clan’s storyteller, but I think I’ve got one more in me, and I think it’s Seonag’s, because I remember her, and I’m the last one who does.

The rest forgot, mostly because they wanted to.

This is the story of Seonag and the wolves, and the wolves and the waves.

She came to me, not so very long ago. She carried no bother on her about whether her people had forgotten her or not (they have), and she took no worries from her brief visit. But she did bring with her a warning.

“Thoir an aire,” she said. “Thoir an aire a-rithist.”

The simplest of warnings, really. Beware and beware again.

I knew it was she the moment I saw her, even though what she has become is beyond what she once was. But that is for you to discover as I did.

So let us begin. Come closer, for my voice weakens and soon will not be here at all.


Seonag is born on a day when the clouds race each other across the sky. They pile up, layer upon layer, like a stampede of red deer in the glen. Like the deer, the clouds that day kick mist into the air, only down and not up, and the mist falls lightly upon Beinn Ruigh Choinnich.

It is low tide. The sea has drawn its breath to wait for her.

Seonag is born on Là Buidhe Bealltainn as the women go out into the mist under that jumble of clouds to wash their faces in the morning dew.

It is not dew that covers Seonag at her birth. It is more alive than that.

And yet the midwife brings in a sprig of heather, flicking a tick from the sprig into the fire. The wind through the open door dries the sweat on Seonag’s mother’s brow. The midwife lets the winter-aged and browned bells dust their dew across the newborn’s skin, melding with the blood and the birth fluids, a shock of cool water after the heat of the womb and the birth canal and the smoldering peat fire, and Seonag opens her eyes wide.

Somewhere the cuthag begins to call its gù-gù, gù-gù, and the midwife hurriedly dips a finger in ewe’s milk and places it against the lips of the baby to break Seonag’s fast before the bird can finish delivering its news of ill luck.

This is a lot for Seonag’s first moments.

Upon seeing the place she has just been thrust into, Seonag looks around. And then she goes to sleep.

It is as if this world has already shown her all its faces, and she is just born and tired of it.

This doesn’t change as Seonag grows older. Where the clouds raced each other on the morning of her birth, whispers race each other through the villages, from Loch Baghasdail to Dalabrog to Cill Donnain as she grows into an infant, and then a child, and then an adolescent.

She is peculiar, they say.

They think she does not hear them, because she is out of earshot.

Seonag is beautiful the way the each-uisge is beautiful. She has no rosy cheeks or hardiness in her features, though she is hardy enough (she has to be, to survive on our island). But some say that that first touch of dew meant to bring beauty came at the wrong moment, or at the wrong hands, or at the wrong time. It is the early dawn of early summer when she is born, the sky lightening after only just having darkened—it was the in-between time, and Seonag becomes an in-between person. Like the water-horse, the people fear she will lure them off to drown.

Sometimes Seonag sings when she is cutting peat in the springtime. Her voice unnerves the crofters and the fisherfolk who lift their own at the cèilidhean. Seonag never sings at a cèilidh.

For all that, you will think that Seonag is not of this world, and I must assure you: she is.

She feels those whispers even when she does not hear them. She wants to sing at the cèilidhean. Seonag wants to build a house for herself and cut peats with her own hands and work the machair like her father and mother. As Seonag grows into an adult, she learns the waterways of Uibhist the way she learns the waterways of her own body, and she loves this land.

Remember that.


When Seonag has just passed her twenty-fifth year, her parents board a ship to Canada.

Seonag is meant to go with them. They can no longer afford their rent for their croft.

Instead, Seonag hides in the cleft of the glen, weeping softly as her tears drip into the bog under a sharp bright sky.

After she has dried her face with the folds of her dress, she comes to visit my father.

My father is Tormod Mòr, Tormod Mac Raghnaill ’ic Aois ’ic Dhòmhnaill, Tormod the Bard, Tormod Ruadh—sometimes I think my father collected a name for every year he lived.

I see Seonag coming that day. I am some few years younger than her, and I’ve only ever really seen her in glimpses. I tell my father she’s coming when I see her crest the rise in the road.

“Tha Seonag Bhàn a’ tighinn,” I say.

My father leaves the Gaelic where I put it and answers in English, because he is trying to teach me. “Don’t call her that.”

My father is a big man (hence that first name), but Seonag came to her far-ainm in a way I often forget. Bàn means fair, and while she is pale, her hair is black like a crow’s feathers and shines like them besides. It is a small cruel joke, one at the behest of Dòmhnall Geur (who is known for small cruel jokes throughout our island) and one I still don’t understand. I am a wee bit infatuated with Seonag. I also don’t quite understand that.

“I thought she was gone,” I say softly. English feels wrong in my mouth. It lives in a different part of it.

My father understands both my infatuation and my words even if I do not. He also looks out the window and understands Seonag.

He opens the door before she can raise her hand to knock. He speaks Gaelic to her, even if he only speaks English to me.

“Madainn mhath, a Sheonag,” he greets her.

“Madainn mhath, a Thormoid,” she says as if she did not just let her parents sail across an ocean without her. “Ciamar a tha sibh?”

“Tha i teth,” Father says. “Fosgail an uinneag, a Chaluim.”

This last is to me, and it is a dismissal. I open the window as he asked, letting in the cooler air. And then I set myself in the corner to mend a net and listen, pushing their Gaelic words into English so I can prove to my father that I’m doing two useful things instead of one, if he asks (he won’t).

“I had no expectation of seeing you here still,” Father says.

“I had the expectation of leaving,” says Seonag.

She sits on a small stool by the peat fire. Her eyes are the color of that mòine, of that peat, and she does not use them to look at me. She looks at the peat instead.

Seonag puts her head in her hands.

“The ship has gone to its sailing already,” my father tells her softly.

“That is why I am here.” Seonag looks up.

I watch as breath moves her stomach, filling it. She holds her left hand out to the fire. And then she begins to speak.

“This is my home, a Thormoid,” she says. “Even if you or anyone else think I do not belong. There is nowhere else for me.”

“There could perhaps have been a life for you in Canada.”

“My life is here.” She says it with the heat of the fire, that low burning smolder that will not be put out, and she glances toward the open window as if she is looking through it and down through the years that have not yet had the chance to touch her.

My fingers still on the net in my lap and I hear her words in Gaelic as she said them and not how I clumsily pasted English on them. ’S ann a-bhos a tha mi beò. It is here I am alive.

“There will be more ships,” Father tells her. “Full of more people. The rents are too high and the food too scarce. Death will find us in Uibhist. You may yet change your mind.”

She will not change her mind. Anger reaches tendrils across the floor from Seonag to me, and now she does meet my eyes, as if I summoned her. I feel something like indignation and fury meld together on my face, and to my absolute shock, Seonag smiles at me. Her teeth do not show. Her lips are straight and even, despite the expression.

I am seen and understood. I will never forget this moment.

“Very well,” my father says in English, looking back and forth between us. I think he knows that in this moment, my allegiance has shifted. “In that case, I think I should tell you of the wolves of Uibhist.”

“Ach chan eil mic-thìre ann an-seo, Athair!” I fall into Gaelic and hurriedly say in English, “But there aren’t wolves here!”

My father smiles in the way of parents who know more than a child who assumes, in childish folly, that they know more than their parents. That smile turns back in on itself much like that sentence.

He holds up his hand, watching Seonag. “Ah, but there are madaidhean-allaidh.”

Madadh-allaidh, faol, sitheach, faol-chù—they are all words for wolf. This is why I need my Gaelic. My father has used these words as though he means there is a difference and in English there would be none. What is it that he means?

Seonag is now watching my father, too.

My father is a bard, and I almost expect him to sing. But he does not sing. Instead, he goes to Seonag, kneels at her feet, and takes her hands in his.

“Listen,” he says.

And I know neither Seonag nor I intend to do anything else.


It was two hundred years ago that we chased the wolves from Scotland, two hundred years, they say, since the last wolf howl was heard, but sometimes, just sometimes, in the Western Isles beyond the Minch, you will hear a sad and stolid song. In Steòrnabhagh, perhaps, in Leòdhas. Or in Èirisgeigh when the moon is healthy and bright, or in Beinn na Faoghla, or the Uibhist to the north.

When Father begins to talk, I have never heard their voices. I have thought the tales of their howls were only their ghosts, or the songs of selkies twisted by the gales.

“When the hunters come, it is their job to move through the land and push their prey out in front of them,” Father says. “They will go from place to place, here and there, over and under, yonder and back. They will seek out their prey and while their minds will be heavy upon it, the object that they seek will not be of consequence.”

Father is telling two stories at once. This is a power of his that I envy.

The wind coming through the open window is cold, but I cannot get up from my seat to close it. The net in my lap holds me faster than a fish in the sea—or perhaps what holds me is Seonag’s face.

“Hunters who hunt only to kill all have that in common. They seek no nourishment from it. They have a wider goal, and a narrower. It is prey that understands their minds that can survive. The wolves understood. The wolves scented the hunters on the wind and they found their survival in the waters.” Father pauses. For a moment his cheeks are slack, the weathered lines curved instead of taut, his jaw hanging although his lips are closed. When he speaks again, his lips part audibly. “They will have your answers, a Sheonag.”

“The wolves.” Seonag looks at me over my father’s shoulder where he still kneels on the floor. “In the water.”

She sits up even straighter, body tight; I could likely use her spine to draw a straight line against the wall.

I know that tightness. Even in my glimpses of her throughout the years, I have seen it. I’ve seen it when Dòmhnall Geur calls her Seonag Bhàn, I have seen it when she turns away with her wares at the shops and knows she leaves whispers in her wake, and I have seen it when I caught sight of her in the glen, when she was mid-song and her voice died at the sight of me. I swallow.

“How am I to get answers from wolves when even their hunters have no words of kindness for me and I am neither wolf nor hunter myself?” She asks the question in a low tone, the lilt of her words in English almost sarcasm.

I do not know what I expect from my father in this moment, but whatever it is, it’s quite something else that I get.

He gets to his feet and points to the west, toward where the ship would be sailing off with Seonag had she gotten on it, toward the open sea.

“If you came to me for advice, this is what I can tell you,” my father says. “You will go to the west, into the water, and swim until you can’t see land. You will pass Heisgeir. Do not come close to it. You must keep swimming until you hear them. Only then will it be safe to seek land.”

“Is this a joke?” Seonag is completely shuttered now, and my fingers have given over any guise of mending this net.

What is my father doing?

Tormod Mòr, Tormod Mac Raghnaill ’ic Aois ’ic Dhòmhnaill, Tormod the Bard, Tormod Ruadh—for all my father’s many names, right now I do not know him. He shrugs once and goes to shut the window.

“You could have had a new life in Canada,” he says.

It is then I see that he is angry.

He is angry at Seonag, but I do not understand why. He loves this land. He drinks its waters and taught me how to recognize the eggs of the cuthag where they push them into the nests of other birds. When I look at him looking at Seonag, I wonder if he sees her as a cuthag, thrust into his nest when he expected only eggs of his own.

But this is Seonag’s story, not my father’s.

She gets up from her seat quietly. Seonag leaves without looking at me.

My father stares after her, his expression like the lochans before the stirring of the breeze. I get to my feet and run after Seonag.

“Wait,” I say, just as she reaches the edge of the heather.

Seonag looks at me once, then out to the west. The sun is trying to burn off the mist this morning, but I have a feeling Seonag sees all the way through it. I am nineteen to her twenty-five, and in this moment she has a lifetime on me. I follow her gaze to the sea where my father just told her to swim to her death.

“My granny’s house,” I say. The words tumble from my lips like drips of wax over the edge of a candle. “You could go there. It’s just on the edge of the machair.”

It comes upon me that I do not know what Seonag can do to live, alone, with few friends (am I her friend?) and no husband, and in that moment the urge to propose to her nearly overtakes me. It renders me so confused that I forget what I was saying about my granny’s house.

“Tapadh leat,” she says, her voice the equivalent of my father’s expression.

And then she leaves, and my gut twists itself into a semblance of the tangled net I threw on the floor to catch her. Just before she goes out of sight around a hillock, though, she looks once over her shoulder at me, a sad smile painted with one brush stroke on her lips.


I am filled with anger.

At the time, I thought this was my story. I was wrong. It was hers. It is still her story. I am merely a player in it, and what happened to me next is also what happened to her.

I spend an hour walking by the edge of Loch na Liana Mhòire before I return home. When I do, I hear voices through the open-again window.

One voice is my father’s, naturally.

The other is Dòmhnall Geur’s.

“It is to you to report her,” Dòmhnall Geur says. “She cannot be allowed to stay like a ghost, stealing from crops and honest working people.”

“You have decided this will be her future, then?” My father’s voice is that wry, flat calm I know too well.

“She has no land or husband or property; what is it you think she will resort to?”

“She may make another choice.”

I know my father is referring to the wolves, to these creatures that do not even exist. At this moment I think the only wolves in Uibhist a Deas are the two men in my house.

“And what is that, a Thormoid? Are you going to marry her? Or perhaps Calum will—I’ve seen his eyes on her. She will drain the life from your boy; it would be best for your sake to keep him from her.”

I have never known Dòmhnall Geur to have a kind word for anyone who was not currently licking his boots. His words are too close to my own thoughts this time, and I slink back farther from the window to avoid being seen.

“I told her of the wolves.”

Dòmhnall Geur does not scoff. He goes quiet. “And you expect her to believe this tale.”

Dòmhnall Geur believes this tale. I hear it in his words.

“’S dòcha,” says my father.

My father believes Seonag will believe it.

Which means my father truly does believe it.

I hear the crack of Dòmhnall Geur’s knuckles and can picture the expression on his face even though I cannot see it. His weak chin does nothing to reduce the harsh lines of his cheeks. His lips he holds at a constant half-sneer except when he has made a decision—usually one few will like—and then light reaches his eyes as if causing harm to others is the one thing that brings him joy.

“That’s me away,” he says. “Shall I congratulate you on your forthcoming nuptials?”

He laughs as his footsteps make their way toward the door. I am a coward. I steal around the edge of the house on light feet and wait until he has passed out of sight before I go in.

I cannot shake the feeling that Seonag is in danger.

I cannot tell if that danger is my father’s making or if it is Dòmhnall Geur’s.

My father stands by the sputtering fire, staring into it.

“Dùin an uinneag,” he says without looking up.

I close the window. It is now cold, outside and inside the house.

“He believes in those wolves,” I say. My anger feels like the sharp edges of shells on the beach. “I think he is one of the wolves.”

I say it in English even though for once Father made his words of Gaelic for me.

“Amadan,” my father says.

I don’t know if he’s calling me a fool or Dòmhnall Geur. Perhaps both.

“Do you remember what I said earlier, when you said there were no mic-thìre here?” Father adds a brick of peat to the fire. He is speaking English now. A puff of smoke, full of the scent of the earth, whispers through the house.

I do remember.

He said there are no mic-thìre, but there are madaidhean-allaidh.

The first means children of the land.

The second means wild dogs.


By the time I make it to my granny’s house after all of my work, it is clear Seonag has been there.

Granny’s house has sat empty this past half year, the windows shuttered, the door closed. Father and I come here once a week to check the thatch and make sure no beasties have made it their home. When I arrive, there is a small bundle on the table and a snubbed out candle. A basket of peats sits by the fire, untouched. The stove is clean—she hasn’t used it.

There is a note on Granny’s table. It has my name on it.

It’s written in charcoal on a scrap of rag, and all it says is a thank you.

I clutch it in my hand, where a stray tail of string tickles against my skin.

In my chest there is—something at war.

It feels like fingers pulling apart my heart. I do not know what my father meant. I do not know what Dòmhnall Geur means to do. I know only that I need to find her.

The sky is liath. The clouds have burned off, leaving only a lump of them smeared across the horizon to the west, over the sea.

It will be hours yet before the sun sets, but it is the light of a twilit sky.

I run due west from the house. It is perhaps a mile to the shore. My legs are strong, and I run as fast as I am able.

It is Monday and tomorrow the crofters will begin the plowing of the machair. They will not have begun such a large task today; it invites trouble to begin a large task on a Monday.

I try not to think that beginning a large task is exactly what I myself am doing.

When I reach the dunes, there is the sound of bleating sheep in the distance, an answering lowing of the cattle. The tide is out, pulled all the way out, like a breath drawn in and waiting to be screamed.

Footsteps lead from the dune to the shore.

With them, drifted to the northeast with the wind, are scattered clothes. The thick wool dress Seonag wore this morning. Her shoes, set in a perfect pair. Stockings, blown a bit away. Chemise flapping in the breeze.

The footsteps become imprints of feet and toes. There is another set near them, near me. I try not to think of those ones. They turn back halfway to the water.

The bare footprints lead directly into the sea.

It is said that the warmth returns to the water at Bealltainn.

I have known that to be a lie for most of my life, but when I throw off my shoes back toward the dunes and wade into the water in my stockinged feet and trousers, cold shoots up to my knees, my hips, jabbing into my heart and lungs. I press on. Father said to swim until she couldn’t see land.

I cast one glance behind me, at Uibhist a Deas, at my home, my island.

Then I turn out to sea and dive.


When Seonag reaches the water’s edge an hour earlier, she is naked and grìseach, shivering and rubbing her hands against the bumps on her skin. She is too aware of the irony of walking naked into the sea when she could have been sailing west on a ship, clothed and warm.

She doesn’t know why she does it anyway. Perhaps she believes my father wants her dead. Perhaps she believes Dòmhnall Geur does too. Perhaps she simply believes.

This seems as good a way as any. The shore is an in-between place, and Seonag is an in-between person.

She wades into the water.

Like me, she decides it is best to dive.

Seonag comes up gasping and sputtering, her entire body revolting against the cold. Her arms and legs spasm. Behind her, someone shouts.

It might just be a sheep or a goat.

She dives again, the waves pushing against her.

Seonag is a strong swimmer; the brother of her mother drowned when he was fifteen, and her mother insisted Seonag learn to swim.

It has been some time since she did, though, and fighting the waves is different than the smooth peat-colored waters of the lochs.

The tide is turning.

Seonag swims west.

Every stroke of her arms feels like a miracle from the very first of them. She is certain this will be her last act, an act of defiance, an act of doing precisely as she was told, just as she always did, convinced that if she were good enough, modhail enough, kind enough, the whispers would cease.

She feels this will be one more story for my father to tell at the cèilidhean.

(My father will never tell this story. He will forever carry on him far too much shame. No matter how he washes, he will not be able to scrub it away.)

So Seonag swims.

She looks back every so often, when she can spare any small bit of energy. The land disappears quickly only to appear again on the other side of a swell. It does not recede fast enough. Seonag stops looking back.

Her muscles are fire under the ice of her skin. Her lips choke on salt, and her eyes and nose burn with it. Her eyes and nose make their own in retaliation, but they cannot compete against the sea.

Once, Seonag sees dolphins, which in Gaelic are called leumadairean-mara, sea jumpers. She watches them and feels envy, because her body was not made for this and theirs were.

They circle her, out of curiosity or confusion. One comes close enough for her to touch; her elbow brushes against something warmer than the sea and rubbery, and if she were less exhausted she might recoil from it.

When her ear dips beneath the rolling waves for an instant, she hears them. They call to each other, with clicks and whistles that she feels she should understand.

They swim with her—which is to say, they swim ahead, then back, then ahead again, winding around her as she aims herself into the now-blinding light of a sun that has peeked from behind the clouds—and Seonag at once is glad of the company and resents it.

She has always wanted to get close to these creatures, but this is not how she thought it would happen.

Eventually, they swim ahead of her and vanish. She does not see them again.

Time passes.

We are aware of the worlds beyond our own. We know there are times when you can touch them, at dawn and dusk, at the shores and on days that mark the turning of the year. But it is impossible to know when we have gone from touching those worlds to finding ourselves in one.

Seonag certainly never thought she would swim herself over a blurred boundary, into something deep and cold and dark but full of life and salt and energy nonetheless.

When Seonag pauses in her swimming to rest aching shoulders, she is surprised to see Heisgeir breaking the waves ahead of her. The sight of land in front and not behind shocks her into flailing beneath the waves for a moment, coughing and struggling to stay afloat.

Seeing Heisgeir is impossible. It is west of Beinn na Faoghla. She has drifted to the north as she swam. She has left Uibhist miles behind.

Seonag remembers my father’s words. She cries out then, in sorrow or in frustration, and she moves herself to begin swimming due west again, keeping Heisgeir on her right.

She will not go near its shores.

When it fades from view, Seonag realizes she is crying. She tastes her own tears over the brine of the sea. She is sure she will soon drown.

She begins to pray, not to a god who forsook her all of these years, but to the each-uisge, to the selkies, to the storm kelpies, to anything that would listen. She longs for the dolphins to return, belatedly thankful for their company and kindness.

She swims until the late evening sun finally touches the horizon.

She swims until she can see nothing except the red, red clouds touched by the sunset, the sea turned from gorm to dearg itself, waves like flames.

Seonag is not sure if she is still cold, or if the sun has turned the sea to hellfire.

And then she hears it.

A voice on the wind, raised high and so bright for a moment Seonag is blinded by the sound of it.

She fumbles in her swimming.

It comes again, unmistakable. A howl.

Seonag has never heard wolf-song. Seonag has never seen a wolf.

Here, miles from shore and swimming through water turned red, she hears a wolf howl for the first time.

She has nothing better to do. She swims toward it.

At that moment, Seonag is nearly overcome. She expected to die, and oh, she does realize she still might. She does not know how she has swum so far, alone and naked, into the frigid waters of the North Atlantic.

It does not occur to her that she has already passed into a world she was not born into.

On the horizon, Hiort appears.


Seonag’s experience is not my experience.

When I begin to swim, my clothes stick to my body, trying to strangle the life from me before the ocean can. I don’t know what it is I expect to happen. Fatigue sets in before I’m a hundred yards from the shore.

I hear a muffled shout, and before I can find where it’s coming from, a hand grasps me by the back of my shirt and hauls me over the edge of a fishing boat.

The hand is Dòmhnall Geur’s.

There are two other men in the boat, Seòras Eachainn and Dòmhnall Dubh, whose black hair is now far closer to white.

It’s a small fishing vessel with a sail. The boat is called Anna, after Seòras’s mum. I’ve been aboard it before.

“What’re you doing, lad?” Seòras grunts it at me while Dòmhnall Geur dries his hand on his trousers.

“S-s-eonag,” I stutter, pointing westward.

Seòras exchanges a glance with the two Dòmhnalls.

“Saw her going into the water,” Dòmhnall Geur says, his voice surprisingly thoughtful.

“If the weather holds, we’ll go,” Seòras says. There’s caution between his words, and I don’t think it’s about the weather. “We turn back if—”

“I’ve been sailing at least as long as you, Seòras,” says Dòmhnall Geur.

“Sail where?” My teeth are chattering.

Seòras throws a plaid over my shoulders. It’s wool and rough and smells of fish and brine.

No one answers.


Seonag pulls herself onto the sand with arms that quiver like the leftover gelatin in a mutton stew.

She has no reference for the kind of tired she is in this moment. Her fingers are shaking from exhaustion—she stopped shivering from cold long ago—and when she looks up, moving only her eyes from where her cheek is glued to the sand, feet still getting tickled by the waves lapping the shore, she doesn’t know where she is.

Seonag aimed herself at Hiort. She thought it was Hiort. But Hiort has been inhabited for two thousand years, and this place looks like it has never seen the footprint of a human being.

But there are paw prints in the sand.

Seonag drags herself farther onto the beach, close enough to look at one of the paw prints.

It is the size of her hand, almost. If she curls her fingers in—which she does—she can lay her hand in the depression made by the paw pads and see the indentation of a wet tuft of fur, the pricks of claws.

She has never seen such a track.

The set of prints leads away from the water.

There is more than one set of prints.

If she expects to hear more howling, she is disappointed. There is only the sound of the wind and the waves and her own labored breathing. Seonag knows she will need to find shelter soon. She will likely need to build it.

She has swum through the short summer night, and already to the east, the sky lightens.

She is covered in sand, only on her right side. There are no clouds. She is alone.

Seonag is used to being alone, even when she is surrounded by people.

She pushes herself to her feet.

The sound of waves is in her blood, her ears, all around her. Indeed even the land seems to be shaped like waves; from the small beach where she landed, cliffs rise up like arms embracing and sheltering the center of the small island, far too small to be Hiort in truth.

There are trees over the dunes. Trees. There are almost no trees in Uibhist—they don’t grow because the wind likes to be able to run across the machair and moors unhindered.


The word cuts through Seonag. She could not have told you what language it came in, only that she feels it the way she is feeling the waves.

She looks around.

There, at the top of the dune, is movement.

Something beckons her.

Seonag’s heart gives a jolt, a spark. She follows on unsteady feet.

There is a glimpse of driftwood, moving. Of seaweed and kelp streaming out behind. Seonag tastes fear, but it tastes like the salt of the sea and she has steeped in it all night. She ignores it now.

A figure passes between an oak and a hazel.

Seonag follows.

More movement shows through the trees and underbrush. A tail beyond a bush of holly, upright ears passing just behind a rowan.

Seonag does not know much about trees, but she remembers learning that different kinds don’t grow all in the same place.

The wind falls quiet here, in the embrace of the cliff arms. The slope up is steep; the island looks like a god reached down with a hand and scooped out the middle of a mountain. Seonag doesn’t know what a volcano is. This one has been hibernating for a long time, and will not wake in any lifetime soon.

She walks for an hour into this bowl of trees, past elm and birch, alder and yew. They are the trees that make up the alphabet in Gaelic. She wonders what stories they will tell here.

The figure is among the trees, in a circle of them, on spring grass both thick and green like a bed.

Seonag longs to lie down on that grass and sleep in the circle of these trees. She might never wake if she does.

Someone is here.

Seonag is confused by this. Of course someone is here; she is standing right in front of the figure, which she cannot bring herself to look at. She hears rather than sees the rustle of seaweed. Beyond that, a low, rumbling growl that seems to come from all around her.

And beyond that, a crackle of underbrush from the direction she’s just come from.


My feet are heavier and heavier as I help drag the boat onto the shore of the island. Seòras and Dòmhnall Dubh help me secure it, with Seòras turning toward the cleft in the cliffs where Dòmhnall Geur vanished and muttering “Craobhan” over and over, so shocked is he by the presence of trees.

My feet are heavier, or perhaps it is my heart. Urgency creeps up my spine, using each ridge of my vertebrae for a ladder. There is a need to hurry.

Almost before I have tied off the ropes, I start to pull away toward where Dòmhnall Geur left.

Seòras catches my hand. “Duilich, a ghille.”

I don’t understand why he is apologizing to me until Dòmhnall Dubh catches the other.

Before I can react, Seòras stuffs a rag into my mouth. It tastes of fish and sweat, and I almost vomit. They wrench my hands behind me and tie me to the boat.

In the distance, a wolf howls.


Seonag is not surprised to see Dòmhnall Geur striding into the clearing with no hint of wariness about him. She is not surprised by the gun in his hand, an old hunting rifle that belonged to her own father, who by now is far from the sight of land and crossing the Atlantic forever.

“You must have hidden your boat well,” he says.

“I swam,” she says.

He laughs.

Seonag is still naked except for the crust of sand on her right side, which itches. His laugh has always been a spiteful laugh, one that made her skin into bumps as if ready for anything that might follow.

“I’ve been wanting an excuse to come here for a very long time,” he says. “When I rid the islands of wolves once and for all, everyone will know my name.”

He does not seem to see the figure behind Seonag, or perhaps only Seonag can see them.

“And you will be put on the next ship to Canada where you cannot pollute my island any longer.”

“Your island?” Seonag hears all of his words distantly, like the waves barely audible over the whispers of the leaves around her. But that bit stays. “You are born to a place and believe you own it more than others who are the same as you.”

“You are not the same as anyone.” His voice is low and thick with disgust.

“Why do you hate me?” Seonag truly wants to know.

Dòmhnall takes a breath to answer, but before he can speak, a wolf howls behind him.

He raises his rifle and fires.


I hear the shot ring out through the air. Seòras and Dòmhnall Dubh are out of sight already, following after with their own rifles.

There is another shot, then another. Closer—without reloading time. The others are shooting at whatever Dòmhnall Geur shot at. The sound of a distant snarl.

I jerk at my bonds. The rope is rough and made of heather. It digs into my skin like a flail. My father and I make rope like this together. We may have made this one.

I let out a scream of frustration and rage.

The sound of breathing greets me when my scream dies away.

I turn.

A wolf stands at my right, soaking wet and staring at me with liquid amber eyes. In its jaw is a cod, still twitching.

The wolf looks at me. I forget to breathe.

They are real. The story my father told was real. It is large, far larger than the working dogs we use to herd the sheep on our island. It comes up to my waist.

I can smell its wet fur, full of brine and warmth and the manky smell it does have in common with the working dogs. I can smell its breath, hot and fishy.

It melds with the taste of the cloth in my mouth.

The wolf drops the fish, and fear spikes from my bound wrists up the nerves of my arms. My nose is half-stuffed, and my breath enters in gaps around the gag as much as through my nostrils.

The wolf stalks closer, close enough for its breath to glance off my skin and my still-damp clothes.

Its muzzle is cold and wet, its nose colder and wetter.

When it ducks behind me, between me and the boat, I almost cry out. Warm breath hits my wrists, then the wolf’s powerful jaws clamp down on the rope, pulling and gnawing. My skin warms with the animal’s saliva.

Another shot rings out. The wolf flinches against me, but does not stop.

When my hands pop free, I pull the spit-covered rag from my mouth.

“Taing,” I say, trying to thank the animal, but it has already taken its fish and gone.

I go after it.


Around Seonag, a dance of chaos swirls.

Wolves partner with hunters, at least two fur-covered bodies to each of the three men. In its center, Seonag stands like a maypole, her body warm from something she cannot place. The figure recedes behind her, waiting, not intervening.

Seonag feels something well within her. She is certain of it, even though it comes to her without words, without voice. It is like the waves that lifted and dipped beneath her as she swam. It is like the impulse that made her turn and run from the ship the day before, an age before, and hide in the glen.

She has to make a choice.

She feels it again, then, as she decides. Her feet hold to the grasses she so longingly admired a short time ago. Toes dig into their young growth.

Seonag stands taller. Perhaps she is taller.

It comes upon her like the tide, creeping with every breath closer. The smell of leaves around her. The scent of seaweed and kelp. The grit of sand against her skin…and something else.

Her skin is flesh and not.

Her body turns with the swirl of air and breath and grunts around her.

She says one word: stad.

Everyone in the clearing does. They stop. They turn to stare at her, men and wolves alike. There is blood on the wind, human and canine.

“I told you, I told you,” Dòmhnall Geur says, stumbling backward. “She is not of our world, she is not—”

“I was,” Seonag says softly. She looks at Seòras, at Dòmhnall Dubh. “Go.”

Seòras looks over his shoulder once. He sees a glimpse of the figure beyond Seonag herself. Whatever he sees, it is enough. His face goes so white that it is he who will be named Bàn when he returns, though he will never tell anyone why.

This is the scene I come upon when I enter the clearing.

Seòras is half-dragging Dòmhnall Dubh with him. He does not look even to the side to see me. They stumble away.

What I see is this:

Seonag, and not Seonag.

Her arms are no longer pale flesh but the soft, sun-bleached grain of driftwood that curves with her muscles, her joints, her neck. She is naked, but her nakedness is no longer human nakedness. Where her black hair reached past her hips is now seaweed, lustrous and shining in the first rays of the early morning sun. Her eyes are obsidian, their whites abalone.

Behind her I see a figure like her, smiling with seal bone teeth. This figure leans against a yew.

Seonag walks to Dòmhnall Geur, who stands rooted to his place on the earth.

When I step closer, flanked by two wolves I hardly notice, I see that rooted is not a metaphor.

Where Dòmhnall Geur’s feet were, now his toes have entered the earth, punching through the leather of his boots and digging deeper by the second.

He writhes where he stands, but he does not scream. I think he cannot scream.

When Seonag touches his face with gentle nails of shining scales, he flinches away.

“You will stay here, like the others before you,” she says absently. I cannot tell which language she is speaking, if any.

I look around me at the trees, so many different kinds.

“Dair,” Seonag says. “Darach.”

Dair is the name for D, the first letter of his name. He will become an oak.

Already his hair has sprung free of its tie.

Seonag has an acorn in her hand. She places it in Dòmhnall Geur’s open mouth.

It sprouts before his lips close, a sprig of green reaching out, another sprouting from his nose.

A wolf howls, so close to my side that I jump, a stick cracking under my feet.

“A Chaluim,” Seonag says, looking over her shoulder at me. Then, sadly, “You shouldn’t have come.”

Like the others, I cannot seem to speak.

The figure behind Seonag moves forward. Slowly. I think I hear the brittle crack of wood.

“Who are you?” Seonag asks.

The figure is like her, like this new Seonag, and not. Where Seonag’s seaweed hair hangs straight and glossy in ripples, the figure’s is wild, covered in barnacles and fragments of shell and motes of sand embedded in the leaves that sparkle in the sun.

Perhaps this figure is simply older.

“A guardian,” says the figure. “I was.”

I understand before Seonag seems to.

“Was,” she says. “Of what?”

The figure gestures around her. “Of whom do you think?”

Those who are hunted.

For the first time, I see a dead wolf. The figure gazes sadly upon it. There is a knife in its side, and a cod by its mouth.

I cannot make words, but a strangled cry escapes me.

The figure seems to understand.

Seonag goes to the wolf and pulls the knife from its chest. She walks to the new oak tree, now reaching up higher, higher. Flutters of fabric wave in the wind. Seonag tears away what was Dòmhnall Geur’s shirt.

She wraps the knife in it, blood and all. She walks to me. “Carry this home.”

Before I can try and ask her how, she pushes it into my chest. In through my shirt and in through my skin and ribs. I feel it, harsh and heavy and sharp inside me, against my heart that beats so quickly.

Seonag looks at me once more. If she is sad, I cannot tell.

Her sudden smile is fierce.

I blink once, and she is gone. I hear the beat of wings above my head, in the branches of a tree.

The figure remains.

My voice works again. “Who are you?”

The words sound strange in the air, like they are not words at all.

“Old,” says the figure. “Tired.”

I look upward. My hand massages my chest. I can feel the knife there. It feels like panic just out of reach.

“Tell your father thank you,” says the figure.

When I jerk my gaze back down, they are also gone.


You will wonder, I suppose, how I made it home. Seòras and Dòmhnall Dubh returned, days after I did, silent for days after that, jumping every time they saw me.

The wolves swam me out past the breakwater, the pack leading me around the riptides and into the open sea with yips and broken notes. Some peeled off to hunt on a small chain of rocky islets; others waited until we reached a place I could never find again no matter how I tried. Hiort appeared in the distance.

Oh, how the fear gripped me then. It coated me more heavily than the water, ready to pull me under with its weight.

I swam, though. I swam through the length of the day. They say the journey back is shorter than the journey there. I think in this they are wrong.

When I arrived on the shore of Uibhist a Deas, I collapsed and lay for hours before one of the crofters found me and carted me home, naked and shivering, on the back of his horse.

I did not hear what he said to my father.

Father built up the fire and closed all the shutters and when the heat from the peat warmed me enough, I rose to my hands and knees and began to heave, spots swimming in front of my eyes and a terrible ripping feeling in my chest and when tears stung at me, I heard a thud, and to the floor fell the knife that had killed the wolf.

My father picked up the small parcel and opened it. The blood appeared as fresh as if he had stabbed me with it himself.

“Dòmhnall Geur killed the wolf that freed me,” I told my father then, unthinking of how absurd my words would sound in any language. “He became an oak.”

“A life for a life,” was all my father said in return.

I think of the many trees on that island sometimes.

I think that is why I am telling you this now.

When Seonag came to me not so long ago, she came with a warning. I do not think it was meant for me.

Perhaps it is meant for you.

There are no mic-thìre left in Scotland, but there are madaidhean-allaidh. They are wild and they are free, and they found that freedom in the sea.

Their hunters are the ones to fear.

Sometimes, when the winds are still and the tide pulls back far, far from the shore, I hear their song echo across the waves. I am not the only one who hears them; perhaps Seonag as their guardian strengthened them after the strength of their old guardian flagged.

On those nights, it is whispered that Seòras and Dòmhnall Dubh hide with their pillows over their ears, but no matter how they try, they cannot escape the sound. They forgot her, but they still remember that sound.

I am old now, and Seòras and Dòmhnall Dubh are older still. But you are young, and the young have the chance not to repeat the mistakes of their elders.

If you look around you, you might see someone like Seonag, who wants so desperately to belong. Let her sing at the cèilidhean. Invite her to share your meals.

You know who I mean and who I do not. Those someones like Seonag are not like the hunters who prowl for something they decided was their own, to take, to steal, to kill.

Someday perhaps someone else will take that swim to relieve Seonag of her duties. I have thought sometimes that it might be me, but I am still a coward.

Sometimes, on those nights, I think of her.

Sometimes, on those nights, I walk the glen.

Sometimes, on those nights, I hear her singing again.

There are hunters among the sheep of the machair, a ghràidh.

But there are wolves, too.

“Seonag and the Seawolves” copyright © 2019 by M. Evan Matyas
Illustration copyright © 2019 by Rovina Cai


Security updates for Wednesday

Security updates have been issued by Fedora (ghostscript, pango, and squirrelmail), openSUSE (libcryptopp, squid, tcpdump, and wireshark), SUSE (flatpak), and Ubuntu (giflib and NLTK).


Four short links: 21 August 2019 [All - O'Reilly Media]

Competition vs. Convenience, Super-Contributors and Power Users, Forecasting Time Series, and Appreciating Non-Scalability

  1. Less than Half of Google Searches Now Result in a Click (Sparktoro) -- We can see a consistent pattern: organic shrinks while zero-click searches and paid CTR rise. But the devil’s in the details, and, in this case, mostly the mobile details, where Google’s gotten more aggressive with how ads and instant answer-type features appear. Everyone has to beware of the self-serving, "hey, we're doing people a favor by taking (some action that results in greater market domination for us)" because there's a time when the fact that you have meaningful competition is better for the user than a marginal increase in value add from keeping them in your property longer. (via Slashdot)
  2. Super-Contributors and Power Laws (MySociety) -- Overall, two-thirds of users made only one report—but the reports made by this large set of users only makes up 20% of the total number of reports. This means that different questions can lead you to very different conclusions about the service. If you’re interested in the people who are using FixMyStreet, that two-thirds is where most of the action is. If you’re interested in the outcomes of the service, this is mostly due to a much smaller group of people. This dynamic applies pretty much everywhere and is worth understanding.
  3. Facebook Prophet -- a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles outliers well. Written in Python and R.
  4. On Nonscalability: The Living World Is Not Amenable to Precision-Nested Scales -- to scale well is to develop the quality called scalability, that is, the ability to expand—and expand, and expand—without rethinking basic elements. [...] [B]y its design, scalability allows us to see only uniform blocks, ready for further expansion. This essay recalls attention to the wild diversity of life on earth through the argument that it is time for a theory of nonscalability. (via Robin Sloan)
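Prophet's additive model (item 3) can be caricatured in a few lines of plain Python: a trend term plus seasonal and holiday terms, summed. This toy sketch only illustrates the shape of the model — every coefficient here is invented for the example, and Prophet's actual fitting is far more sophisticated.

```python
import math

def toy_additive_forecast(day: int) -> float:
    """y(t) = g(t) + s(t) + h(t): the additive shape Prophet fits.
    All coefficients are made up for illustration."""
    trend = 100 + 0.5 * day                        # g(t): linear growth
    weekly = 10 * math.sin(2 * math.pi * day / 7)  # s(t): weekly seasonality
    holiday = 25 if day % 365 == 358 else 0        # h(t): a one-day holiday bump
    return trend + weekly + holiday

print(toy_additive_forecast(0))    # trend only at the start
print(toy_additive_forecast(358))  # trend + seasonality + holiday effect
```

Prophet estimates each of these components from historical data rather than taking them as given, which is why it wants several seasons of history and tolerates missing days.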

Continue reading Four short links: 21 August 2019.

Data orchestration for AI, big data, and cloud [All - O'Reilly Media]

Haoyuan Li offers an overview of a data orchestration layer that provides a unified data access and caching layer for single cloud, hybrid, and multicloud deployments.

Continue reading Data orchestration for AI, big data, and cloud.


Forced Password Reset? Check Your Assumptions [Krebs on Security]

Almost weekly now I hear from an indignant reader who suspects a data breach at a Web site they frequent that has just asked the reader to reset their password. Further investigation almost invariably reveals that the password reset demand was not the result of a breach but rather the site’s efforts to identify customers who are reusing passwords from other sites that have already been hacked.

But ironically, many companies taking these proactive steps soon discover that their explanation as to why they’re doing it can get misinterpreted as more evidence of lax security. This post attempts to unravel what’s going on here.

Over the weekend, a follower on Twitter included me in a tweet sent to California-based job search site Glassdoor, which had just sent him the following notice:

The Twitter follower expressed concern about this message, because it suggested to him that in order for Glassdoor to have done what it described, the company would have had to be storing its users’ passwords in plain text. I replied that this was in fact not an indication of storing passwords in plain text, and that many companies are now testing their users’ credentials against lists of hacked credentials that have been leaked and made available online.

The reality is Facebook, Netflix and a number of big-name companies are regularly combing through huge data leak troves for credentials that match those of their customers, and then forcing a password reset for those users. Some are even checking for password re-use on all new account signups.

The idea here is to stymie a massively pervasive problem facing all companies that do business online today: Namely, “credential-stuffing attacks,” in which attackers take millions or even billions of email addresses and corresponding cracked passwords from compromised databases and see how many of them work at other online properties.

So how does the defense against this daily deluge of credential stuffing work? A company employing this strategy will first extract from these leaked credential lists any email addresses that correspond to their current user base.

From there, the corresponding cracked (plain text) passwords are fed into the same process that the company relies upon when users log in: That is, the company feeds those plain text passwords through its own password “hashing” or scrambling routine.

Password hashing is designed to be a one-way function which scrambles a plain text password so that it produces a long string of numbers and letters. Not all hashing methods are created equal, and some of the most commonly used methods — MD5 and SHA-1, for example — can be far less secure than others, depending on how they’re implemented (more on that in a moment). Whatever the hashing method used, it’s the hashed output that gets stored, not the password itself.

Back to the process: If a user’s plain text password from a hacked database matches the output of what a company would expect to see after running it through their own internal hashing process, that user is then prompted to change their password to something truly unique.

Now, password hashing methods can be made more secure by amending the password with what’s known as a “salt” — or random data added to the input of a hash function to guarantee a unique output. And many readers of the Twitter thread on Glassdoor’s approach reasoned that the company couldn’t have been doing what it described without also forgoing this additional layer of security.

My tweeted explanatory reply as to why Glassdoor was doing this was (in hindsight) incomplete and in any case not as clear as it should have been. Fortunately, Glassdoor’s chief information officer Anthony Moisant chimed in to the Twitter thread to explain that the salt is in fact added as part of the password testing procedure.

“In our [user] database, we’ve got three columns — username, salt value and scrypt hash,” Moisant explained in an interview with KrebsOnSecurity. “We apply the salt that’s stored in the database and the hash [function] to the plain text password, and that resulting value is then checked against the hash in the database we store. For whatever reason, some people have gotten it into their heads that there’s no possible way to do these checks if you salt, but that’s not true.”
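The three-column check Moisant describes can be sketched in a few lines of standard-library Python. This is a minimal illustration of the general technique, not Glassdoor's code: the scrypt parameters and helper names are invented for the example.

```python
import hashlib
import os

# Illustrative scrypt cost parameters -- real services tune these.
SCRYPT_PARAMS = dict(n=2**14, r=8, p=1)

def hash_password(plaintext: str, salt: bytes) -> bytes:
    """One-way function: plaintext + per-user salt -> fixed-length digest."""
    return hashlib.scrypt(plaintext.encode(), salt=salt, **SCRYPT_PARAMS)

def enroll(plaintext: str):
    """Simulate account creation: store (salt, hash), never the password."""
    salt = os.urandom(16)
    return salt, hash_password(plaintext, salt)

def matches_leaked(leaked_plaintext: str, salt: bytes, stored_hash: bytes) -> bool:
    """The proactive check: run the leaked plain text password through the
    same salt-and-hash routine used at login, then compare digests."""
    return hash_password(leaked_plaintext, salt) == stored_hash

# A user who reused "hunter2" elsewhere...
salt, stored = enroll("hunter2")
# ...later shows up in a leaked credential list:
print(matches_leaked("hunter2", salt, stored))        # True -> force a reset
print(matches_leaked("s0mething-else", salt, stored)) # False -> no action
```

Note that nothing here requires storing plain text: the salt is already in the database, so the leaked password can be hashed and compared exactly as at login.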


You — the user — can’t be expected to know or control what password hashing methods a given site uses, if indeed they use them at all. But you can control the quality of the passwords you pick.

I can’t stress this enough: Do not re-use passwords. And don’t recycle them either. Recycling involves rather lame attempts to make a reused password unique by simply adding a digit or changing the capitalization of certain characters. Crooks who specialize in password attacks are wise to this approach as well.

If you have trouble remembering complex passwords (and this describes most people), consider relying instead on password length, which is a far more important determiner of whether a given password can be cracked by available tools in any timeframe that might be reasonably useful to an attacker.

In that vein, it’s safer and wiser to focus on picking passphrases instead of passwords. Passphrases are collections of multiple (ideally unrelated) words mushed together. Passphrases are not only generally more secure, they also have the added benefit of being easier to remember.
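The length-over-complexity point can be made concrete with a back-of-the-envelope entropy estimate. This is a rough sketch: the 7,776-word list size is the common diceware convention, assumed here for illustration.

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for `length` symbols drawn uniformly at random."""
    return length * math.log2(alphabet_size)

# 8 random characters drawn from the ~95 printable ASCII characters
complex_password = entropy_bits(95, 8)   # ~52.6 bits

# 5 random words drawn from a 7,776-word (diceware-style) list
passphrase = entropy_bits(7776, 5)       # ~64.6 bits

print(f"8-char complex password: {complex_password:.1f} bits")
print(f"5-word passphrase:       {passphrase:.1f} bits")
```

The passphrase wins on raw guessing resistance while being far easier to remember — though, as the next paragraph notes, this only matters when the password is actually random rather than reused.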

According to a recent blog entry by Microsoft group program manager Alex Weinert, none of the above advice about password complexity amounts to a hill of beans from the attacker’s standpoint.

Weinert’s post makes a compelling argument that as long as we’re stuck with passwords, taking full advantage of the most robust form of multi-factor authentication (MFA) offered by a site you frequent is the best way to deter attackers. There’s a handy list of your options here, broken down by industry.

“Your password doesn’t matter, but MFA does,” Weinert wrote. “Based on our studies, your account is more than 99.9% less likely to be compromised if you use MFA.”

Glassdoor’s Moisant said the company doesn’t currently offer MFA for its users, but that it is planning to roll that out later this year to both consumer and business users.

Password managers also can be useful for those who feel encumbered by having to come up with passphrases or complex passwords. If you’re uncomfortable with entrusting a third-party service or application to handle this process for you, there’s absolutely nothing wrong with writing down your passwords, provided a) you do not store them in a file on your computer or taped to your laptop or screen or whatever, and b) that your password notebook is stored somewhere relatively secure, i.e. not in your purse or car, but something like a locked drawer or safe.

Although many readers will no doubt take me to task on that last bit of advice, as in all things security related it’s important not to let the perfect become the enemy of the good. Many people (think moms/dads/grandparents) can’t be bothered to use password managers — even when you go through the trouble of setting them up on their behalf. Instead, without an easier, non-technical method they will simply revert to reusing or recycling passwords.


Google Finds 20-Year-Old Microsoft Windows Vulnerability [Schneier on Security]

There's no indication that this vulnerability was ever used in the wild, but the code it was discovered in -- Microsoft's Text Services Framework -- has been around since Windows XP.


Ask a busy person [Seth's Blog]

You might know one.

The busy person has a bias for action, the ability to ship, and a willingness to contribute more than is required. The busy person is wrong more than most people (if you get up to bat more often, you’re going to have more hits and more strikeouts, right?). Those errors are dwarfed by the impact they create.

Being a busy person is a choice.

It might not work for you, but you could try it out for a while.

We need more busy people.


Coded Smorgasbord: Unstrung Manager [The Daily WTF]

Deon was on a contract-to-hire job. In the beginning, it sounded like a good job. Then he looked at the C# codebase. It didn’t take him long to decide that this wasn’t going to be a job he’d work at...


Vader Streams Was Shut Down By ACE, Must Pay $10m Damages [TorrentFreak]

There are several large IPTV providers with brands that are well known across the unlicensed industry. One of those was Vader, otherwise known as Vader Streams, or just Vaders.

Notable for its Darth Vader logo, the platform served large numbers of direct customers and subscription re-sellers with at least 1,300 TV channels and a library of VOD content running close to 3,000 titles.

This May, however, something went seriously wrong.

“We have no choice but to close down Vader. We can’t reveal much publically, but by now some of you should know through the other means what happened,” a notice posted to the site’s Telegram channel read.

“We tried everything in our power to avoid this, to avoid any outage, but enough people worked against us.”

With that, Vader went down, never to appear again. As highlighted in our subsequent review of the Vader closure, we had strong suspicions that anti-piracy giant the Alliance for Creativity and Entertainment (ACE) had become involved.

We’d obtained an unverified copy of what looked like a cease-and-desist notice, apparently sent by ACE members to Vader, over its VOD content. Unable to confirm its authenticity, we made a decision not to publish it.

However, it’s now 100% clear that ACE, the global anti-piracy company made up of dozens of powerful content companies, did indeed shutter Vader. And it’s now evident why they refused to comment.

ACE proceeded against Vader through a secret court proceeding in Canada through which it obtained a so-called “Anton Piller” order, a civil search warrant that grants plaintiffs no-notice permission to enter a defendant’s premises in order to secure and copy evidence to support their case, before it can be destroyed or tampered with. A similar process was used against TVAddons founder Adam Lackman in 2017.

While the case against Lackman is moving forward at glacial speed more than two years later, the Vader matter now appears to be over. After obtaining a permanent injunction from the Federal Court in Canada, ACE has shuttered the service and landed Vader with a bill for $10 million in damages.

According to ACE, Vader must also “cede administrative control” over its entire “piracy infrastructure”, permanently cease-and-desist from doing anything in future connected to offering, selling, or promoting unlicensed streams, and/or developing, updating, hosting or promoting any Kodi add-ons connected to pirated content.

“On behalf of all ACE members, I applaud the Court’s decision to permanently put an end to piracy operations conducted by Vader Streams,” Charles Rivkin, Chairman and CEO of the Motion Picture Association of America, said in a statement.

“Actions like these can help reduce piracy and promote a dynamic, legal marketplace for creative content that provides audiences with more choices than ever before, while supporting millions of jobs in the film and television industry.”

Robert Malcolmson, Senior Vice President Regulatory Affairs and Government Relations, Bell Canada – a prominent ACE member – described the action by the Federal Court as “strong and appropriate”, adding that “illegal streaming services like Vader Streams cause serious harm to creators and distributors, the entire broadcasting and cultural sectors and ultimately Canadian consumers.”

While ACE says that Vader must “cede administrative control” over its entire “piracy infrastructure”, it remains unclear what that means in real terms.

At the time of the shutdown, Vader said that it was “going to make sure, no Email, IP, account + reseller name goes to the wrong hands. Everything will be wiped clean and that’s all.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Berlin Lispers Meetup, Monday, 26th August 2019 [Planet Lisp]

We meet again on Monday 8pm, 26th August. Our host this time is James Anderson.

Berlin Lispers is about all flavors of Lisp including Clojure, Scheme and Common Lisp.

Willem Broekema will talk about his work-in-progress on "Developments in the AllegroGraph query engine".

We meet in the Taut-Haus at Engeldamm 70 in Berlin-Mitte, the bell is "James Anderson". It is located in 10min walking distance from U Moritzplatz or U Kottbusser Tor or Ostbahnhof. In case of questions call Christian +49 1578 70 51 61 4.


It’s The Little Things [Ctrl+Alt+Del Comic]

No Man’s Sky has come a long way. The last patch, Beyond, was a veritable laundry list of further tweaks and improvements to the experience. But if you skimmed reactions to the update, you would be forgiven for thinking that all they did was turn some grass pink and let you sit in chairs.

Perhaps the NMS community just really loves fuchsia flora. That’s possible.

A more skeptical mind might ponder if perhaps this is merely indicative of the sorry state the game launched in; that its players had literally nowhere to go but up, to the point where sitting down was a revelation to be celebrated in earnest.

The post It’s The Little Things appeared first on Ctrl+Alt+Del Comic.

Comic: Asymmetrism [Penny Arcade]

New Comic: Asymmetrism


Girl Genius for Wednesday, August 21, 2019 [Girl Genius]

The Girl Genius comic for Wednesday, August 21, 2019 has been posted.



Russ Allbery: Review: Trail of Lightning [Planet Debian]

Review: Trail of Lightning, by Rebecca Roanhorse

Series: The Sixth World #1
Publisher: Saga
Copyright: 2018
ISBN: 1-5344-1351-0
Format: Kindle
Pages: 286

Maggie Hoskie is a monster hunter. Trained and then inexplicably abandoned by Neizghání, an immortal monster-slayer of her people, the Diné (Navajo), she's convinced that she's half-monster herself. Given that she's the sort of monster hunter who also kills victims that she thinks may be turned into monsters themselves, she may have a point. Apart from contracts to kill things, she stays away from nearly everyone except Tah, a medicine man and nearly her only friend.

The monster that she kills at the start of the book is a sign of a larger problem. Tah says that it was created by someone else using witchcraft. Maggie isn't thrilled at the idea of going after the creator alone, given that witchcraft is what Neizghání rescued her from in an event that takes Maggie most of the book to be willing to describe. Tah's solution is a partner: Tah's grandson Kai, a handsome man with a gift for persuasion who has never hunted a monster before.

If you've read any urban fantasy, you have a pretty good idea of where the story goes from there, and that's a problem. The hair-trigger, haunted kick-ass woman with a dark past, the rising threat of monsters, the protagonist's fear that she's a monster herself, and the growing romance with someone who will accept her is old, old territory. I've read versions of this from Laurell K. Hamilton twenty-five years ago to S.L. Huang's ongoing Cas Russell series. To stand out in this very crowded field, a series needs some new twist. Roanhorse's is the deep grounding in Native American culture and mythology. It worked well enough for many people to make it a Hugo, Nebula, and World Fantasy nominee. It didn't work for me.

I partly blame a throw-away line in Mike Kozlowski's review of this book for getting my hopes up. He said in a parenthetical note that "the book is set in Dinétah, a Navajo nation post-apocalyptically resurgent." That sounded great to me; I'd love to read about what sort of society the Diné might build if given the opportunity following an environmental collapse. Unfortunately, there's nothing resurgent about Maggie's community or people in this book. They seem just as poor and nearly as screwed as they are in our world; everyone else has just been knocked down even farther (or killed) and is kept at bay by magical walls. There's no rebuilding of civilization here, just isolated settlements desperate for water, plagued by local warlords and gangs, and facing the added misery of supernatural threats. It's bleak, cruel, and unremittingly hot, which does not make for enjoyable reading.

What Roanhorse does do is make extensive use of Native American mythology to shape the magic system, creatures, and supernatural world view of the book. This is great. We need a wider variety of magic systems in fantasy, and drawing on mythological systems other than Celtic, Greek, Roman, and Norse is a good start. (Roanhorse herself is Ohkay Owingeh Pueblo, not Navajo, but I assume without any personal knowledge that her research here is reasonably good.) But, that said, the way the mythology plays out in this book didn't work for me. It felt scattered and disconnected, and therefore arbitrary.

Some of the difficulty here is inherent in the combination of my unfamiliarity and the challenge of adopting real-world mythological systems for stories. As an SFF reader, one of the things I like from the world-building is structure. I like seeing how the pieces of the magical system fit together to build a coherent set of rules, and how the protagonists manipulate those rules in the story. Real-world traditions are rarely that neat and tidy. If the reader is already familiar with the tradition, they can fill in a lot of the untold back story that makes the mythology feel more coherent. If the author cannot assume that knowledge, they can get stuck between simplifying and restructuring the mythology for easy understanding or showing only scattered and apparently incoherent pieces of a vast system. I think the complaints about the distorted and simplified version of Celtic mythology in a lot of fantasy novels from those familiar with the real thing is the flip-side to this problem; it's worse mythology, but it may be more approachable storytelling.

I'm sure it didn't help that one of the most important mythological figures of this book is Coyote, a trickster god. I have great intellectual appreciation for the role of trickster gods in mythological systems, but this is yet more evidence that I rarely get along with them in stories. Coyote in this story is less of an unreliable friend and more of a straight-up asshole who was not fun to read about.

That brings me to my largest complaint about this novel: I liked exactly one person in the entire story. Grace, the fortified bar owner, is great and I would have happily read a book about her. Everyone else, including Maggie, ranged from irritating to unbearably obnoxious. I was saying the eight deadly words ("I don't care what happens to these people") by page 100.

Here, tastes will differ. Maggie acts the way that she does because she's sitting on a powder keg of unprocessed emotional injury from abuse, made far worse by Neizghání's supposed "friendship." It's realistic that she shuts down, refuses to have meaningful conversations, and lashes out at everyone on a hair trigger. I felt sympathy, but I didn't like her, and liking her is important when the book is written in very immediate present-tense first person. Kai is better, but he's a bit too much of a stereotype, and I have an aversion to supposedly-charming men. I think some of the other characters could have been good if given enough space (Tah, for instance), but Maggie's endless loop of self-hatred doesn't give them any room to breathe.

Add on what I thought were structural and mechanical flaws (the first-person narration is weirdly specific and detail-oriented in a way that felt like first-novel mechanical problems, and the ending is one of the least satisfying and most frustrating endings I have ever read in a book of this sort) and I just didn't like this. Clearly there are a lot of people nominating and voting for awards who think I'm wrong, so your mileage may vary. But I thought it was unoriginal except for the mythology, unsatisfying in the mythology, and full of unlikable characters and unpleasant plot developments. I'm unlikely to read more in this series.

Followed by Storm of Locusts.

Rating: 4 out of 10


The Most Accomplished Skywalker [Diesel Sweeties webcomic by rstevens]

this is a diesel sweeties comic strip

Orange you glad you're not a TIE Fighter pilot?


🎉 Get your party on with the Humble Jackbox Party Bundle 2019!... [Humble Bundle Blog]

🎉 Get your party on with the Humble Jackbox Party Bundle 2019! 🎈

Laugh until your sides ache with this bundle full of party games including The Jackbox Party Pack 2, YOU DON’T KNOW JACK Vol. 4: The Ride, Quiplash, and more!



Savage Love [The Stranger, Seattle's Only Newspaper: Savage Love]

How does consent work when you're dating a sexsomniac? by Dan Savage

I took Molly with my best bud. We wound up cuddling and telling each other everything. We didn't mess around—we're both straight guys—but one of the things I told him is that I would much rather eat pussy than fuck, and one of the things he told me is that he's not at all into eating pussy and pretty much only likes to fuck. I think we'd make a great team: We're both good-looking, athletic dudes and we should find a woman who loves to have her pussy eaten and loves to get fucked. I would go down on her and get her going (and coming), then he steps in and dicks her down (and gets her off one last time). What say you?

Ultimate Package Deal

I would say, "FUCK YES!" if I were a woman, UPD, which I'm not. And while I can't promise you every woman will have the same reaction I did, some women most definitely will.

I'm a male in my late 50s. I went to a urologist for my erection problem, which was helped with ED medication. But orgasms are very hard to achieve, and the ED medication does not seem to make orgasms any easier to have. My girlfriend appreciates the erections, but I would also like to climax. This is very frustrating. Any advice?

Pills Inhibiting Lusty Loads

Tits and dicks both sag with age, which is why push-up bras and push-up pills were invented. And while ED meds do make it easier for a guy to get an erection, they can also make it more difficult for a guy to climax. Upside: You last longer. Downside: You may sometimes have sex without climaxing. Or you can shift your perspective and try to see this downside as a secret upside: Sometimes you get to enjoy sex without climaxing—and next time, when you do climax, you'll blow a bigger load.

I am a bisexual man who's active in the sex-positive community, and I love playing with couples. I was updating my Feeld profile to reflect this desire, but I realized there's no consistent term for a male unicorn. So I listed "Male/Stag/Stallion/Minotaur/Pegasus," various terms I've seen people use. WTF, it shouldn't require a whole line in my profile to run through all the terms! As the person who famously crowdsourced "pegging," I was hoping you could work your magic and get everyone to agree on a nonbinary term that works for all sexual identities.

Having One Reliable Name

What's wrong with "unicorn"? Unicorns—the mythical beasts—can be female, male, or, I suppose, genderless or genderfluid. They can be anything we want them to be, HORN, since we made them up. And while the term first came into use to describe bi women who weren't just open to having sex with an established, opposite-sex couple, but open to committing to a couple and forming a poly triad, there's no reason men and/or nonbinary folks who are interested in the same—hooking up with and forming relationships with established couples—couldn't identify as unicorns, too. But are you a unicorn? People began to call those bi women "unicorns" because they were hard to find and everyone, it seemed, was looking for one. People interested in simply playing with couples aren't anywhere near as hard to find.

I've recently begun to experiment with a few kinky friends. One of them is a voyeur who is super into bukkake. I'd be open to a group bukkake scene, but how do I avoid contracting an STI?

Anonymous Assistant

"On me, not in me" was a safe-sex message crafted in the earliest, darkest, most terrifying days of the AIDS Crisis—and a bukkake scene, which involves multiple men ejaculating on one person, is all about "on me," which makes it relatively safe. So long as you're careful not to get anyone's come in your eyes (ocular gonorrhea, syphilis, and chlamydia are all things) or on your hole(s), you won't have anything to worry about.

Is there a regional difference between people who use the word "come" versus people who use "jizz"? I personally only use the word "come" and rarely hear anyone use "jizz." Do people not use "jizz" or do they just not use it where I live?

Seeking Pretty Unnecessary Niche Knowledge

I've seen maps that track regionalisms like "soda" versus "pop," SPUNK, but I've never seen one tracking "come" versus "jizz." Seems like something a sex-positive linguist might want to jump on.

I'm a 46-year-old man and I recently met a 31-year-old woman. We have not had PIV sex yet, but we have enjoyed several nights of cuddling, spooning, etc. as the relationship progresses. She has made it very clear she wants our first time to be a fairy-tale evening, so we have yet to take things past mild foreplay. Plot twist: After two nights of us sleeping together, I realized she's a sexsomniac. She had no idea until I told her, and she barely believes me. But if I put my arm around her to cuddle when she's asleep, she immediately sexually responds to the skin-to-skin contact. On two occasions she's performed oral on me. I'm not complaining, as this is quite possibly every guy's dream. My question is around consent when dealing with situations like this.

She's My Dream Girl

Unless your new girlfriend gave you permission to initiate skin-to-skin contact in the middle of the night—unless she not only didn't have a problem with the first blowjob you accidentally triggered but explicitly gave you the go-ahead to trigger more—you have already and repeatedly violated her consent. If she doesn't want to do more than cuddle or spoon when she's awake, you shouldn't be manipulating her into blowing you when she's asleep. Most people who are partnered with sexsomniacs prefer not to have sex with their partners when they're unconscious, but some do—with their sexsomniac partner's prior consent. It's a gray area, because an unconscious person can't offer meaningful, enthusiastic, ongoing consent. But unless there are details you've omitted—details like your partner saying, "I blew you in my sleep? Really! Neat! I'm happy to keep doing that!"—stop initiating skin-to-skin contact when she's asleep or stop pretending you care about consent. (You should care about consent and you should stop.)

I've been seeing a guy. We're not really "boyfriend and girlfriend" and we're not exclusive. Last night, he, my best friend, and I were all hanging out in his bedroom. After a while, I went to sleep on the couch in the living room and left them in the bedroom. When I woke up, they were having sex. I had told them both it was okay for them to have sex with each other, but I didn't expect them to do it while I was just in the other room.

Unwelcome Personal Surprise Enraging Totally

You're not exclusive, UPSET, and you gave this guy and your best friend permission to fuck, and... they fucked. But you got something out of it, too: You learned an important lesson. Namely, no one can read your mind. If you give someone permission to do something with someone else sometime, and both those someones are sitting on a bed, you need to bring up any and all additional conditions before falling asleep on the couch in the next room.

On the Lovecast, when your twin brother is a white supremacist:

@fakedansavage on Twitter



Announcing notqmail []

The notqmail project has announced its existence and shipped an initial release. It's a new fork of the venerable qmail mail transport system. "Our first release is informed, conservative, and careful — but bold. It reflects our brand-new team’s rapid convergence on where we’re going and how we’ll get there."

Open source POWER ISA takes aim at Intel and Arm (TechRepublic) []

TechRepublic reports on the opening of the POWER instruction-set architecture. "While the POWER ISA was itself licensable following the creation of the OpenPOWER Foundation in 2013, that came at a cost. Now, the POWER ISA is available royalty-free, inclusive of patent rights." The OpenPOWER Foundation is also being folded into the Linux Foundation.


Philipp Kern: Alpha: Self-service buildd givebacks [Planet Debian]

Builds on Debian's build farm sometimes fail transiently. Sometimes those failures are legitimate flakes, for instance when an in-progress build happens to exhaust its resources because of other builds on the same machine. Until now, you always needed to mail the buildd, wanna-build admins or the Release Team directly in order to get the builds re-queued.

As an alpha trial I implemented self-service givebacks as a web script. As SSO for Debian developers is now a thing, it is trivial to add authentication in a way that a role account can use to act on your behalf. While at work this would all be an RPC service, I figured that a little CGI script would do the job just as well. So lo and behold, accessing<package>&suite=<suite>&arch=<arch> with the right parameters set:

You are authenticated as pkern. ✓
Working on package fife, suite sid and architecture mipsel. ✓
Package version 0.4.2-1 in state Build-Attempted, can be given back. ✓
Successfully given back the package. ✓

Note that you need to be a Debian developer with a valid SSO client certificate to access this service.
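The flow the script walks through, authenticate the SSO user, check the package's wanna-build state, and re-queue only builds that ended in Build-Attempted, could be sketched as follows. This is a minimal illustration only: the give_back function, the state string, and the requeue callback are hypothetical stand-ins, not the real wanna-build interface.

```python
ALLOWED_STATE = "Build-Attempted"

def give_back(user, package, suite, arch, state, requeue):
    """Return the status lines such a web script might print.

    requeue is a callback standing in for whatever actually hands the
    build back to wanna-build.
    """
    lines = [
        f"You are authenticated as {user}. \u2713",
        f"Working on package {package}, suite {suite} and architecture {arch}. \u2713",
    ]
    if state != ALLOWED_STATE:
        # Failed builds are marked manually and still need an admin.
        lines.append(f"Package is in state {state}, cannot be given back. \u2717")
        return lines
    lines.append(f"Package in state {state}, can be given back. \u2713")
    requeue(package, suite, arch)  # re-queue the build
    lines.append("Successfully given back the package. \u2713")
    return lines
```

A request for a package in the Failed state would stop before the requeue step, matching the behavior described below.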

So why do I say alpha? We still expect Debian developers to act responsibly when looking at build failures. A lot of the time there is a legitimate bug in the package, and the last thing we would like to see as a project is someone addressing flakiness by continuously retrying a build. Access to this service is logged. Most people coming to us today did their due diligence and tried reproducing the issue on a porterbox. We still expect that to happen, but this aims to cut the round-trip time until an admin gets around to processing your request, which has been longer than necessary recently. We will audit the logs and see if particular packages stand out.

There can also still be bugs. Please file them against when you see them. Please include a copy of the output, which includes validation and important debugging information when requests are rejected. Also this all only works for packages in Build-Attempted. If the build has been marked as Failed (which is a manual process), you still need to mail us. And lastly the API can still change. Luckily the state change can only happen once, so it's not much of a problem for the GET request to be retried. But it should likely move to POST anyhow. In that case I will update this post to reflect the new behavior.

Thanks to DSA for making sure that I run the service sensibly using a dedicated role account as well as WSGI and doing the work to set up the necessary bits.

Tuesday, 20 August


Link [Scripting News]

I've heard it said that Reagan introduced the trickle-down theory of economics, but apparently that's wrong -- it was an issue in the election of 1896, which McKinley, the Repub, won.

Link [Scripting News]

Why would anyone get excited about what Susan Sarandon thinks? Her candidate Bernie is lookin good, but will anyone at all be persuaded by Susan Sarandon? She's had an incredible career for sure, but there are lots of stars at her level. Let's find out what Geena Davis thinks, or Brad Pitt or Kevin Costner.

Link [Scripting News]

Getting pretty close with the nightly-email version of Scripting News. Lots of moving interconnected parts.


Cox Asks Court to Sanction Labels Over Destroyed Tracking Evidence [TorrentFreak]

Last year, Cox ended its piracy liability lawsuit with music company BMG, agreeing to a “substantial settlement.”

While that put the ISP in the clear, Cox is still caught up in another lawsuit filed by a group of major music companies, all members of the RIAA.

The music outfits, including Capitol Records, Warner Bros, and Sony Music, argue that Cox categorically failed to terminate repeat copyright infringers and that the ISP substantially profited from this ongoing ‘piracy’ activity. All at the expense of the record labels and other rightsholders.

Over the past several months, both parties have conducted discovery and the case is currently scheduled to go to trial in December. While there were talks of a potential settlement a few weeks ago, things look rather different now.

Last week we reported that the ISP canceled a scheduled settlement discussion. As a result, the music outfits called for sanctions, accusing the ISP of gamesmanship. Now, it’s Cox’s turn to ask for sanctions, this time with a formal request.

Cox submitted a motion for discovery sanctions at the Virginia federal court, where it accuses the plaintiffs of relying on unsubstantiated evidence.

The concerns relate to the piracy evidence which the music companies are relying on. This is the data that was used to send copyright infringement notices to Cox, pointing out how its subscribers allegedly shared infringing material. As such, it is the basis of the “repeat infringer” claims that are central to the lawsuit.

The data in question was collected by the anti-piracy firm MarkMonitor, which keeps a close eye on global BitTorrent activity. For the lawsuit, these infringement allegations were summarized in two spreadsheets. However, Cox notes that the underlying evidence has since been deleted.

“MarkMonitor failed to retain critical portions of this evidence, and the document that Plaintiffs intend to rely on is, at best, a partial and inaccurate summary of these analyses,” Cox informs the Court.

As such, Cox requests sanctions. Specifically, it asks the court for a ruling that the piracy evidence in question can’t be used to back up any claims.

“Because Plaintiffs’ agent destroyed the underlying data, leaving no way to assess the accuracy of this summary, Cox respectfully requests that the Court enter discovery sanctions against Plaintiffs in the form of a preclusion order prohibiting Plaintiffs from relying on the incomplete and unreliable MarkMonitor evidence.”

According to Cox, MarkMonitor deleted data which showed that claimed copyright infringements were indeed linked to copyrighted files. These data concern the “matching” logs it received from the fingerprinting service Audible Magic.

During discovery, Cox learned that MarkMonitor used data from Audible Magic to reach its infringement conclusions. A subsequent subpoena explained how this worked, and a deposition of Audible Magic later revealed that MarkMonitor deleted the transaction logs.

“Ultimately, Cox learned in a deposition on the last day of discovery that MarkMonitor did not produce the transaction logs at issue or the relevant database because it had destroyed them,” Cox informs the Court.

The deleted data was crucial, according to the ISP, as it was the only way to prove that the alleged infringements detailed in the spreadsheet are correct. In addition, the routine deletion of that data "strongly suggests" that MarkMonitor's spreadsheet is inaccurate.

“The destroyed Audible Magic data was undeniably material and foundational to the MarkMonitor Spreadsheet,” Cox notes.

The ISP backs up its ‘inaccuracy’ claims in redacted parts of its memorandum, mentioning that it was a “coin flip” whether or not a claimed infringement actually took place.

Coin flip

Cox argues that the record labels withheld unfavorable information, and so it sees no other option than to scrap the spreadsheets as evidence. In their current form, they can't be backed up.

“Because Plaintiffs failed to preserve and produce the best and most complete—indeed, the only—evidence of the alleged direct infringements, the Court should preclude Plaintiffs from relying on the ‘236 and ‘431 Spreadsheets, and any derivative documents, which are merely incomplete and inaccurate summaries of what the data would have shown,” Cox concludes.

If the Court agrees with Cox and excludes the piracy data as evidence, the case could be severely impacted.

Interestingly, this isn’t the first time that Cox has complained about spoliation of evidence. The company did the same a few years ago in the BMG case, after it found out that anti-piracy company Rightscorp destroyed older versions of its piracy tracking code.

At the time the Court ruled that sanctions were indeed appropriate. However, the copyright infringement claims were not disregarded and Cox’s request to dismiss the case in its entirety was denied.

A copy of Cox’s memorandum in support of the motion for discovery sanctions and to preclude the MarkMonitor evidence is available here (pdf).



Elizabeth Warren has a plan to reform "Heirs' Property," which allows wealthy white property developers to steal Black family homes [Cory Doctorow – Boing Boing]

Heirs' property is a relic of post-Reconstruction law, which allows white developers to exploit the diffuse ownership of Black family homes to steal them and kick out the people who live there.

Following on an in-depth Propublica investigation into heirs' property title-thefts, Democratic presidential hopeful Elizabeth Warren has included heirs' property reform in her ambitious agriculture plan.

In her plan, Warren pledged to end policies that have perpetuated this discrimination and to help these families of color preserve their ownership, build wealth and gain access to sustainable livelihoods. Specifically, she promised to fully fund a relending program laid out in the 2018 Farm Bill, which would provide loans to heirs’ property owners to clear their titles and consolidate ownership. She announced that she would prioritize funding lending organizations in states that have enacted the Uniform Partition of Heirs’ Property Act, legislation that expands heirs’ rights to their property.

Warren said she plans to further address the wider needs of heirs’ property owners, who struggle to qualify for U.S. Department of Agriculture loans, disaster relief and housing assistance when they lack a clear title. Her plan would allow heirs’ property owners who want to access farm loans to present alternative documentation to the USDA. In addition, those in need of other forms of federal assistance could use such documentation to access loans through the Federal Emergency Management Agency and the U.S. Department of Housing and Urban Development.

Elizabeth Warren Announces Plans to Help Heirs’ Property Owners [Lizzie Presser/Propublica]

Adding pink seaweed to cow feed eliminates their methane emissions [Cory Doctorow – Boing Boing]

One of the major contributors to greenhouse gases is the methane that cows belch up as they break down cellulose, but five years ago, research from Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) found that adding small amounts of a pink seaweed called Asparagopsis to cows' diets eliminated the gut microbes responsible for methane production and "completely knocks out" cows' methane emissions.

Asparagopsis grows on the coast of Australia, and cows actually seek it out and eat it without encouragement. Replacing 2% of cows' feed with Asparagopsis is sufficient to end their methane production.

Researchers at the University of the Sunshine Coast are trying to ramp up Asparagopsis production to scale to meet a potential global market for it.

The USC team is working at the Bribie Island Research Centre in Moreton Bay to learn more about how to grow the seaweed species, with the goal of informing a scale-up of production that could supplement cow feed on a national—and even global scale.

“This seaweed has caused a lot of global interest and people around the world are working to make sure the cows are healthy, the beef and the milk are good quality,” Dr. Paul said.

“That’s all happening right now. But the one missing step, the big thing that is going to make sure this works at a global scale, is to make sure we can produce the seaweed sustainably.

Burp-free cow feed drives seaweed science at USC [University of the Sunshine Coast]

(via Kottke)


A free/open tool for making XKCD-style "hand-drawn" charts [Cory Doctorow – Boing Boing]

Tim Qian, a "full stack developer and open source activist," has published chart.xkcd, a free/open tool that lets you create interactive, "hand-drawn" charts in the style of XKCD comics. It's pretty fabulous! (via Four Short Links)

A deep dive into how parasites hijack our behavior and how we evolved to resist them [Cory Doctorow – Boing Boing]

On Slate Star Codex (previously), Scott Alexander breaks down Invisible Designers: Brain Evolution Through the Lens of Parasite Manipulation, Marco Del Giudice's Quarterly Review of Biology paper that examines the measures that parasites take to influence their hosts' behaviors, and the countermeasures that hosts evolve to combat them.

Diligent readers will know that parasites manage some incredible feats of behavior modification (one of Scott Westerfeld's best novels looks at vampirism as a form of parasitic behavior modification and it's just great).

It's a truism that the predator carves the prey and the prey carve the predator. Del Giudice's investigations into parasite tactics and countermeasures lead him to hypothesize that perhaps human variation is driven by responses to parasites' attempts at behavior modification (for example, humans have a lot of variability in our major histocompatibility complex genes, which mean that our immune systems can readily distinguish between our cells and invasive ones).

It's a super-interesting paper, and Alexander's breakdown is a great path into it.

Sixth, you use antiparasitic drugs as neurotransmitters. This is the kind of murderous-yet-clever solution I expect of evolution, and it does not disappoint. Several neurotransmitters, including neuropeptide Y, neurokinin A, and substance P are pretty good antimicrobials. The assumption has always been that the body kills two birds with one stone, getting its signaling done and also having some antimicrobials around to take out stray bacteria. But Del Giudice proposes that this is to prevent parasites from hijacking the signal; any parasite that tried to produce or secrete an antiparasitic drug would die in the process.

Dopamine is mildly toxic. The body is usually pretty good at protecting itself, but the mechanism fails under stress; this is why too much methamphetamine rots your brain. Why would you use a toxic chemical as a neurotransmitter? For the same reason you would use antiparasitic drugs – because you want to kill anything smaller than you that tries to synthesize it.

People always talk about the body as a beautiful well-oiled machine. But sometimes the body communicates with itself by messages written with radioactive ink on asbestos-laced paper, in the hopes that it’s killing itself slightly more slowly than it’s killing anyone who tries to send it fake messages. Honestly it is a miracle anybody manages to stay alive at all.

All these features together are a pretty effective way of dealing with parasite manipulation. There are a few parasites that can manipulate human behavior – rabies definitely, toxoplasma maybe – but overall we are remarkably safe.

Maybe Your Zoloft Stopped Working Because A Liver Fluke Tried To Turn Your Nth-Great-Grandmother Into A Zombie [Scott Alexander/Slate Star Codex]

Invisible Designers: Brain Evolution Through the Lens of Parasite Manipulation [Marco Del Giudice/Quarterly Review of Biology]

(Image: Yale Rosen, CC BY-SA)

First detailed look at Poland's challenge to the EU Copyright Directive [Cory Doctorow – Boing Boing]

After the EU Copyright Directive passed with a slim majority that only carried because some MEPs got confused and pressed the wrong button, the government of Poland filed a legal challenge with the European Court of Justice, arguing that the Directive -- and its rule requiring that all online discourse be filtered by black-box algorithms that will block anything that might be infringing -- violated both Polish and European law.

Now, the first official documents from that court challenge have been made public. As expected, the challenge asks the court to rule on whether filters are "proportional and necessary" to the goal of preventing copyright infringement.

Poland has asked the court, at a minimum, to strike parts b) and c) of Article 17 (originally Article 13). These are the rules that require online providers to make "best efforts to ensure the unavailability" of works that someone, somewhere has claimed as their copyrighted work; and to make "best efforts to prevent their future uploads."

Poland has anticipated that the court may find that removing only these parts will prove difficult, and so it has proposed that, as an alternative, the court could strike down all of Article 17.

The Republic of Poland claims specifically that the imposition on online content-sharing service providers of the obligation to make best efforts to ensure the unavailability of specific works and other subject matter for which the rightholders have provided the service providers with the relevant and necessary information (point (b) of Article 17(4) of [EU Copyright] Directive 2019/790) and the imposition on online content-sharing service providers of the obligation to make best efforts to prevent the future uploads of protected works or other subject-matter for which the rightsholders have lodged a sufficiently substantiated notice (point (c), in fine, of Article 17(4) of Directive 2019/790) make it necessary for the service providers -- in order to avoid liability -- to carry out prior automatic verification (filtering) of content uploaded online by users, and therefore make it necessary to introduce preventive control mechanisms. Such mechanisms undermine the essence of the right to freedom of expression and information and do not comply with the requirement that limitations imposed on that right be proportional and necessary.

Case C-401/19: Action brought on 24 May 2019 — Republic of Poland v European Parliament and Council of the European Union [EUR-LEX]

It's On: Details Emerge Of Polish Government's Formal Request For Top EU Court To Throw Out Upload Filters [Glyn Moody/Techdirt]



Link [Scripting News]

All there is to read about Quentin Tarantino's new movie.

From search-engine to walled garden: majority of Google searches do not result in a click [Cory Doctorow – Boing Boing]

As tech began to concentrate, two dominant strategies emerged: Google's (instrument the whole internet for surveillance, which means that you don't have to lock people in in order to spy on them) and Apple's (lock everyone into a walled garden, and extract revenue by refusing to let them out again).

But then along came Facebook, whose strategy is to lock everyone in and spy on them.

Now that Facebook has blazed that trail, everyone else is slowly turning into Facebook. Apple is using its walled garden to turn into a surveillance company and Google is likewise turning into a walled garden.

Google's original pitch to the rest of the web was, "We deliver traffic: people search here for answers, and we send them back to you to get them." But over time, and for a variety of reasons (not all of them bad, see e.g., "Not sending people to sites that have malvertising"), the company has been trying to serve the answer to your question with no further clicking required.

Now, that strategy has hit a tipping point. According to analytics from Jumpshot, the majority of Google searches no longer end with a click. On SparkToro, Rand Fishkin calls this "a milestone in Google’s evolution from search engine to walled-garden."

Much of Fishkin's post is about how Google's reps refused to give a clear answer to Congress when questioned on this subject (which is true), but the real lesson here is that firms do not practice forbearance: once a company dominates its market, whatever odious measures it took off the table to attain its dominance are reconsidered. Transhuman, immortal colony organisms do not tolerate limits on effective growth strategies over the long term. In the absence of competition, they will gradually become their own worst selves: "The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which."

While you're reading this, cast your mind back to 2012: "Sergey Brin on the existential crisis of the net: walled gardens + snooping governments."

June (as shown at the top of this post) is when zero-click searches in browsers passed 50%, but the pie chart above shows that even before that, Google was sending a huge portion of search clicks to their own properties (~6% of queries and ~12% of clicks). Those properties include YouTube, Maps, Android, Google’s blog, subdomains of, and a dozen or so others (full list here).

Maybe Google’s websites are ranking exclusively because they’re the best result, but if Congress is asking questions about whether a monopoly is potentially abusing its market dominance in one field to unfairly compete in another, I’ve got something else they’ll want to see. It’s a chart of where searches happened on major web properties in Q2, and as you can see, there’s no competition.

Google is almost certainly even more dominant than the chart above suggests. That’s because mobile apps, which Jumpshot doesn’t currently measure, aren’t included — this is just browser-based search data. The Google Maps App, Google Search App, and YouTube are installed on almost every mobile device in the US, and likely have so much usage that, if their search statistics were included, Google’s true market share would be 97%+.

(Image: Jumpshot)

(via Four Short Links)


IBM open sources Power chip instruction set [OSnews]

It has been a long time coming, and it might have been better if this had been done a decade ago. But with a big injection of open source spirit from its acquisition of Red Hat, IBM is finally taking the next step and open sourcing the instruction set architecture of its Power family of processors. Opening up architectures that have fallen out of favour seems to be all the rage these days. Good news, of course, but a tad late.

The STC Executel 3910 [OSnews]

Standard Telephone & Cable made quite a few phones for British Telecom in the 70s/80s that most people will recognise instantly even though they didn’t actually know who made them. Probably like me they thought that BT made all their own stuff which I later found out was completely wrong but hey. In the early 80s they branched out into computerised telephones with this lovely looking beast, the Executel 3910. Fellow collector Tony brought this one to my attention and on seeing the pictures I said ‘what the hells is THAT!’ and bought it. It’s a desk phone, pure and simple, but massively computerised with an AMD8085 processor and 32K RAM plus a 5″ monitor for displaying diary and phonebook entries AND, and it’s a big AND, PRESTEL access! A recent video by Techmoan – who bought a working model – brought this device to my attention, and I instantly fell in love with it. This is an incredible piece of engineering and forward-thinking.


Gawker's new owners demand right to search journalists, ban encrypted email and institute dress code [Cory Doctorow – Boing Boing]

After Deadspin's Laura Wagner published an incredible, brave, detailed look at how her new private equity masters -- Jim Spanfeller/Great Hill Partners -- were running Gawker now that they'd acquired it from Univision, the company (now called "G/O Media") struck back.

Wagner's piece painted a picture of a dysfunctional workplace where cronyism and buck-passing were the order of the day, where women who'd earned promotions were leapfrogged by bros who came in with the new boss and failed spectacularly to do their jobs, then blamed the hard working people they'd stepped over for their failures.

In the wake of Wagner's piece, Deadspin's editor in chief, Megan Greenwell, resigned over the way that her bosses had handled the event, saying that she'd been "repeatedly undermined, lied to, and gaslit in my job."

The same week that Greenwell resigned, the company circulated a draft staff handbook that included the right to search employees' "personal vehicles, parcels, purses, handbags, backpacks, briefcases, lunch boxes" and to review employee emails, tweets, and communications.

The same rule prohibits the use of encrypted email systems that might frustrate employers' ability to snoop on journalists. As Mike Masnick points out, this is bonkers: telling top-notch journalists who deal with confidential sources who face retaliation and even physical violence for going public that they're not allowed to use cryptography isn't just stupid, it's malpractice. It could literally get someone killed.

Indeed, in light of the monumental stupidity and pig-ignorance behind this policy, the company's ridiculous new dress code looks positively reasonable: "Employees must arrive between 9:30 a.m. and 5:30 p.m., according to the handbook, and are required to wear 'smart casual' attire. 'Offensive' logos or 'sweatpants, exercise pants, Bermuda shorts, short shorts, biker shorts, Mini-skirts, beach dresses, midriff tops, and halter tops' are all banned."

G/O's writers are unionized, which means that bosses can't just unilaterally impose this kind of outrageous shit, so there's hope yet.

The G/O handbook declares that the company can search employees’ “personal vehicles, parcels, purses, handbags, backpacks, briefcases, lunch boxes,” review all electronic communications made on company property, and disclose those messages to others if the company deems it appropriate. The new rules also strangely allow the company to access reporters’ “tweets” and bars employees from using encrypted email programs—a common tool journalists often use to protect highly confidential sources.

This Is How Things Work Now At G/O Media [Laura Wagner/Deadspin]

Deadspin Editor Quits, Rails Against Bosses: ‘I’ve Been Repeatedly Lied To and Gaslit’ [Maxwell Tani/The Daily Beast]

Gizmodo Media's Clueless New Owners Tell Reporters They Can't Use Encrypted Email Any More [Mike Masnick/Techdirt]

How "meritocracy" went from a joke to a dogma, and destroyed the lives of everyone it touched [Cory Doctorow – Boing Boing]

The term "meritocracy" was coined in Michael Young's satirical 1958 novel, "The Rise of the Meritocracy," where it described a kind of self-delusion in which rich people convinced themselves that their wealth was evidence of their moral superiority; it's well-documented that a belief in meritocracy makes you act like an asshole, and also makes you incapable of considering how much of your good fortune is attributable to luck; now, in a new book, The Meritocracy Trap: How America's Foundational Myth Feeds Inequality, Dismantles the Middle Class, and Devours the Elite, Yale Law professor Daniel Markovits documents how a belief in meritocracy also makes rich people totally miserable.

Meritocracy casts our society as a finite game with winners and losers, and its circular reasoning ("the best people succeed, therefore anything you do to succeed makes you better than the people who lose") creates an endless drive to destroy yourself with workaholism and to ruin the lives of the people around you with cheating (think of the parents who ruined their kids' lives by bribing them into top institutions).

Meritocracy also turns you into a eugenicist, because the only way to reconcile your ability to grift your kids to the front of the line even though they've done nothing to deserve pride of place is to invest in a belief in "good blood": you succeeded because of your in-born meritocratic grit and gumption, so your kids deserve to succeed because they inherited that trait from you. In other words, "meritocracy" starts in a belief that deeds are evidence of worth, but ends up being a belief that blood is your evidence of worth: a doctrine that is supposed to be anti-aristocratic ends up reinventing the aristocracy (this is why Whuffie is a terrible idea).

From his perch at Yale Law School (an institution that deliberately reformed itself to end hereditary admissions preference in favor of a "meritocratic" process), Markovits has a front-row seat for the ways that the meritocracy delusion ruins life for everyone, not just the losers. As he says: "The elite should not—they have no right to—expect sympathy from those who remain excluded from the privileges and benefits of high caste. But ignoring how oppressive meritocracy is for the rich is a mistake. The rich now dominate society not idly but effortfully."

A person who extracts income and status from his own human capital places himself, quite literally, at the disposal of others—he uses himself up. Elite students desperately fear failure and crave the conventional markers of success, even as they see through and publicly deride mere “gold stars” and “shiny things.” Elite workers, for their part, find it harder and harder to pursue genuine passions or gain meaning through their work. Meritocracy traps entire generations inside demeaning fears and inauthentic ambitions: always hungry but never finding, or even knowing, the right food.

The elite should not—they have no right to—expect sympathy from those who remain excluded from the privileges and benefits of high caste. But ignoring how oppressive meritocracy is for the rich is a mistake. The rich now dominate society not idly but effortfully. The familiar arguments that once defeated aristocratic inequality do not apply to an economic system based on rewarding effort and skill. The relentless work of the hundred-hour-a-week banker inoculates her against charges of unearned advantage. Better, then, to convince the rich that all their work isn’t actually paying off.

They may need less convincing than you might think. As the meritocracy trap closes in around elites, the rich themselves are turning against the prevailing system. Plaintive calls for work/life balance ring ever louder. Roughly two-thirds of elite workers say that they would decline a promotion if the new job demanded yet more of their energy. When he was the dean of Stanford Law School, Larry Kramer warned graduates that lawyers at top firms are caught in a seemingly endless cycle: Higher salaries require more billable hours to support them, and longer hours require yet higher salaries to justify them. Whose interests, he lamented, does this system serve? Does anyone really want it?

How Life Became an Endless, Terrible Competition [Daniel Markovits/The Atlantic]

The Meritocracy Trap: How America's Foundational Myth Feeds Inequality, Dismantles the Middle Class, and Devours the Elite [Daniel Markovits/Penguin]

(via Naked Capitalism)


Dirty tricks 6502 programmers use [OSnews]

This post recaps some of the C64 coding tricks used in my little Commodore 64 coding competition. The competition rules were simple: make a C64 executable (PRG) that draws two lines to form the below image. The objective was to do this in as few bytes as possible. These people are wizards.
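The post itself is about hand-optimized 6502 machine code, but the core task the entries solve is rasterizing a line with integer-only arithmetic. As a rough, hedged illustration of that underlying algorithm (not the C64 code from the competition — the function name and coordinates here are purely illustrative), here is Bresenham's line algorithm sketched in Python:

```python
def bresenham(x0, y0, x1, y1):
    """Integer-only line rasterization (Bresenham's algorithm) --
    the kind of loop size-golfed 6502 entries typically implement."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1  # step direction along x
    sy = 1 if y0 < y1 else -1  # step direction along y
    err = dx + dy              # running error term
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:           # error pulls us horizontally
            err += dy
            x0 += sx
        if e2 <= dx:           # error pulls us vertically
            err += dx
            y0 += sy
    return points

# Two crossing lines on a tiny 8x8 "screen", loosely echoing the
# two-line image from the competition (coordinates are made up).
line_a = bresenham(0, 0, 7, 7)
line_b = bresenham(0, 7, 7, 0)
```

The fun of the competition, of course, is that on a 6502 even this simple loop can be shaved down byte by byte with tricks a high-level sketch can't express.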

I miss Microsoft Encarta [OSnews]

Most folks at Microsoft don’t realize that Encarta exists and is used TODAY all over the developing world on disconnected or occasionally connected computers. (Perhaps Microsoft could make the final version of Encarta available for a free final download so that we might avoid downloading illegal or malware infested versions?) What are your fond memories of Encarta? If you’re not of the Encarta generation, what’s your impression of it? Had you heard or thought of it? I have vague memories of using Encarta back in the early ’90s, but I was much more interested in technology and games as a young kid. These days I tend to read a lot of Wikipedia pages every day, so had I been my current age 25 years ago, I can definitely see myself using Encarta a lot. In any event, definitely neat that the final version of Encarta – from 2009 – runs just fine on Windows 10.


Read: Jeannette Ng's Campbell Award acceptance speech, in which she correctly identifies Campbell as a fascist and expresses solidarity with Hong Kong protesters [Cory Doctorow – Boing Boing]

Last weekend, Jeannette Ng won the John W Campbell Award for Best New Writer at the 2019 Hugo Awards at the Dublin Worldcon; Ng's acceptance speech calls Campbell, one of the field's most influential editors, a "fascist" and expresses solidarity with the Hong Kong pro-democracy protesters.

I am a past recipient of the John W Campbell Award for Best New Writer (2000) as well as a recipient of the John W Campbell Memorial Award (2009). I believe I'm the only person to have won both of the Campbells, which, I think, gives me unique license to comment on Ng's remarks, which have been met with a mixed reception from the field.

I think she was right -- and seemly -- to make her remarks. There's plenty of evidence that Campbell's views were odious and deplorable. For example, Heinlein apologists like to claim (probably correctly) that his terrible, racist, authoritarian, eugenics-inflected yellow peril novel Sixth Column was effectively a commission from Campbell (Heinlein based the novel on one of Campbell's stories). This seems to have been par for the course for JWC, who liked to micro-manage his writers: Campbell also leaned hard on Tom Godwin to kill the girl in "Cold Equations" in order to turn his story into a parable about the foolishness of women and the role of men in guiding them to accept the cold, hard facts of life.

So when Ng held Campbell "responsible for setting a tone of science fiction that still haunts the genre to this day. Sterile. Male. White. Exalting in the ambitions of imperialists and colonisers, settlers and industrialists," she was factually correct.

Not just factually correct: also correct to be saying this now. Science fiction (like many other institutions) is having a reckoning with its past and its present. We're trying to figure out what to do about the long reach that the terrible ideas of flawed people (mostly men) had on our fields. We're trying to reconcile the legacies of flawed people whose good deeds and good art live alongside their cruel, damaging treatment of women. These men were not aberrations: they were following an example set from the very top and running through fandom, to the great detriment of many of the people who came to fandom for safety and sanctuary and community.

It's not a coincidence that one of the first organized manifestations of white nationalism as a cultural phenomenon was within fandom, and while fandom came together to firmly repudiate its white nationalist wing, these assholes weren't (all) entryists who showed up to stir trouble in someone else's community. The call (to hijack the Hugo award) was coming from inside the house: these guys had been around forever, and we'd let them get away with it, in the name of "tolerance," even as they were chasing women, queer people, and racialized people out of the field.

Those same Nazis went on to join Gamergate, then take up on /r/The_Donald, and they were part of the vanguard of the movement that put a boorish, white supremacist grifter into the White House.

The connection between the tales we tell about ourselves and our pasts and futures has a real, direct outcome on the future we arrive at. White supremacist folklore, including the ecofascist doctrine that says we can only avert climate change by murdering all the brown people, comes straight out of sf folklore, where it's completely standard for every disaster to be swiftly followed by an underclass mob descending on their social betters to eat and/or rape them (never mind the actual way that disasters go down).

When Ng took the mic and told the truth about Campbell's legacy, she wasn't downplaying his importance: she was acknowledging it. Campbell's odious ideas matter because he was important, a giant in the field who left an enduring mark on it. No one disagrees about that. What we want to talk about today is what that mark is, and what it means.

Scalzi points out:

There are still people in our community who knew Campbell personally, and many many others one step removed, who idolize and respect the writers Campbell took under his wing. And there are people — and once again I raise my hand — who are in the field because the way Campbell shaped it as a place where they could thrive. Many if not most of these folks know about his flaws, but even so it’s hard to see someone with no allegiance to him, either personally or professionally, point them out both forcefully and unapologetically. They see Campbell and his legacy abstractly, and also as an obstacle to be overcome. That’s deeply uncomfortable.

He's not wrong, and the people who counted Campbell as a friend are legitimately sad to confront the full meaning of his legacy. I feel for them. It's hard to reconcile the mensch who was there for you and treated his dog with kindness and doted on his kids with the guy who alienated and hurt people with his cruel dogma.

Here's the thing: neither of those facets of Campbell cancels out the other. It's not true that any amount of good deeds done for some people can repair the harms he visited on others; equally, none of those harms cancel out the kindnesses he did for the people he was kind to.

Life is not a ledger. Your sins can't be paid off through good deeds. Your good deeds are not cancelled by your sins. Your sins and your good deeds live alongside one another. They coexist in superposition.

You (and I) can (and should) atone for our misdeeds. We can (and should) apologize for them to the people we've wronged. We should do those things, not because they will erase our misdeeds, but because the only thing worse than being really wrong is not learning to be better.

People are flawed vessels. The circumstances around us -- our social norms and institutions -- can be structured to bring out our worst natures or our best. We can invite Isaac Asimov to our cons to deliver a lecture on "The Power of Posterior Pinching" in which he literally advises men on how to grope the women in attendance, or we can create and enforce a Code of Conduct that would bounce anyone, up to and including the Con Chair and the Guest of Honor, who tried a stunt like that.

We, collectively, through our norms and institutions, create the circumstances that favor sociopathy or generosity. Sweeping bad conduct under the rug isn't just cruel to the people who were victimized by that conduct: it's also a disservice to the flawed vessels who are struggling with their own contradictions and base urges. Creating an environment where it's normal to do things that -- in 10 or 20 years -- will result in your expulsion from your community is not a kindness to anyone.

There are shitty dudes out there today whose path to shitty dudehood got started when they watched Isaac Asimov deliver a tutorial on how to grope women without their consent and figured that the chuckling approval of all their peers meant that whatever doubts they might have had were probably misplaced. Those dudes don't get a pass because they learned from a bad example set by their community and its leaders -- but they might have been diverted from their path to shitty dudehood if they'd had better examples. They might not have scarred and hurt countless women on their way from the larval stage of shittiness to full-blown shitlord, and they themselves might have been spared their eventual fate, of being disliked and excluded from a community they joined in search of comradeship and mutual aid. The friends of those shitty dudes might not have to wrestle with their role in enabling the harm those shitty dudes wrought.

Jeannette Ng's speech was exactly the speech our field needs to hear. And the fact that she devoted the bulk of it to solidarity with the Hong Kong protesters is especially significant, because of the growing importance of Chinese audiences and fandom in sf, which exposes writers to potential career retaliation from an important translation market. There is a group of (excellent, devoted) Chinese fans who have been making noises about a Chinese Worldcon for years, and speeches like Ng's have to make you wonder: if that ever comes to pass, will she be able to get a visa to attend?

Back when the misogynist/white supremacist wing of SF started to publicly organize to purge the field of the wrong kind of fan and the wrong kind of writer, they were talking about people like Ng. I think that this is ample evidence that she is in exactly the right place, at the right time, saying the right thing.

And I am so proud to be part of this. To share with you my weird little story, an amalgam of all my weird interests, so much of which has little to do with my superficial identities and labels.

But I am a spinner of ideas, of words, as Margaret Cavendish would put it.

So I need to say, I was born in Hong Kong. Right now, in the most cyberpunk city in the world, protesters struggle with the masked, anonymous stormtroopers of an autocratic Empire. They have literally just held the largest illegal gathering in their history. As we speak they are calling for a horological revolution in our time. They have held laser pointers to the skies and tried to impossibly set alight the stars. I cannot help but be proud of them, to cry for them, and to lament their pain.

I’m sorry to drag this into our fantastical worlds; you’ve given me a microphone and this is what I felt needed saying.

John W. Campbell, for whom this award was named, was a fascist. [Jeannette Ng/Medium]

(Image: @JeannetteNg)

(via Whatever)


My 2019 Worldcon Experience [Whatever]

It was great!

Okay, that’s it, thank you for coming.

Oh, wait, you wanted details? Well, I mean, okay, I guess I can do that. In no particular order:

* Krissy and I actually started our trip a few days before the Worldcon started. This year the Worldcon was in Dublin, Ireland, and neither of us had been either in Dublin or in Ireland, and we wanted to make sure we had time to be tourists and see the city before we basically confined ourselves to a single convention center for several days.

And we totally did the tourist thing! We saw the Book of Kells! We visited the crypts at Christchurch Cathedral! We ate adequate Irish food at a tourist trap in Temple Bar! We visited the Guinness Storehouse, which is like the Willy Wonka tour with beer! And so on! I really enjoyed Dublin because, at least where we were, it’s very walkable and pleasant to get around in. Krissy also managed to get outside of the city when she and some gal pals took a day trip to the shore. It was all very excellent.

So, Dublin and Ireland: A+++, very much enjoyed, plan to enjoy again at some point in the future. It had always been a dream of mine to visit the country, and this initial trip to the Emerald Isle did not disappoint. I have hundreds of pictures; I will probably put some up in a separate post when I get the time.

* My personal Worldcon was also delightful. I did not overschedule myself, confining myself to a single event per day (sort of; more on that later), so that left lots of time for hanging out with friends, which is probably the major reason I come to conventions these days. I, like most writers, spend most of my time looking into a screen, so bulk-loading friendships at conventions is kind of an actual thing. I got to see friends from the US, of course, who had made the trip, but I also got to see friends from Europe and elsewhere whom I see less frequently, including some I've not seen for years. It was fabulous.

Of my own events, I have to say the highlight was the dance I DJed on Saturday. As ever, I wondered whether people would show up — but they did, and from 10pm, when the dance started, to 1am when it ended, the dance floor was never empty. There’s something delightful about a whole, fairly large room of nerds getting sweaty to the greatest dance hits of the last several decades. As for myself, it will not surprise you that in addition to being the DJ I also danced, pretty much for the entire three hours. Two things about that: I dance almost exactly the way I did when I was in my early 20s, and also, I am now 50, so I’m still feeling it on Tuesday. I could barely walk down the stairs this morning. I think if I’m going to keep doing this dance thing — and I will — I should probably stretch more than I do.

I also read from two upcoming books, The Last Emperox and A Very Scalzi Christmas, and both were well received. I got help in reading from the second by my very good friend Yanni Kuznia, who is also the COO of Subterranean Press, which is publishing the book. She was delightful reading as an exasperated boss having to deal with an enthusiastic but clueless subordinate, read by me. If you know either of us, you may realize that this was typecasting. She was wonderful.

I also did an interview with Diane Duane, who was one of the guests of honor at the Worldcon. Diane and I had known each other online for more than a decade but the Worldcon was the first time we’d ever met in real life. I’m very happy to say we got on like the proverbial house on fire, and that Diane is simply delightful, both to have drinks with and then to have a conversation with in front of a couple hundred people. It was, in fact, pretty effortless, and also, the phrase “I have two highly polished uranium spheres, watch what happens when I clack them together” may have come up at some point in the conversation. You had to be there for that, but it brought the house down.

* Overall I thought the Dublin 2019 Worldcon was well-run and enjoyable, but I think it struggled with the number of people who attended (about 7,000 as I understand it), which is a high class problem to have, but still a problem. The convention center itself could only hold so much programming, so there was a satellite site for the art show, autographing and some other programming, almost a kilometer from the convention center proper. I definitely got my steps in during the week. It also meant that while pretty much every panel was well-attended, there was a lot of stuff people couldn’t get into, and lots of queuing. I know everything I had was jammed, to the point that I added both a second kaffeeklatsch and a second reading in order to accommodate attendees who wanted to see me (and here I give mad props to the Dublin 2019 programming staff, who made those happen on the fly). Every Worldcon — every convention, really — has its own challenges. As far as I can see, Dublin 2019 mostly handled them pretty well.

* And what did you think about the Hugos this year, Scalzi? Well, I think by now everyone knows how I feel about the Best Novel win, and otherwise I was very pleased by how the results sorted themselves out. Mind you, this is in no small part because I thought the composition of the finalist lists was very very good this year — you pretty much could have had entirely different winners in every category, and I would still walk away, as a reader and appreciator of the genre, with a feeling of satisfaction. I like Hugo years when it is really hard to decide what is one’s first choice, and this was indeed one of those years.

The award ceremony itself I thought was really well done except for one thing, which I will get to in a minute. Things that were praiseworthy were the hosting, the set design (as well as the design of the Hugo itself), and the interstitial bits, which included live music (which it should have, this is Ireland) and also a chance for both hosts to have their minute in the spotlight. All accomplished in just about two hours and fifteen minutes! That was pretty impressive.

The one thing that didn’t work was the attempt at live captioning, which was shown on a screen hanging above the presenters and hosts. The live captioning was apparently done via machine rather than by human translators, and it showed, because it mangled what presenters were saying, which was sometimes comical, but also messed up people’s names, which was not appropriate. Ultimately it became distracting and disconcerting. Ada Palmer, who was there to present the Campbell Award, didn’t know why people were weirdly and spontaneously laughing at her speech (which was interesting but not intentionally comical), until she finally looked up and saw how the machine translation was mistranslating her words. She took it with good grace, but she deserved better.

* I wrote something here about Jeannette Ng and her Campbell win and speech, but then after I was done writing it I realized I wrote a whole post inside a post, so I made it its own post, which you can find here.

* I showed my ass yesterday on Twitter — funny, that! — in discussing the Best Related Work Hugo win of An Archive of Our Own, the fanfic repository initially started by (among others) Naomi Novik. I saw a headline from The Mary Sue saying something along the lines of “Thousands of Fanfic Writers Now Hugo Winners” and I RT’d it and was all “Well actually it doesn’t work that way blah blah blah blah blah,” to which fanfic people were all “Yeah, we all know that, but we’re invested in the win anyway, and also that’s coming across as pedantic and dickish, so that’s a great look for you,” and I was all “Yeah, you’re right, I’m a dumbass, sorry.”

And I am sorry! I was pleased with the Archive win and happy for the fanfic folks because I think they get dismissed a lot when what they do is really fundamental fanwork, and I’m generally supportive of fanfic, both as a writer and a fan. And also, you know: Fuzzy Nation. It’s totally fanfic. I made people feel bad who I should not have made feel bad, and now I feel bad, which I should, because I was bad. I used bad a lot in that last sentence. Anyway. I done fucked up, and if you’re a fanficcer and I made you feel bad because I came across as an elitist dick, I apologize. That’s on me and there’s no excuse. I’ll do better.

(What was nice was that after I apologized people were all, “well, you’ve been traveling so your brain was probably fried,” which while accurate is not a good excuse. But I appreciate people rationalizing my dumbassery. Thanks, folks.)

* I’ve seen some grumbling in the usual whiny dude quarters about the Hugos being dominated again by women this year, and my thought about that is what it usually is, which is: Meh. Even if I were inclined to suspect A CONSPIRACY OF THE FEEEEEEEEEMALES, which to be clear I am not, it’s hard to complain when the individual works and people under consideration are so strong. Unlike other recent conspiracies one could think about, The Feeeeeeeemale Conspiracy did the actual work. If it existed, which it doesn’t. And on a personal level a bunch of the people who won (and who were finalists) are people I like, so there was that, too. It was a good Hugo ceremony for me.

(Plus! Because I’ve lost weight recently I fit into a suit I hadn’t been able to wear for a few years, and I will tell you what, it looked good on me. I’ll take it.)

In all, the Dublin Worldcon was really happy-making for me, and I’m glad I came. They made a good one this year, folks. I’m looking forward to the one in New Zealand next year.


Huawei’s Kirin 990 chipset will finally support 4K video capture at 60fps [OSnews]

With Huawei’s P20 Pro last year and this year’s P30 Pro, the company pulled off some incredible camera innovations, at least in the photo department. In terms of recording video, it hasn’t done as much. Part of the reason is that the Kirin 970 and Kirin 980 chipsets don’t support recording video at 4K 60fps, a feature that you’d expect from such camera-centric smartphones. Luckily, that’s about to change with the next generation. While I was in Shenzhen for the past week, Huawei confirmed that the Kirin 990 will indeed support recording video at 4K 60fps. Starting with the Mate 30 series, you’ll no longer have to choose between a high resolution and a high frame rate. It’s incredible how fast Chinese companies manage to improve. If you ever wonder why the United States government is trying to hit Huawei so hard, it’s because of things like this. Aside from the possibly valid spying concerns, Huawei is also simply a major competitor to Silicon Valley, and this is a great way for American corporations/government to strike back. There aren’t many companies that can make every part of a device. Huawei is one of them.


Coding workshop (Ballarat, VIC, Australia) [Events]

This workshop, presented by Sturm Software Engineering, will teach you how to code on a free software project, and guide you through the whole process.

On the Friday evening you'll meet the mentors and participants, go through some preliminaries and then head out for dinner with the group. Dinner not included.

You'll spend a full day on Saturday working with the mentors and participants, with the aim of making your first contribution. Lunch and snacks will be provided.

It's a lot to do in one day, so a mentor will be in touch with you a week before and again two weeks after the event to help you get set up and to overcome any hurdles.

This event is suitable for tertiary students, hobbyists and technology professionals. We won't be teaching programming as such, so basic coding experience is required. We'll aim to find you a project that suits your skills and experience.

Please provide your own laptop.

Places are limited to 16 participants.

Location: 136 Albert St, Ballarat, Central Highlands of Victoria 3350, Australia

See here for registration information.

Please fill out our contact form, so that we can contact you about future events in and around Ballarat.

Today in GPF History for Tuesday, August 20, 2019 [General Protection Fault: The Comic Strip]

As the U.G.A. agents close in on PC and Trudy, they make a grave tactical error...


Happy birthday Mom! [Scripting News]

Facebook reminds me that today would be my mom's 87th birthday.

Happy birthday mom, wherever you are! I bought a house in the country. You would like it. Trump is still president. He hasn't blown up the world yet.

Seeya soon. ❤️ ❤️

Love, your son, David


Link [Scripting News]

Ted Howard found the problem with my NPM module that didn't work. Braintrust to the rescue!


The SuperH-3, part 12: Calling convention and function prologues/epilogues [The Old New Thing]

The calling convention used by Windows CE for the SH-3 processor looks very much like the calling convention for other RISC architectures on Windows.

The short version is that the first four parameters (assuming they are all 32-bit integers) are passed in registers r4 through r7, and the rest go onto the stack after a 16-byte gap. The 16-byte gap is the home space for the register parameters, and even if a function accepts fewer than four parameters, you must still provide a full 16 bytes of home space.

More strictly, the first 16 bytes of parameters are passed in registers r4 through r7. If a parameter is a floating point type, then how it gets passed depends on how the parameter is declared in the function prototype.

  • If the floating point type is prototyped as non-variadic, then it goes into the corresponding register fr4 through fr7, and the integer register goes unused.
  • If the floating point type is prototyped as variadic, then it stays in the integer register.
  • If the function has no prototype, then the floating point type goes into both the floating point register and the integer register.

The reason for this rule is the same as before. Variadic parameters go into integer registers because the callee doesn’t know what type they are upon function entry. To make things easier, variadic parameters are always passed in integer registers, so that the callee can just spill them into the home space and treat them all as stack-based parameters. And unprototyped functions pass the floating point values in both floating point and integer registers because the caller doesn’t know whether the function is going to treat them as variadic or non-variadic, so it has to cover both bases.

Unlike the Windows calling convention for the MIPS R4000, the Windows calling convention for the SH-3 does not require 64-bit values to be 8-byte aligned. For example:

void f(int a, __int64 b, int c);
MIPS      Contents   SH-3   Contents
a0        a          r4     a
a1        unused     r5     b
a2        b          r6     b
a3        b          r7     c
on stack  c
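As a toy illustration of the assignment rule above (the first 16 bytes of parameters go into r4 through r7, four bytes per register, with no 8-byte alignment for 64-bit values, and the rest spill to the stack), here is a sketch in Python. All names are invented for the example, and the floating-point and unprototyped cases described earlier are deliberately omitted:

```python
def assign_sh3(params):
    """params: list of (name, size_in_bytes) in declaration order.
    Returns {name: [locations]}, where each location is a register
    or a stack offset past the 16-byte home space."""
    registers = ["r4", "r5", "r6", "r7"]
    placement = {}
    offset = 0                      # running offset into the parameter area
    for name, size in params:
        slots = []
        for _ in range(0, size, 4):
            if offset < 16:
                slots.append(registers[offset // 4])
            else:
                slots.append(f"stack+{offset - 16}")
            offset += 4             # no realignment for 64-bit values
        placement[name] = slots
    return placement
```

Running this on the `f(int a, __int64 b, int c)` example reproduces the SH-3 column of the table: `a` in r4, `b` straddling r5/r6, and `c` in r7.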

On entry to the function, the return address is provided in the pr register, and on exit the function’s return value is placed in the r0 register. However, if the function’s return value is larger than 32 bits, then a secret first parameter is passed which is a pointer to a buffer to receive the return value. The parameters are caller-clean; the function must return with the stack pointer at the same value it had when control entered.

If the concept of home space offends you, you can think of it as a 16-byte red zone that sits above the stack pointer.

The stack for a typical function looks like this:

param 6 (if function accepts more than 4 parameters)
param 5 (if function accepts more than 4 parameters)
param 4 home space
param 3 home space
param 2 home space
param 1 home space ← stack pointer at function entry
saved registers
saved return address ← stack pointer after saving registers
local variables
outbound parameters beyond 4 (if any)
param 4 home space
param 3 home space
param 2 home space
param 1 home space ← stack pointer after prologue complete

The function typically starts by pushing onto the stack any nonvolatile registers, as well as its return address. This takes advantage of the pre-decrement addressing mode. In practice, the Microsoft C compiler allocates nonvolatile registers starting at r8 and increasing, and preserves them on the stack in that order, followed by the return address.

In this example, the function has four registers to save, plus the return address.

    MOV.L   r8, @-r15   ; push r8
    MOV.L   r9, @-r15   ; push r9
    MOV.L   r10, @-r15  ; push r10
    MOV.L   r11, @-r15  ; push r11
    STS.L   pr, @-r15   ; push pr

At some point (perhaps not immediately), the function will adjust its stack pointer to create space for its local variables and outbound parameters. If the function has a small stack frame, it can use the immediate form of the SUB instruction. Otherwise, it’s probably going to load a constant into a register and use that as the input to the two-register form of the SUB instruction.

If the function has a large stack frame, it will be difficult to access variables far away from r15 due to the limited reach of the register indirect with displacement addressing mode. To help with this problem, the compiler might park the frame pointer register r14 in the middle of the frame, or at least close to a frequently-used variable, so that it can reach more local variables in a single instruction.

At the exit of the function, the operations performed in the prologue are reversed: The stack pointer is adjusted to point to the saved return address, and the saved registers are popped off the stack. Finally, the function returns with a rts.

    LDS.L   @r15+, pr   ; pop pr
    MOV.L   @r15+, r11  ; pop r11
    MOV.L   @r15+, r10  ; pop r10
    MOV.L   @r15+, r9   ; pop r9
    RTS                 ; return
    MOV.L   @r15+, r8   ; pop r8 (in the delay slot)

Lightweight leaf functions are those which call no other functions and which can accomplish their task using only volatile registers and the 16 bytes of home space. Such functions may not modify the pr register or any nonvolatile registers (which includes the stack pointer).

Next time, we’ll look at some code patterns you’ll see in the compiler-generated code, y’know, the stuff that goes inside the function.


The post The SuperH-3, part 12: Calling convention and function prologues/epilogues appeared first on The Old New Thing.


Jeannette Ng, John W. Campbell, and What Should Be Said By Whom and When [Whatever]

In the aftermath of the Hugo Award ceremony this year, there’s been quite the harumph harumph about the fact that this year’s winner, Jeannette Ng, started her acceptance speech by offering up the opinion that John W. Campbell, the foundational science fiction editor for whom the award is named, was a fascist (you can read the actual words of her prepared speech here). Immediately there was fallout from various quarters, on the order that Ng was a) insufficiently grateful, b) should not have put politics into her speech, c) should have declined the award rather than denigrate Campbell, d) should have done pretty much anything other than what she did up there on the stage.

Unlike most of the people who are now grousing about this, I am an actual winner of the Campbell Award, so I think I am uniquely positioned to have some thoughts about this.

And what I think is: Hey, you know what? Campbell, aside from everything else he might have been, was a racist and a sexist and as time went on pretty deeply way the hell out there, and from his lofty perch he was able to shape the genre into what he thought it should be, in a way that still influences how people write science fiction — for fuck’s sake, I write science fiction in an essentially Campbellian manner, and it would be foolish for me to suggest otherwise.

Do those bigoted aspects about Campbell make him an actual fascist? Well, I wouldn’t have characterized him as such, but then I never thought to think of it in those terms, so there’s that. Now that I have been made to think of it, I know that the people and organizations I would have unhesitatingly called fascist actively incorporated the mechanisms of American racism into their worldview. It’s not exactly a secret that the actual Nazis looked to the United States’ “Jim Crow” laws for inspiration and justification for their own racism and, ultimately, genocide. American racism — the racism that Campbell both actively and passively forged into the structure of the science fiction genre — is at the very least an ur-text to fascism, and of course racism is so deeply ingrained into fascism today, and vice versa, that you couldn’t separate the one from the other without killing both, which, incidentally, is a very good idea.

So when Jeannette Ng stands up and calls Campbell a fascist, what I can say is: It’s not the argument I would have made (in no small part because, again, I literally never thought to make it), but it is an argument to be made. Nor is it a facile, unserious or utterly indefensible argument, for the reasons I note above, and for other reasons as well. Seriously, go have a deep dive into some of the things Campbell believed and espoused; the Venn diagram for “Things Campbell Said” and “Things Fascists Say” is, uhhhhh, overlappy. One doesn’t have to agree (or know if one agrees) with Ng’s fundamental proposition to accept that she has a perfect right to say it, and by saying it, to force us to haul out Campbell’s track record and words to examine and interrogate.

Moreover, she has a perfect right to say it as she is up on the stage laying claim to the award that has Campbell’s name on it. Was what she said comfortable, and happy, and appropriately cheerful? No, it wasn’t, at least not the first bit of it. Certainly when I won I wouldn’t have (and didn’t) say anything of the sort, even though by that time I was well aware of who Campbell was as a person, and his various imperfections. But, and quite obviously, I’m not Ng; the world of science fiction and fantasy was literally designed — by Campbell and many others! — to take me, a chummy white dude writing in the Campbellian/Heinleinian mode, to its bosom. The ease with which I slid into its good graces is pretty much a matter of record, and you can believe I was happy to slide right on in there.

The world of science fiction and fantasy wasn’t so felicitously designed for Ng and many others who are not, shall we say, chummy white dudes writing in a way Campbell would approve of. She and they have spent years working to make the genre a place where they could work and build and thrive. She and they know better than I the work they had to do, where they found resistance, where they found help and from whom, and what they had to rebel against — and still have to. And she has a right to say all of those things, while claiming an award named for a person who she could argue would have been resistant to her presence in the field. Her relationship to him is not the relationship I have to him, in no small part because he would not have been to her what he would have been to me.

You can claim the John W. Campbell Award without revering John W. Campbell, or paying him lip service, and you can criticize him, based on what you see of his track record and your interpretation of it. The award is about the writing, not about John W. Campbell, and that is a solid fact. If a recipient of the Campbell Award can’t do these things, or we want to argue that they shouldn’t, then probably we should have a conversation about whether we should change the name of the award. It wouldn’t be the first time an award in the genre has been materially changed in the fallout of someone calling out the problems with the award’s imagery. The World Fantasy Award was changed in part because Nnedi Okorafor and Sofia Samatar were public (Samatar in her acceptance speech!) about the issue of having a grotesque of the blatant racist H.P. Lovecraft as the trophy for the award. There was a lot of grousing and complaining and whining about political correctness then, too. And yet, the award survives, and the new trophy, for what it’s worth, is gorgeous. So, yes, if this means we have to consider whether it’s time to divorce Campbell from the award, let’s have that discussion.

Now, here’s a real thing: Part of the reaction to Ng’s speech is people being genuinely hurt. There are still people in our community who knew Campbell personally, and many many others one step removed, who idolize and respect the writers Campbell took under his wing. And there are people — and once again I raise my hand — who are in the field because of the way Campbell shaped it as a place where they could thrive. Many if not most of these folks know about his flaws, but even so it’s hard to see someone with no allegiance to him, either personally or professionally, point them out both forcefully and unapologetically. Those newer writers see Campbell and his legacy abstractly, and also as an obstacle to be overcome. That’s deeply uncomfortable.

It’s also a reality. Nearly five decades separate us today from Campbell. It’s impossible for new writers today to have the same relationship to him as their predecessors in the field did, even if the influence he had on the field works to their advantage. Moreover, and especially in the last few years, the landscape of science fiction and fantasy has changed, and Campbell and the writers and forms he championed simply don’t loom as large as they did. Nor should they — if they did, the genre would be stultifying, and, yes, sterile. Campbell and his cohort will never go away, and they still rise over the plain. But they’re the Appalachians, familiar and worn, known and explored. Meanwhile, the Rockies are bursting out of the ground and rising. Tectonically speaking, that’s where the action is. And that’s where so many of the new writers — Jeannette Ng and the other writers on the Campbell Award ballot this year, for starters — are to be found. Which is as it should be.

I’m proud of having won the Campbell Award. It was given to me by fans and it was in many ways my welcome into the fannish community, and the community of science fiction and fantasy at large. My love and honor for the award doesn’t change who John W. Campbell was, and doesn’t change because of who John W. Campbell was. I accept that the namesake of the award was foundational, and imperfect, and wrong in a number of his views. I accept that other people have the right to, and will, criticize who he was, even as they claim an award named for him, and, through the work which earned them that award, make a definitive mark in the genre. Jeannette Ng has done the work, and made her mark, and in her speech, gave me a lot to think about that I hadn’t thought about before. She gave us all a lot to think about. I hope we will.

In the meantime, as a former Campbell Award winner, I congratulate Jeannette Ng on her win, and support her right to have said what she said, where she said it. I’m glad to share the field with her, and I look forward to her being in it, and shaping it, in the years to come.


Security updates for Tuesday []

Security updates have been issued by Debian (flask), openSUSE (clementine, dkgpg, libTMCG, openexr, and zstd), Oracle (kernel, mysql:8.0, redis:5, and subversion:1.10), SUSE (nodejs6, python-Django, and rubygem-rails-html-sanitizer), and Ubuntu (cups, docker, docker-credential-helpers, kconfig, kde4libs, libreoffice, nova, and openldap).


1176: Die or Die Trying [Order of the Stick]


Nicolas Hafner: The End of Daily Gamedev - Confession 87 [Planet Lisp]

It's been two months now since I started to do daily game development streams. I've been trying my best, but it is time for this to come to a close. In this article I'll talk about the various things that happened, why I'm stopping, and the future of the Leaf game. Strap in!

It's actually been slightly longer than two months, but since I missed some days due to being sick, and some others because I didn't feel like streaming - more on that later - I'll just count it as two months. In any case, in this time I've done 56 streams, almost all of them two hours long. That's a lot of hours, and I'm truly impressed that some people stuck around for almost all of them. Thank you very much! A lot happened in that time too, and I think it would be interesting to go over some of the major features and talk about them briefly.

New Features in Leaf

Slopes and Collision

Collision detection was heavily revised from the previous version. The general procedure is to scan the current chunk for hits until there are no more hits to be found. If we have more than ten hits we assume that the player is in a wall somehow and just die. The number ten is obviously arbitrary, but somehow it seems sufficient and I haven't had any accidental deaths yet.

When a hit is detected, it dispatches on the type of tile or entity that was collided with. It does so in two steps, the first is a test whether the collision will happen at all, to allow sub-tile precision, and the second is the actual collision resolution, should a full hit have been detected. The first test can be used to elide collisions with jump-through platforms or slopes if the player moves above the actual slope surface. The actual collision resolution is typically comprised of moving the player to the collision point, updating velocity along the hit normal, and finally zipping out of the ground if necessary to avoid floating point precision issues.

The collision detection of the slopes itself is surprisingly simple and works on the same principle as swept AABB tests: we can enlarge the slope triangle by simply moving the line towards the player by the player's half-size. Once this shift is done we only need to do a ray-line collision test. During resolution there's some slight physics cheating going on to make the player stick to the ground when going down a slope, rather than flying off, but that's it.
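The shift-then-raycast idea described above can be sketched in a few lines. This is an illustrative Python rendering of the principle only; Leaf itself is written in Common Lisp, and every name here is an assumption for the example:

```python
def ray_segment_hit(origin, direction, a, b):
    """Return the ray parameter t at which origin + t*direction crosses
    segment a-b, or None if there is no crossing with t and u in [0, 1]."""
    ox, oy = origin
    dx, dy = direction
    ax, ay = a
    bx, by = b
    ex, ey = bx - ax, by - ay          # segment direction
    denom = dx * ey - dy * ex          # 2D cross product of the directions
    if denom == 0:
        return None                    # parallel: no single crossing point
    t = ((ax - ox) * ey - (ay - oy) * ex) / denom   # distance along the ray
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom   # position along the segment
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return t
    return None

def swept_slope_hit(player_pos, velocity, half_size, slope_a, slope_b, normal):
    """Enlarge the slope by moving its surface line toward the player by the
    player's half-size, then ray-cast the frame's movement against it."""
    nx, ny = normal
    hx, hy = half_size
    # Project the box half-extents onto the surface normal to get the shift.
    shift = abs(nx) * hx + abs(ny) * hy
    a = (slope_a[0] + nx * shift, slope_a[1] + ny * shift)
    b = (slope_b[0] + nx * shift, slope_b[1] + ny * shift)
    return ray_segment_hit(player_pos, velocity, a, b)
```

A returned t of, say, 0.375 means the collision happens 37.5% of the way through this frame's movement, which is what the resolution step would use to place the player at the contact point.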

Packets and File Formats

Leaf defines a multitude of file formats. These formats are typically all defined around the idea of a packet - a collection of files in a directory hierarchy. The idea of a packet allows me to define these formats as both directly on disk, in-memory as some data structure, or encapsulated within an archive. The packet protocol isn't that complicated and I intend on either at least putting it into Trial, or putting it into its own library altogether. Either way, it allows the transparent implementation of these formats regardless of backing storage.

The actual formats themselves also follow a very similar file structure: a meta.lisp file for a brief metadata header, which identifies the format, the version, and some authoring metadata fields. This file is in typical s-expression form and can be used to create a version object, which controls the loading and writing process of the rest of the format. In the current v0, this usually means an extra data.lisp payload file, and a number of other associated payload files like texture images.
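The packet idea (one protocol, several backing stores) can be sketched roughly like this. Leaf's actual packet protocol is in Common Lisp; this Python version, with invented class names, only illustrates how the same format code can stay ignorant of whether it is reading a directory, an archive, or memory:

```python
import zipfile
from pathlib import Path

class DirectoryPacket:
    """A packet backed by a directory hierarchy on disk."""
    def __init__(self, root):
        self.root = Path(root)
    def read(self, name):
        return (self.root / name).read_bytes()

class ZipPacket:
    """The same packet interface, backed by an archive."""
    def __init__(self, source):
        self.zip = zipfile.ZipFile(source)
    def read(self, name):
        return self.zip.read(name)

class MemoryPacket:
    """And again, backed by an in-memory mapping."""
    def __init__(self, files):
        self.files = dict(files)
    def read(self, name):
        return self.files[name]

def load_metadata(packet):
    # Every format begins with a meta.lisp header identifying the format
    # and version; here we just fetch its raw bytes, since parsing the
    # s-expressions is out of scope for this sketch.
    return packet.read("meta.lisp")
```

Format code written against `read` works identically against all three backings, which is the transparency the text describes.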

The beauty of using generic functions with methods that specialise both on the version and object at the same time is that it allows me to define new versions in terms of select overrides, so that I can specify new behaviour for select classes, rather than having to redo the entire de/serialisation process, or breaking compatibility altogether.

Dialogue and Quests

The dialogue and quests are implemented as very generic systems that should have the flexibility (I hope) to deal with all the story needs I might have in the future. Dialogue is written in an extended dialect of Markless. For instance, the following is a valid dialogue snippet:

~ Fi
| (:happy) Well isn't this a sight for sore eyes!
| Finally a bit of sunshine!

- I don't like rain
  ~ Player
  | I don't mind the rain, actually.
  | Makes it easier to think.
- Yeah!
  ~ Player
  | Yeah, it's been too long! Hopefully this isn't announcing the coming of a sandstorm.
  ! incf (favour 'fi)
- ...
  ! decf (favour 'fi)

~ Fi
| ? (< 3 (favour 'fi))
| | So, what's our next move?
| |?
| | Alright, good luck out there!

The list is translated into a choice for the player to make, which can impact the dialogue later. The way this is implemented is through a syntax extension in the cl-markless parser, followed by a compiler from the Markless AST to an assembly language, and a virtual machine to execute the assembly. The user of the dialogue system only needs to implement the evaluation of commands, the display of text, and the presentation of choices.
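The compile-to-assembly-plus-VM pipeline can be illustrated with a toy interpreter. The opcodes here (TEXT, CHOICE, JUMP, EVAL) and their shapes are invented for the sketch; the real Leaf assembly is richer, but the division of labor matches the text: the VM drives execution, and the user supplies the text display, choice presentation, and command evaluation hooks:

```python
def run(program, say, choose, evaluate):
    """Execute a list of (opcode, args...) instructions.
    say(text): display a line of dialogue.
    choose(labels): present choices, return the chosen index.
    evaluate(command): run an embedded command."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "TEXT":
            say(args[0])
            pc += 1
        elif op == "CHOICE":
            # args[0] is a list of (label, target-address) pairs;
            # the player's pick decides where execution resumes.
            labels = [label for label, _ in args[0]]
            pc = args[0][choose(labels)][1]
        elif op == "JUMP":
            pc = args[0]
        elif op == "EVAL":
            evaluate(args[0])
            pc += 1
        else:
            raise ValueError(f"unknown opcode {op}")
```

A compiled choice block then becomes a CHOICE instruction whose targets point at the branch bodies, with JUMPs reconverging afterwards.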

The quest system on the other hand is based on node graphs. Each quest is represented as a directed graph of task nodes, each describing a task the player must fulfil through an invariant and a success condition. On success, one or more successor tasks can be unlocked. Tasks can also spawn dialogue pieces to become available as interactions with NPCs or items. The system is smart enough to allow different, competing branches, as well as parallel branches to complete a quest. I intend on building a graph editor UI for this once Alloy is further along.
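A minimal sketch of that task-graph structure might look as follows. The class and state names are assumptions for illustration (Leaf's real system is in Common Lisp and handles dialogue spawning and competing branches as well); this only shows the invariant/success-condition/successor mechanics:

```python
class Task:
    """One node in a quest graph: an invariant that must hold while the
    task is active, a success condition, and successors unlocked on it."""
    def __init__(self, name, invariant, condition, successors=()):
        self.name = name
        self.invariant = invariant
        self.condition = condition
        self.successors = list(successors)
        self.state = "inactive"   # inactive -> active -> complete/failed

    def activate(self):
        if self.state == "inactive":
            self.state = "active"

    def update(self, world):
        if self.state != "active":
            return
        if not self.invariant(world):
            self.state = "failed"
        elif self.condition(world):
            self.state = "complete"
            for successor in self.successors:
                successor.activate()
```

Because successors are a list, one completed task can unlock several follow-ups at once, which is how parallel branches of a quest open up.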

Both of these systems are, again, detached enough that I'll either put them into Trial, or put them into a completely separate library altogether. I'm sure I'll need to adjust things once I actually have some written story on hand to use these systems with.

Platforming AI

The platforming AI allows characters to move along the terrain just like the player would. This is extremely useful for story reasons, so that characters can naturally move to select points, or idle around places rather than just standing still. The way this is implemented is through a node graph that describes the possible movement options from one valid position to the next. This graph is built through a number of scanline passes over the tile map that either add new nodes or connect existing nodes together in new ways.

The result is a graph with nodes that can connect through walk, crawl, fall, or jump edges. A character can be moved along this graph by first running A* to find a shortest path to the target node, and then performing a real-time movement through the calculated path. Generally the idea is to always move the player in the direction of the next target node until that node has been reached, in which case it's popped off the path. The jump edges already encode the necessary jump parameters to use, so when reaching a jump node the character just needs to assume the initial velocity and let standard physics do the rest.
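The A* search over that movement graph can be sketched like so. The graph encoding (adjacency lists of neighbor, edge kind, cost) and all names are invented for the example; the point is just the shortest-path phase that precedes the real-time movement:

```python
import heapq
import math

def a_star(graph, positions, start, goal):
    """graph: {node: [(neighbor, edge_kind, cost), ...]}
    positions: {node: (x, y)}, used for the distance heuristic.
    Returns [(node, edge_kind_used_to_reach_it), ...] or None."""
    def h(n):
        (x1, y1), (x2, y2) = positions[n], positions[goal]
        return math.hypot(x2 - x1, y2 - y1)

    frontier = [(h(start), start)]
    came_from = {start: None}          # node -> (parent, edge_kind)
    cost_so_far = {start: 0.0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            break
        for neighbor, kind, cost in graph.get(current, ()):
            new_cost = cost_so_far[current] + cost
            if neighbor not in cost_so_far or new_cost < cost_so_far[neighbor]:
                cost_so_far[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost + h(neighbor), neighbor))
                came_from[neighbor] = (current, kind)
    if goal not in came_from:
        return None
    # Walk back from the goal, recording which edge kind reached each node.
    path, node = [], goal
    while True:
        step = came_from[node]
        if step is None:
            path.append((node, None))
            break
        parent, kind = step
        path.append((node, kind))
        node = parent
    return list(reversed(path))
```

The movement layer would then consume this path node by node, and on a jump edge look up the precomputed jump parameters rather than the straight-line heuristic used here.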

The implementation includes a simple visualiser so that you can see how characters would move across the chunk terrain. When the chunk terrain changes, the node graph is currently just recomputed from scratch which isn't fast, but then again during gameplay the chunk isn't going to change anyway so it's only really annoying during editing. I'll think about whether I want to implement incremental updates.


Lighting

Leaf has gone through two lighting systems. The old one worked through signed distance fields that were implicitly computed through a light description. New light types required new shader code to evaluate the SDF, and each light required many operations in the fragment stage, which is costly.

The new system uses two passes, in the first lights are rendered to a separate buffer. The lights are rendered like regular geometry, so we can use discrete polygons to define light areas, and use other fancy tricks like textured lights. In the second pass the fragment shader simply looks up the current fragment position in the light texture and mixes the colours together.

In effect this new system is easier to implement, more expressive, and much faster to run. Overall it's a massive win in almost every way I can imagine. There's further improvements I want to make still, such as shadow casting, dynamic daylights, and light absorption mapping to allow the light to dissipate into the ground gradually.


Alloy

Alloy is a new user interface toolkit that I've been working on as part of Leaf's development. I've been in need of a good UI toolkit that I can use within GL (and otherwise) for a while, and a lot of Leaf's features had to be stalled because I didn't have one yet. However, a lot of Alloy's development is also only very distantly related to game development itself, and hardly at all related to the game itself. Thus I think I'll talk more about Alloy in other articles sometime.

Why I'm Stopping

I initially started this daily stuff to get myself out of a rut. At the time I wasn't doing much at all, and that bothered me a lot, so committing to a daily endeavour seemed like a good way to kick myself out of it. And it was! For a long time it worked really well. I enjoyed the streams and made good progress with the game.

Unfortunately I have the tendency to turn things like this into enormous burdens for myself. The stream turned from something I wanted to do into something I felt I had to do, and then ultimately into something I dreaded doing. This has happened before with all of my projects, especially streaming ones. With streams I quickly feel a lot of pressure because I get the idea that people aren't enjoying the content, that it's just a boring waste of time. Maybe it is, or maybe it isn't, I don't know. Either way, having to worry about the viewers and not just the project I'm working on, especially trying to constrain tasks to interesting little features that can fit into two hours, turns into a big constraint that I can't keep up with anymore.

There's a lot of interesting work left to be done, sure, but I just can't bear things anymore at the moment. Dreading the stream poisoned a lot of the rest of my days and ultimately started to hurt my productivity and well-being over the past two weeks. Maybe I'll do more streams again at some point in the future, but for now I need a break for an indeterminate amount of time.

The Future of Leaf

Leaf isn't dead, though. I intend to keep working on it on my own, and I really do want to see it finished one day, however far away that day may be. Currently I feel like I need to focus on writing, which is a big challenge for me. I'm a very, very inexperienced writer, especially when it comes to long-form stories and world-building. There I have practically no idea how to do anything. If you are a writer, or are interested in talking shop about stories, please contact me.

Other than writing I'm probably going to mostly work on Alloy in the immediate future. I hope to have a better idea of the writing once I'm done, and that should give rise to more features to implement in Leaf directly. I'll try to keep posting updates on the blog here as things progress in any case, and there's a few systems I'd like to elaborate on in technical articles as well.

Thanks to everyone who read my summaries, watched the streams or recordings, and chatted live during this time. It means a lot to me to see people genuinely interested in what I do.


Ash to Ashes [George Monbiot]

Thanks to shocking failures of government, every tree, almost everywhere, is now threatened by killer plagues

By George Monbiot, published in the Guardian 14th August 2019

As Dutch elm disease spread across Britain in the 1970s, the country fell into mourning. When the sentinel trees that framed our horizons were felled, their loss was a constant topic of sad and angry conversation. Today, just a few years into the equally devastating ash dieback epidemic, and as the first great trees are toppled, most of us appear to have forgotten all about it. I’ve travelled around much of Britain this summer, and seen the disease almost everywhere. A survey published this spring found infected trees across roughly three-quarters of England and Wales: the spread has been as rapid and devastating as ecologists predicted. But in this age of hypernormalisation, only a few people still seem to care. Ash to ashes: our memories wither as quickly as the trees. 

And almost nothing has been learnt. Our disease prevention rules, whose scope is restricted by the European Union and the World Trade Organisation, and whose enforcement is restricted by the British government’s austerity, do little to prevent similar plagues afflicting our remaining trees. Several deadly pathogens are marching across Europe. While it is hard to prevent some of these plagues from spreading across land, there is a simple measure that would stop most of them from spreading across water: a ban on the import of all live plants except those grown from tissue cultures, in sterile conditions.

But bans are more or less banned. Nothing must be allowed to obstruct free trade. Instead, the world’s governments rely on hand flapping. Take, for example, a lethal plague called Xylella, which is ravaging olive groves in Italy and threatens a remarkable variety of trees and shrubs, including oak, sycamore, plane and cherry. The system for preventing its spread depends on inspections of random consignments of known host plants, and a passport scheme to ensure they aren’t imported from infected areas.

This system is likely to be useless. The EU keeps a list of plants that can carry Xylella. It has been updated 12 times in four years, as new carriers emerge. No one knows how many more host species there might be. Visual inspections won’t reveal plants that carry the disease without symptoms. Random sampling won’t protect us from a plague that can be introduced by a single plant.

Nor do we know whether Xylella is the most urgent risk to our remaining trees, or whether an entirely new contagion will hit them instead. Many plant pathogens evolve at extraordinary speed, jump unexpectedly from one host to another, suddenly hybridise with each other, and behave in radically different ways in different environments. A system that regulates only known risks is bound to fail.

Even in economic terms, the live plant trade is senseless. Ash dieback alone, according to a paper in Current Biology, will cost this country around £15 billion. But the UK’s import and export of all live plants amounts to £300 million a year – 2% of the costs of this disease. The paper estimates that another 47 major tree pests and diseases now threaten to arrive in this country, and these are just the known plagues.

In ecological terms, this legislative failure is a total disaster. For the sake of deregulatory machismo, we face the prospect of tree species everywhere eventually meeting their deadly pathogens. Where logging and climate breakdown have so far failed to eliminate the world’s forests, imported diseases threaten to complete the job.

What will come next? Will our beech trees succumb to Phytophthora kernoviae, a disease that appears to have been imported to Cornwall on infected shrubs from New Zealand? Will Sitka spruce, on which commercial forestry in this country relies to an extraordinary extent, be hammered by the larger eight-toothed spruce bark beetle, found for the first time this year in a Kent woodland? Will it be hit by another marvellously-named plague, Neonectria fuckeliana? Or by something else entirely? As the trade in live plants reaches almost every corner of the earth, nothing and nowhere is safe.

Just as we need a precautionary approach, every lid is being ripped off, every barrier smashed, facilitating trade in everything, including pathogens. In response to a parliamentary question about Xylella, the environment minister Therese Coffey claimed that Brexit creates an opportunity to introduce “stricter biosecurity measures”. It does, but will it be used? Given that, for the monomaniacs who now run this country, the main purpose of leaving the EU is to escape its public protections, the chances of Brexit leading to the stricter regulation of plant imports seem remote.

Never mind that this trade makes neither ecological nor economic sense. Our government, like many others, favours a global trade regime that places the free movement of goods above all other values (while imposing ever tighter restrictions on the free movement of people).

There’s nothing good about ash dieback, but there is one useful thing that could be done: wherever possible, leave the dead trees to stand. There is more life in a dead tree than in a living tree: around 2000 animal species in the UK rely on dead or dying wood for their survival. But (except in politics) there’s a dearth of dead wood in this country. Many species, such as the lesser spotted woodpecker, the pied flycatcher and the stag beetle, are severely restricted by the shortage of decay, caused by our tidy-minded forestry.

And there’s another reason to let the dead giants stand: as memorials to the repeated failures of government. Let us remember our losses, and learn from them.


Surveillance as a Condition for Humanitarian Aid [Schneier on Security]

Excellent op-ed on the growing trend to tie humanitarian aid to surveillance.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don't apply for people who are starving.


Electric Geek Transportation Systems [Coding Horror]

I've never thought of myself as a "car person". The last new car I bought (and in fact, now that I think about it, the first new car I ever bought) was the quirky 1998 Ford Contour SVT. Since then we bought a VW station wagon in 2011 and a Honda minivan in 2012 for family transportation duties. That's it. Not exactly the stuff The Stig's dreams are made of.

The station wagon made sense for a family of three, but became something of a disappointment because it was purchased before — surprise! — we had twins. As Mark Twain once said:

Sufficient unto the day is one baby. As long as you are in your right mind don't you ever pray for twins. Twins amount to a permanent riot. And there ain't any real difference between triplets and an insurrection.

I'm here to tell you that a station wagon doesn't quite cut it as a permanent riot abatement tool. For that you need a full sized minivan.

I'm with Philip Greenspun. Like black socks and sandals, minivans are actually … kind of awesome? Don't believe all the SUV propaganda. Minivans are flat out superior vehicle command centers. Swagger wagons, really.


The A-Team drove a van, not a freakin' SUV. I rest my case.

After 7 years, the station wagon had to go. We initially looked at hybrids because, well, isn't that required in California at this point? But if you know me at all, you know I'm a boil the sea kinda guy at heart. I figure if you're going to flirt with partially electric cars, why not put aside these half measures and go all the way?

Do you remember that rapturous 2014 Oatmeal comic about the Tesla Model S? Even for a person who has basically zero interest in automobiles, it did sound really cool.


It's been 5 years, but from time to time I'd see some electric vehicle on the road and I'd think about that Intergalactic SpaceBoat of Light and Wonder. Maybe it's time for our family to jump on the electric car trend, too, and just late enough that we can avoid the bleeding edge and end up merely on the … leading edge?

That's why we're now the proud owners of a fully electric 2019 Kia Niro.


I've somehow gone from being a person who basically doesn't care about cars at all … to being one of those insufferable electric car people who won't shut up about them. I apologize in advance. If you suddenly feel an overwhelming urge to close this browser tab, I don't blame you.

I was expecting another car, like the three we bought before. What I got, instead, was a transformation:

  • Yes, yes, electric cars are clean, but it's a revelation how clean everything is in an electric. You take for granted how dirty and noisy gas based cars are in daily operation – the engine noise, the exhaust fumes, the brake dust on the rims, the oily residues and thin black film that descends on everything, the way you have to wash your hands every time you use the gas station pumps. You don't fully appreciate how oppressive those little dirty details were until they're gone.

  • Electric cars are (almost) completely silent. I guess technically in 2019 electric cars require artificial soundmakers at low speed for safety, and this car has one. But The Oatmeal was right. Electric cars feel like spacecraft because they move so effortlessly. There's virtually no delay from action to reaction, near immediate acceleration and deceleration … with almost no sound or vibration at all, like you're in freakin' space! It's so immensely satisfying!

  • Electric cars aren't just electric, they're utterly digital to their very core. Gas cars always felt like the classic 1950s Pixar Cars world of grease monkeys and machine shop guys, maybe with a few digital bobbins added here and there as an afterthought. This electric car, on the other hand, is squarely in the post-iPhone world of everyday digital gadgets. It feels more like a giant smartphone than a car. I am a programmer, I'm a digital guy, I love digital stuff. And electric cars are part of my world, rather than the other way around. It feels good.

  • Electric cars are mechanically much simpler than gasoline cars, which means they are inherently more reliable and cheaper to maintain. An internal combustion engine has hundreds of moving parts, many of which require regular maintenance, fluids, filters, and tune ups. It also has a complex transmission to translate the narrow power band of a gas powered engine. None of this is necessary on an electric vehicle, whose electric motor is basically one moving part with simple 100% direct drive from the motor to the wheels. This newfound simplicity is deeply appealing to a guy who always saw cars as incredibly complicated (but computers, not so much).

  • Being able to charge at home overnight is perhaps the most radical transformation of all. Your house is now a "gas station". Our Kia Niro has a range of about 250 miles on a full battery. With any modern electric car, provided you drive less than 200 miles a day round trip (who even drives this much?), it's very unlikely you'll ever need to "fill the tank" anywhere but at home. Ever. It's so strange to think that in 50 years, gas stations may eventually be as odd to see in public as telephone booths now are. Our charger is, conveniently enough, right next to the driveway since that's where the power breaker box was. With the level 2 charger installed, it literally looks like a gas pump on the side of the house, except this one "pumps" … electrons.
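
As a back-of-the-envelope sketch of that last point (the figures here are my assumptions, not from this post: a ~64 kWh battery, the 250-mile range mentioned above, and a typical ~7.2 kW Level 2 home charger):

```python
# Rough home-charging math: hours of Level 2 charging needed to put back
# a day's worth of driving. All constants are illustrative assumptions.
BATTERY_KWH = 64.0   # assumed usable battery capacity
RANGE_MILES = 250.0  # full-battery range, per the post
CHARGER_KW = 7.2     # assumed Level 2 charger power

def hours_to_recover(miles_driven: float) -> float:
    """Hours of charging needed to restore `miles_driven` of range."""
    kwh_needed = BATTERY_KWH * (miles_driven / RANGE_MILES)
    return kwh_needed / CHARGER_KW

# A 50-mile daily commute refills in under two hours overnight.
print(round(hours_to_recover(50), 2))  # 1.78
```

Even a completely empty battery refills in roughly nine hours under these assumptions, which is why overnight charging covers essentially all daily driving.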


This electric car is such a great experience. It's so much better than our gas powered station wagon that I swear, if there was a fully electric minivan (there isn't) I would literally sell our Honda minivan tomorrow and switch over. Without question. And believe me, I had no plans to sell that vehicle two months ago. The electric car is that much better.

I was expecting "yet another car", but what I got instead was a new, radical worldview. Driving a car powered by barely controlled liquid fuel detonations used to be normal. But in a world of more and more viable electric vehicles, this status quo increasingly starts to feel … deeply unnatural. Electric is so much better an overall experience that you begin to wonder: why did we ever do it that way?

Gas cars seem, for lack of a better word, obsolete.


How did this transformation happen, from my perspective, so suddenly? When exactly did electric cars go from "expensive, experimental thing for crazy people" to "By God, I'll never buy another old fashioned gasoline based car if I can help it"?

I was vaguely aware of the early electric cars. I even remember one coworker circa 2001 who owned a bright neon green Honda Insight. I ignored it all because, like I said, I'm not a car guy. I needed to do the research to understand the history, and I started with the often recommended documentary Who Killed the Electric Car?

This is mostly about the original highly experimental General Motors EV1 from 1996 to 1999. It's so early the first models had lead-acid batteries! 😱 There's a number of conspiracy theories floated in the video, but I think the simple answer to the implied question in the title is straight up price. The battery tech was nowhere near ready, and per the Wikipedia article the estimated actual cost of the car was somewhere between $100,000 and $250,000 though I suspect it was much closer to the latter. It is interesting to note how much the owners (well, leasers) loved their EV1s. Having gone through that same conversion myself, I empathize!

I then watched the sequel, Revenge of the Electric Car. This one is essential, because it covers the dawn of the modern electric car we have today.

This chronicles the creation of three very influential early electric cars — the Nissan Leaf, the Chevy Volt, and of course the Tesla Roadster from 2005 - 2008. The precise moment that Lithium-Ion batteries were in play – that's when electric cars started to become viable. Every one of these three electric cars was well conceived and made it to market in volume, though not without significant challenges, both internal and external. None of them were perfect electric vehicles by any means: the Roadster was $100k, the Leaf had limited range, and the Volt was still technically a hybrid, albeit only using the gasoline engine to charge the battery.

Ten years later, Tesla has the model 3 at $38,000 and we bought our Kia Niro for about the same price. After national and state tax incentives and rebates, that puts the price at around $30,000. It's not as cheap as it needs to be … yet. But it's getting there. And it's already competitive with gasoline vehicles in 2019.


It's still early, but the trend lines are clear. And I'm here to tell you that right now, today, I'd buy any modern electric car over a gasoline powered car.

If you too are intrigued by the idea of owning an electric car, you should be. It's freaking awesome! Bring your skepticism, as always; I highly recommend the above Matt Ferrell explainer video on electric vehicle myths.

As for me, I have seen the future, and it is absolutely, inexorably, and unavoidably … electric. ⚡

Bits from Debian: Postmortem of failed Docker registry move [Planet Debian]

The Salsa admin team provides the following report about the failed migration of the Docker container registry. The Docker container registry stores Docker images, which are for example used in the Salsa CI toolset. This migration would have moved all data off to Google Cloud Storage (GCS) and would have lowered the used file system space on Debian systems significantly.

The Docker container registry is part of the Docker distribution toolset. This system supports multiple backends for file storage: local, Amazon Simple Storage Service (Amazon S3) and Google Cloud Storage (GCS). As Salsa already uses GCS for data storage, the Salsa admin team decided to move all the Docker registry data off to GCS too.

Migration and rollback

On 2019-08-06 the migration process was started. The migration itself went fine, although it took a bit longer than anticipated. However, as not all parts of the migration had been properly tested, a test of the garbage collection triggered a bug in the software.

On 2019-08-10 the Salsa admins started to see problems with garbage collection. The job running it timed out after one hour. Within this timeframe it did not even manage to collect information about all used layers to determine what could be cleaned up. A source code analysis showed that this design flaw can't be fixed.

On 2019-08-13 the change was rolled back to storing data on the file system.

Docker registry data storage

The Docker registry stores all of its data, without indexing or reverse references, in a file-system-like structure comprising four separate types of information: manifests of images and their contents, tags for the manifests, deduplicated layers (or blobs) which store the actual data, and lastly links which record which deduplicated blobs belong to which images. None of this allows for easy searching within the data.
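
The lack of reverse references is the crux of the problem. A tiny illustrative sketch (not real registry code; the image names and blob IDs are made up) of why "which images use this blob?" requires a full scan:

```python
# The registry only keeps forward references (image -> blobs), so answering
# "which images reference blob X?" means walking every manifest.
manifests = {
    "webapp:v1": ["blob-a", "blob-b"],
    "webapp:v2": ["blob-b", "blob-c"],  # blob-b is a shared, deduplicated layer
}

def images_using(blob: str) -> list:
    # Full scan: the storage layout provides no reverse index to shortcut this.
    return sorted(tag for tag, blobs in manifests.items() if blob in blobs)

print(images_using("blob-b"))  # ['webapp:v1', 'webapp:v2']
```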

The file system structure is append-only: it allows adding blobs and manifests, and adding, modifying, or deleting tags. However, cleanup of items other than tags is not achievable with the maintenance tools.

There is a garbage collection process which can be used to clean up unreferenced blobs. However, according to the documentation, the process can only be used while the registry is set to read-only, and unfortunately it cannot be used to clean up unused links.

Docker registry garbage collection on external storage

For garbage collection, the registry tool needs to read a lot of information, as there is no indexing of the data. The tool connects to the storage medium and proceeds to download … everything: every single manifest and the information about its referenced blobs, at over 1 second per manifest. This takes a significant amount of time, which in the current configuration of external storage makes the cleanup nearly impossible.
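
To put rough numbers on that (the per-manifest time is from the paragraph above; the manifest count is a made-up assumption for illustration):

```python
# Why GC time scales with manifest count: the collector must fetch every
# manifest to mark referenced blobs before it can sweep anything.
def gc_estimate_hours(manifest_count: int, seconds_per_manifest: float = 1.0) -> float:
    """Estimated wall-clock hours just to read all manifests."""
    return manifest_count * seconds_per_manifest / 3600

# At ~1 s per manifest, a hypothetical 100,000 manifests blows far past
# the one-hour job timeout mentioned earlier.
print(round(gc_estimate_hours(100_000), 1))  # 27.8
```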

Lessons learned

The Docker registry is a data storage tool that can only properly be used in append-only mode. If you never clean up, it works well.

As soon as you want to actually remove data, things go bad. For Salsa, cleanup of old data is a necessity, as the registry currently grows by about 20GB per day.

Next steps

Sadly there is not much that can be done using the existing Docker container registry. Maybe GitLab or someone else would like to contribute a new implementation of a Docker registry, either integrated into GitLab itself or stand-alone?


Four short links: 20 August 2019 [All - O'Reilly Media]

Content Moderation, Robust Learning, Archiving Floppies, and xkcd Charting

  1. Information Operations Directed at Hong Kong (Twitter) -- Today we are adding archives containing complete tweet and user information for the 936 accounts we’ve disclosed to our archive of information operations—the largest of its kind in the industry. This is a goldmine for researchers, as you can see from Renee DiResta's notes. Facebook also removed accounts for the same reason but hasn't shared the data. Google has not taken a position yet, which prompted Alex Stamos to say, "Two of the three relevant companies have made public statements. Neither have realistic prospects in the PRC, the other does. Lots of lessons from this episode, but one might be a reinforcement of how Russia represents “easy mode” for platforms doing state attribution. It’s a lot harder when the actor is financially critical, like the PRC or India." We're in interesting times, and research around content moderation is the most interesting thing I've seen on the internet since SaaS. This work cuts to human truths, technical capability, and the limits of openness.
  2. Robust Learning from Untrusted Sources (Morning Paper) -- designed to let you incorporate data from multiple "weakly supervised" (i.e., noisy) data sources. Snorkel replaces labels with probability-weighted labels, and then trains the final classifier using those.
  3. Imaging Floppies (Jason Scott) -- recording the magnetic strength everywhere on the disk so you archive all the data not just the data you can read once. The result of this hardware is that it takes a 140 kilobyte floppy disk (140k) and reads it into a 20 megabyte (20,000 kilobyte) disk image. This means a LOT of the magnetic aspects of the floppy are read in for analysis. [...] This doesn't just dupe the data, but the copy protection, unique track setup, and a bunch of variance around each byte on the floppy to make it easier to work with. The software can then do all sorts of analysis to give us excellent, bootable disk images. Don't ever think that archiving is easy, or problems are solved.
  4. Chart.xkcd -- a chart library that plots “sketchy,” “cartoony,” or “hand-drawn” styled charts. The world needs more whimsy.
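
The probability-weighted labels in item 2 can be sketched in a few lines. This is an illustration of the general soft-label idea, not Snorkel's actual API: instead of cross-entropy against a single hard label, the loss is the expected cross-entropy under the label distribution.

```python
import math

# Soft-label cross-entropy: label_probs is a distribution over classes
# produced by combining noisy sources, not a one-hot hard label.
def soft_cross_entropy(pred_probs, label_probs):
    return -sum(q * math.log(p) for p, q in zip(pred_probs, label_probs))

# The loss is minimized when the prediction matches the label distribution,
# so the model isn't pushed to full confidence on noisy examples.
matched = soft_cross_entropy([0.8, 0.2], [0.8, 0.2])
overconfident = soft_cross_entropy([0.99, 0.01], [0.8, 0.2])
print(matched < overconfident)  # True
```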



FACT Confirms Premier League Anti-Piracy Action Against IPTV Suppliers [TorrentFreak]

Last month, the North West Regional Organised Crime Unit (NWROCU) said it had targeted people involved in the supply of ‘pirate’ IPTV subscriptions and the sale of modified set-top boxes.

Its ‘disruption team’ reported working with GAIN (Government Agency Intelligence Network) and the Federation Against Copyright Theft, targeting people in Wrexham and Blackburn. It now transpires that a broader operation took place.

This morning, FACT revealed that following a collaboration with the Premier League, aimed at disrupting the availability of illegal sports streams ahead of the new 2019/2020 football season, it had teamed up with law enforcement agencies to serve cease-and-desist notices.

FACT’s Eddy Leviten, who has just returned to the anti-piracy outfit following a period at the Alliance for Intellectual Property as its Director-General, informs TorrentFreak that actions were “taken across the country”.

In total, 16 premises were targeted in the operation, with cease-and-desist notices served on individuals suspected of supplying illegal sports streams.

Leviten declined to be more precise on the exact nature of the targets at this stage, but confirmed that “those involved were all engaged at a level sufficient to attract our interest.”

However, FACT does note that those targeted were all “promoting unauthorized access to premium television content” which, combined with NWROCU’s earlier comments about IPTV, would be consistent with lower-level IPTV subscription re-sellers.

These are individuals who operate no service of their own but buy ‘credits’ from bigger players in order to offer packages to the public. NWROCU previously mentioned “cracked online television boxes” too, which are potentially Android-style devices configured for piracy. Again, no further details are currently available.

Nevertheless, the involvement of the Regional Organised Crime Unit (ROCU) Disruption Teams may raise alarm bells with those operating in a similar niche. FACT, in conjunction with its Premier League partner, hopes that the cease-and-desist notices will stop the activity in hand while “deterring others from getting involved.”

Kieron Sharp, FACT Chief Executive says that last month’s activity is just one of the tactics being deployed against people committing offenses that affect both rightsholders and broadcasters.

“We have a program of continuous activity targeting different elements of the global piracy landscape, with consideration given to the scale of the offending so that the most effective and proportionate response is deployed,” Sharp says.

“The message is clear. If you are involved in any way in providing illegal streaming services, on any scale, you are not invisible or immune from action from FACT, rights owners and law enforcement.”

National GAIN Coordinator Lesley Donovan adds that the serving of cease-and-desist notices is intended to send a message to those “trying to make a quick buck” out of illegal streaming.

“Their actions are feeding a wider illicit industry which not only denies the economy of millions both in copyright theft and undeclared income but poses a direct risk to our communities due to their lack of parental controls and fire safety,” Donovan says.

“This type of activity is also often a cog in a larger criminal machine, often ultimately funding drugs, weapons and people trafficking.”

The claims of higher-tier offending such as those detailed by Donovan are often cited in connection with all forms of piracy. However, it is extremely rare (perhaps unheard of) for those claims to be backed up with publicly-available evidence. There have been claims in the media that paramilitary groups are involved in some way in Ireland, but no evidence beyond that.

Just recently, TorrentFreak spoke with one IPTV provider who contested the notion that most players in the market are high-level criminals involved in anything other than the supply of unlicensed streams. Since the matter has now been raised again, we’ll reestablish contact to see if they are prepared to respond to the allegations.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Raphaël Hertzog: Promoting Debian LTS with stickers, flyers and a video [Planet Debian]

With the agreement of the Debian LTS contributors funded by Freexian, earlier this year I decided to spend some Freexian money on marketing: we sponsored DebConf 19 as a bronze sponsor and we prepared some stickers and flyers to give out during the event.

The stickers only promote the Debian LTS project with the semi-official logo we have been using and a link to the wiki page. You can see them on the back of a laptop in the picture below. As you can see, we have made two variants with different background colors:

The flyers and the video are meant to introduce the Debian LTS project and to convince companies to sponsor the Debian LTS project through the Freexian offer. Those are short documents and they can’t explain the precise relationship between Debian LTS and Freexian. We try to show that Freexian is just an intermediary between contributors and companies, but some persons will still have the feeling that a commercial entity is organizing Debian LTS.

Check out the video on YouTube:

The inside of the flyer looks like this:

Click on the picture to see it full size

Note that due to some delivery issues, we have left-over flyers and stickers. If you want some to give out during a free software event, feel free to reach out to me.



CodeSOD: I'm Sooooooo Random, LOL [The Daily WTF]

There are some blocks of code that require a preamble, and an explanation of the code and its flow. Often you need to provide some broader context. Sometimes, you get some code like Wolf found,...

Widest common denominator [Seth's Blog]

If you’re creating something where widespread inputs, usage and adoption lead to significant benefits, it’s worth considering who you’re excluding.

The curb cut turned out not simply to be a boon for wheelchair users. At low cost, it opened the sidewalk to a significantly larger audience of strollers, delivery people and skateboarders, too.

Often, we make the mistake of focusing on too broad an audience. Obsessing about the minimum viable audience forces us to make something that’s truly better. But once we identify those we seek to serve, broadening access is a powerful way to add impact.

This isn’t a matter of high or low, more or less. It’s the power of thinking hard about who it’s for and what it’s for.


Raphaël Hertzog: Freexian’s report about Debian Long Term Support, July 2019 [Planet Debian]

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 199 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Adrian Bunk got 8h assigned but did nothing (plus 10 extra hours from June), thus he is carrying over 18h to August.
  • Ben Hutchings did 18.5 hours (out of 18.5 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 21 hours (out of 18.5h allocated + 17h remaining, thus keeping 14.5 extra hours for August).
  • Hugo Lefeuvre did 9.75 hours (out of 18.5 hours, thus carrying over 8.75h to August).
  • Jonas Meurer did 19 hours (out of 17 hours allocated plus 2h extra hours June).
  • Markus Koschany did 18.5 hours (out of 18.5 hours allocated).
  • Mike Gabriel did 15.75 hours (out of 18.5 hours allocated plus 7.25 extra hours from June, thus carrying over 10h to August).
  • Ola Lundqvist did 0.5 hours (out of 8 hours allocated plus 8 extra hours from June, then he gave 7.5h back to the pool, thus he is carrying over 8 extra hours to August).
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 18.5 hours (out of 18.5 hours allocated).
  • Thorsten Alteholz did 18.5 hours (out of 18.5 hours allocated).

Evolution of the situation

July was different from other months. First, some people were on actual vacations, while 4 of the contributors above met in Curitiba, Brazil, for DebConf19. There, a talk about LTS (slides, video) was given, followed by a Q&A session. A new promotional video about Debian LTS, aimed at potential sponsors, was also shown there for the first time.

DebConf19 was also a success with respect to on-boarding of new contributors: we found three potential new contributors, one of whom is already in training.

The security tracker (now for oldoldstable as Buster has been released and thus Jessie became oldoldstable) currently lists 51 packages with a known CVE and the dla-needed.txt file has 35 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.



Jaskaran Singh: GSoC Final Report [Planet Debian]


The Debian Patch Porting System aims to systematize and partially automate the security patch porting process.

In this Google Summer of Code (2019), I wrote a webcrawler to extract security patches for a given security vulnerability identifier. This webcrawler or patch-finder serves as the first step of the Debian Patch Porting System.

The Patch-finder should recognize numerous vulnerability identifiers. These identifiers can be security advisories (DSA, GLSA, RHSA), vulnerability identifiers (OVAL, CVE), etc. So far, it can identify CVE, DSA (Debian Security Advisory), GLSA (Gentoo Linux Security Advisory) and RHSA (Red Hat Security Advisory).

Each vulnerability identifier has a list of entrypoint URLs associated with it. These URLs are used to initiate the patch finding.

Vulnerabilities that are not CVEs are treated as generic vulnerabilities. If a generic vulnerability is given, its “aliases” (i.e. CVEs that are related to the generic vulnerability) are determined. This method was chosen because CVEs are quite possibly the most widely used vulnerability identifiers and thus would have the largest number of patches associated with them. Once the aliases are determined, the entrypoint URLs of the aliases are crawled for the patch-finding.
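
A minimal sketch of that alias-resolution step (the alias table and entrypoint URL here are made-up placeholders, not the project's real data):

```python
# Non-CVE identifiers are first expanded into their related CVEs ("aliases"),
# and the CVE entrypoint URLs are then handed to the crawler.
ALIASES = {
    "DSA-4495-1": ["CVE-2019-0001"],  # hypothetical mapping for illustration
}
CVE_ENTRYPOINT = "https://security-tracker.example.org/cve/{id}"  # placeholder

def crawl_targets(vuln_id: str) -> list:
    cves = [vuln_id] if vuln_id.startswith("CVE-") else ALIASES.get(vuln_id, [])
    return [CVE_ENTRYPOINT.format(id=c) for c in cves]

print(crawl_targets("DSA-4495-1"))
# ['https://security-tracker.example.org/cve/CVE-2019-0001']
```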

The Patch-finder is based on the web crawling and scraping framework Scrapy.

What was done:

During these three months, I have:

  • Used Scrapy to implement a spider to collect patch links.
  • Implemented a recursive patch-finding process. Any links that the patch-finder finds on a page (in a certain area of interest, of course) that are not patch links are followed.
  • Implemented a crawler to extract patches from Debian Packages.
  • Implemented a crawler to extract patches from a given GitHub repository.

Here’s a link to the patch-finder’s GitHub repository, which I used for GSoC.


There is a lot more stuff to be done, from solving small bugs to implementing major features. Some of these issues are on the project’s GitHub issue tracker here. Following is a summary of these issues and a few more ideas:

  • A way to uniquely identify patches, so that the same patch is not scraped and collected more than once.
  • A Database, and a corresponding database API.
  • Store patches in the database, along with any other information.
  • Collect not only patches but other information relevant to the vulnerability.
  • Integrate the Github crawler/parser in the crawling process.
  • A way to check the relevancy of the patch to the vulnerability. A naive solution is, of course, to simply check for mention of the vulnerability ID in the patch description.
  • Efficient page filters. Certain links should not be crawled because it is obvious they will not yield any patches, for example homepages.
  • A better way to scrape links, rather than using a URL’s corresponding xpath.
  • A more efficient testing framework.
  • More crawlers/parsers.
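The naive relevancy check mentioned above could be as simple as a case-insensitive substring search (a sketch under that assumption, not the project's code):

```python
def patch_mentions_vulnerability(patch_text: str, vuln_id: str) -> bool:
    # Naive relevancy check: does the patch description or diff
    # mention the vulnerability ID anywhere, ignoring case?
    return vuln_id.lower() in patch_text.lower()
```

This would produce false negatives for patches that fix a vulnerability without naming it, which is why a better relevancy measure remains an open issue.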

Personal Notes:

Google Summer of Code has been a super comfortable and fun experience for me. I’ve learnt tonnes about Python, Open Source and Software Development. My mentors Luciano Bello and László Böszörményi have been immensely helpful and have guided me through these three months.

I plan to continue working on this project and hopefully develop it to a state where Debian and everyone who needs it can use it conveniently.


B Balls Duo by Lucy Bellwood [Oh Joy Sex Toy]

B Balls Duo by Lucy Bellwood

And just like that, Lucy's BACK. If you’ve been keeping up to date with Erika's Instagram or Patreon, you’ll know that things haven’t been going great over here. Lots of stress and mental health trubs, and long story short – we’re struggling! Who’d have thought doing two books at the same time in one year […]


Research du Temps Perdu – DORK TOWER 07.09.19 [Dork Tower]

Dork Tower is updated Tuesdays and Thursdays. For as little as $1 a month, you can join the Army of Dorkness – and help bring more Dork Tower to the world! We’re nearly 80% of the way towards three comics a week! Become a Dork Tower Patreon backer – you get everlasting gratitude. Oh, wait – and also swag!


YouTube Sues Alleged Scammer Over DMCA Extortion Scheme [TorrentFreak]

Obtaining multiple unresolved copyright complaints on a YouTube account can prove fatal to those who rely on the platform to make a living.

For those obtaining “three strikes”, it can mean the closure of an entire channel and, along with it, the loss of access to potentially hundreds of otherwise revenue-generating videos.

Back in January, it was reported that a YouTuber known as ‘Obbyraidz’, who focuses on Minecraft content, was having this system turned against him.

After receiving two bogus strikes against his account, he took to Twitter to complain that he was being extorted by a scammer identifying as ‘Vengeful Flame’, who threatened a third and debilitating strike unless money was paid via PayPal or bitcoin.

A second YouTuber, known online as ‘Kenzo’ was also given similar treatment, with the scammer demanding money not to file complaints that could terminate his account.

Now, however, the tables are being turned after YouTube itself filed a complaint in federal court against Nebraska-resident Christopher Brady, the person who allegedly attempted to defraud both Obbyraidz and Kenzo.

“Defendant, Christopher L. Brady (‘Brady’), has repeatedly attempted to harass and extort money from YouTube content creators through bogus allegations of copyright infringement,” the complaint filed Monday begins.

“This lawsuit seeks to hold him accountable for that misconduct, and for the damage he has caused to YouTube.”

Detailing the DMCA takedown process in general and noting that notices can be used “maliciously to secure the removal of content that was not legitimately claimed to be infringing”, YouTube states it’s in a position to bring an action against a sender of bogus notices for damages.

“This is such an action,” the complaint reads.

According to YouTube, Brady sent the video platform dozens of DMCA notices falsely claiming that content posted by YouTube users infringed his supposed copyrights. He did this as part of a scheme to “harass and extort” innocent users, YouTube continues, in order to pressure them into payment and the avoidance of account closures.

Citing the work of three YouTubers – Obbyraidz, Kenzo and Cxlvxn – YouTube notes that between them they have uploaded around 1,000 videos related to video gaming. All are members of the YouTube Partner program, earning revenue from their work.

Brady allegedly targeted Kenzo and Obbyraidz “among others” by sending false DMCA notices to YouTube, claiming that he was the original creator of their videos, certifying as much “under penalty of perjury.” YouTube said it acted on these false claims, removing the videos.

However, when Kenzo and Obbyraidz went public with the extortion attempts, YouTube launched an investigation, restored the videos, and removed the strikes against their accounts.

The complaint alleges that in June and July 2019, Brady sent four more fraudulent notices, this time targeting Cxlvxn. However, this appears to have been an attempt to have Cxlvxn file a DMCA counter-notification, something that exposed his home address to Brady.

On July 10, 2019, six days after the counter-notification was filed, Cxlvxn reported he’d been ‘swatted’, something which YouTube describes as “the act of making a bogus call to emergency services in an attempt to bring about the dispatch of a large number of armed police officers to a particular address.” YouTube believes Brady was responsible.

As a result of the above actions, YouTube states that Brady is responsible for violations of 17 U.S.C. § 512(f). The company says it successfully traced back at least 15 online identities to the man, an investigation which caused it to expend “substantial sums” in an effort to bring the behavior to a halt.

The company is demanding preliminary and permanent injunctions against Brady, compensation to be decided at trial, costs, attorneys’ fees, and any further relief the court deems proper.

The YouTube complaint filed in Nebraska can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.


Don't Hurt Me No More [QC RSS]



Dirk Eddelbuettel: RcppQuantuccia 0.0.3 [Planet Debian]

A maintenance release of RcppQuantuccia arrived on CRAN earlier today.

RcppQuantuccia brings the Quantuccia header-only subset / variant of QuantLib to R. At the current stage, it mostly offers date and calendaring functions.

This release was triggered by some work CRAN is doing on updating C++ standards for code in the repository. Notably, under C++11 some constructs such as ptr_fun, bind1st, bind2nd, … are now deprecated, and CRAN prefers that the code base not issue such warnings (as e.g. now seen under clang++-9). So we updated the corresponding code in a good dozen or so places to the (more current and compliant) code from QuantLib itself.

We also took this opportunity to significantly reduce the footprint of the sources and the installed shared library of RcppQuantuccia. One (unexported) feature was pricing models via Brownian Bridges based on quasi-random Sobol sequences. But the main source file for these sequences comes in at several megabytes in size, and allocates a large number of constants. So in this version the file is excluded, making the current build of RcppQuantuccia lighter in size and more suitable for the (simpler, popular and trusted) calendar functions. We also added a new holiday to the US calendar.

The complete list of changes follows.

Changes in version 0.0.3 (2019-08-19)

  • Updated Travis CI test file (#8).

  • Updated US holiday calendar data with G H Bush funeral date (#9).

  • Updated C++ use to not trigger warnings [CRAN request] (#9).

  • Comment-out pragmas to suppress warnings [CRAN Policy] (#9).

  • Change build to exclude Sobol sequence reducing file size for source and shared library, at the cost of excluding market models (#10).

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Eat. Sleep. Build. Repeat. The Humble Book Bundle: Build It... [Humble Bundle Blog]

Eat. Sleep. Build. Repeat.

The Humble Book Bundle: Build It Yourself by Chronicle Books has over $490 worth of ebooks filled with projects to keep you busy! Turn the printed page into a work of art with Art Made From Books, stitch a Morse code quilt with World of Geekcraft, and learn best practices on making a living by making things with How to Make It. Pay what you want and support charity. You choose where your money goes!



[$] On-disk format robustness requirements for new filesystems []

The "Extendable Read-Only File System" (or "EROFS") was first posted by Gao Xiang in May 2018; it was merged into the staging tree for the 4.19 release. There has been a steady stream of work on EROFS since then, and its author now thinks that it is ready to move out of staging and join the other official filesystems in the kernel. It would seem, though, that there is one final hurdle that it may have to clear: robustness in the face of a corrupted on-disk filesystem image. That raises an interesting question: to what extent do new filesystems have to exhibit a level of robustness that is not met by the filesystems that are currently in heavy use?

Podcast: A cycle of renewal, broken: How Big Tech and Big Media abuse copyright law to slay competition [Cory Doctorow – Boing Boing]

In my latest podcast (MP3), I read my essay "A Cycle of Renewal, Broken: How Big Tech and Big Media Abuse Copyright Law to Slay Competition", published today on EFF's Deeplinks; it's the latest in my ongoing series of case-studies of "adversarial interoperability," where new services unseated the dominant companies by finding ways to plug into existing products against those products' manufacturers. This week's installment recounts the history of cable TV, and explains how the legal system in place when cable was born was subsequently extinguished (with the help of the cable companies who benefitted from it!) meaning that no one can do to cable what cable once did to broadcasters.

In 1950, a television salesman named Robert Tarlton put together a consortium of TV merchants in the town of Lansford, Pennsylvania to erect an antenna tall enough to pull down signals from Philadelphia, about 90 miles to the southeast. The antenna connected to a web of cables that the consortium strung up and down the streets of Lansford, bringing big-city TV to their customers — and making TV ownership for Lansfordites far more attractive. Though hobbyists had been jury-rigging their own "community antenna television" networks since 1948, no one had ever tried to go into business with such an operation. The first commercial cable TV company was born.

The rise of cable over the following years kicked off decades of political controversy over whether the cable operators should be allowed to stay in business, seeing as they were retransmitting broadcast signals without payment or permission and collecting money for the service. Broadcasters took a dim view of people using their signals without permission, which is a little rich, given that the broadcasting industry itself owed its existence to the ability to play sound recordings over the air without permission or payment.

The FCC brokered a series of compromises in the years that followed, coming up with complex rules governing which signals a cable operator could retransmit, which ones they must retransmit, and how much all this would cost. The end result was a second way to get TV, one that made peace with—and grew alongside—broadcasters, eventually coming to dominate how we get cable TV in our homes.

By 1976, cable and broadcasters joined forces to fight a new technology: home video recorders, starting with Sony's Betamax recorders. In the eyes of the cable operators, broadcasters, and movie studios, these were as illegitimate as the playing of records over the air had been, or as retransmitting those broadcasts over cable had been. Lawsuits over the VCR continued for the next eight years. In 1984, the Supreme Court finally weighed in, legalizing the VCR, and finding that new technologies were not illegal under copyright law if they were "capable of substantial noninfringing uses."




Monday, 19 August


New Hampshire court to patent troll: it's not libel when someone calls you a "patent troll" [Cory Doctorow – Boing Boing]

New Hampshire's Supreme Court has ruled that calling someone a "patent troll" is not defamatory because "patent troll" is a statement of opinion and can neither be factually proved nor disproved.

The case was brought by Automated Transactions Limited, who claims a broad patent on machines that dispense cash (ATL founder David Barcelou invented some unsuccessful gaming machines in the 1990s and received several patents they say cover the normal operations of ATMs and other common machines). ATL has made millions demanding patent license fees.

Bob Stier, a lawyer who represented some of ATL's targets, was quoted in a 2013 interview in which he called ATL a "patent troll" and repeated the characterization on his firm's website. The Credit Union National Association also called ATL a "patent troll" in presentations they gave about demand letters served to their members.

This prompted ATL to file suit against Stier, CUNA, the ABA and others in 2016, claiming that the terms "troll," "shakedown" and "extortion" were all defamatory. The case made it to the New Hampshire Supreme Court, which dismissed ATL's case and said: "The challenged statement, that ATL is a well-known patent troll, is one of opinion rather than fact. The statement is an assertion that, among other things, ATL is a patent troll because its patent-enforcement activity is 'aggressive.' This statement cannot be objectively verified."

Officially, the ruling by the New Hampshire Supreme Court only applies in New Hampshire. But state courts do pay attention to rulings in other states. If trolls sue for defamation in other states, the defendants will be able to cite the New Hampshire precedent in their defense, making it more likely that they'll ultimately prevail.

They called you a troll, deal with it—court slaps down libel lawsuit [Timothy B Lee/Ars Technica]


A Brief Note on Mary Robinette Kowal, On the Day After Her Hugo Best Novel Win [Whatever]

A number of years ago, and during one of those occasional mud-flinging spats that happen in science fiction, a person who I will mercifully not name now tried to dismiss and minimize Mary Robinette Kowal as “no one you should have heard of, and no one of consequence.” This was when Mary Robinette had already become not just a writer of note, but someone widely admired and respected by her peers and colleagues for the work she had done for the community of writers and creators.

The intent behind this person’s words was cruel, and I believe intended to insult and to wound. After no small outcry, this person apologized, and Mary Robinette, who is one of the most gracious people I know, accepted it. But I for one never forgot either the insult to her, or the dismissive intent behind it.

Last night, Mary Robinette Kowal won the Hugo Award for her novel The Calculating Stars. This follows her and her novel also winning the Nebula and Locus Awards. Mary Robinette wrote a tremendous book, and right now she stands at the pinnacle of her field, with all the esteem that it could offer to her, all of which she has absolutely and definitively earned. I could not be prouder of my friend if I tried, not only because she is my friend, but because of her talent, her grace, her strength and her perseverance. I admire her more than I can say.

She has given the best answer to anyone who ever dared to say she was no one you should have heard of: She kept speaking. She kept speaking, and the world listened. And then, having listened, it celebrated what she had to say.

Congratulations, Mary Robinette. Keep speaking.


No pythons were harmed in the making of this bundle. Python... [Humble Bundle Blog]

No pythons were harmed in the making of this bundle.

Python programmers rejoice for No Starch Press is back for our latest bundle! Get ebooks like Python Playground, Mission Python, and Invent Your Own Computer Games with Python. Plus, your purchase will support The No Starch Press Foundation and Python Software Foundation!


Court Denies Default Judgment Against ‘Cheating’ Fortnite Kid, In Spite of Mom’s ‘Defense’ [TorrentFreak]

Two years ago, Epic Games decided to take several Fortnite cheaters to court, accusing them of copyright infringement.

Several of these lawsuits have been settled but there is one that proved to be somewhat of a challenge.

One of the alleged cheaters turned out to be a minor who’s also accused of demonstrating, advertising and distributing the cheat via his YouTube channel. The game publisher wasn’t aware of this when it filed the lawsuit, but the kid’s mother let the company know in clear terms.

“This company is in the process of attempting to sue a 14-year-old child,” the mother informed the Court back in 2017.

The letter was widely publicized in the press but Epic Games didn’t back off. Due to his young age, the North Carolina District Court ordered that the kid, who operated the “Sky Orbit” YouTube channel, should only be referred to by his initials, C.R. The case itself continued, however, albeit slowly.

Since C.R. didn’t retain an attorney or otherwise respond in court, Epic filed a motion for default judgment. The court didn’t accept this right away, however, instead deciding that the mother’s letter should be treated as a motion to dismiss the case.

Among other defenses, the mother highlighted that the EULA, which the game publisher relies heavily upon in the complaint, isn’t legally binding. The EULA states that minors require permission from a parent or legal guardian, which was not the case here.

The court reviewed these arguments but concluded that they were not sufficient to dismiss the case. After that ruling things went quiet. Neither C.R. nor his mom responded, which prompted Epic Games to file a motion for a default judgment again.

Epic isn’t looking for any massive damages, but it mainly wants C.R. to refrain from any future infringing activities. This includes cheating as well as posting videos on YouTube where this type of activity is promoted.

Generally speaking, such motions are easily granted, since there is no opposing party to dispute any claims. However, in this case, the court decided differently, with the age of the alleged cheater playing an important role.

The Federal Rules of Civil Procedure do not allow default judgments against minors who haven’t been represented. Epic tried to cover this by arguing that the mother’s letter counted as representation, but the North Carolina Court disagrees.

In his order denying the motion for default judgment, US District Court Judge Malcolm J. Howard mentions that the court previously emphasized that the letter in question was not seen as an “official appearance by anyone on behalf of the minor defendant.”

“In light of the circumstances herein, based on the facts currently before the court, and pursuant to Rule 55 of the Federal Rules of Civil Procedure, the court must deny plaintiff’s motion for default judgment,” Judge Malcolm J. Howard concludes.

This means that after roughly two years, Epic is back to square one and that the accused cheater will ‘walk’ free.

Whether C.R. is still involved in any cheating activity is unknown. His original “Sky Orbit” YouTube account is no longer active though, and a backup was deleted as well, due to “multiple third-party claims of copyright infringement.”

A copy of the order denying the motion for a default judgment is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Antoine Beaupré: KNOB attack: Is my Bluetooth device insecure? [Planet Debian]

A recent attack against Bluetooth, called KNOB, made waves last week. In essence, it allows an attacker to downgrade the security of a Bluetooth connection so much that it's possible for the attacker to break the encryption key and spy on all the traffic. The attack is so devastating that some have described it as the "stop using bluetooth" flaw.

This is my attempt at answering my own lingering questions about "can I still use Bluetooth now?" Disclaimer: I'm not an expert in Bluetooth at all, and just base this analysis on my own (limited) knowledge of the protocol, and some articles (including the paper) I read on the topic.

Is Bluetooth still safe?

It really depends what "safe" means, and what your threat model is. I liked how the Ars Technica article put it:

It's also important to note the hurdles—namely the cost of equipment and a surgical-precision MitM—that kept the researchers from actually carrying out their over-the-air attack in their own laboratory. Had the over-the-air technique been easy, they almost certainly would have done it.

In other words, the active attack is really hard to do, and the researchers didn't actually do one at all! It's a theoretical flaw, at this point, and while it's definitely possible, it's not what the researchers did:

The researchers didn't carry out the man-in-the-middle attack over the air. They did, however, root a Nexus 5 device to perform a firmware attack. Based on the response from the other device—a Motorola G3—the researchers said they believe that both attacks would work.

This led some researchers to (boldly) say they would still use a Bluetooth keyboard:

Dan Guido, a mobile security expert and the CEO of security firm Trail of Bits, said: "This is a bad bug, although it is hard to exploit in practice. It requires local proximity, perfect timing, and a clear signal. You need to fully MitM both peers to change the key size and exploit this bug. I'm going to apply the available patches and continue using my bluetooth keyboard."

So, what's safe and what's not, in my much humbler opinion?

Keyboards: bad

The attack is a real killer for Bluetooth keyboards. If an active attack is leveraged, it's game over: everything you type is visible to the attacker, and that includes, critically, passwords. In theory, one could even input keyboard events into the channel, which allows basically arbitrary code execution on the host.

Some, however, made the argument that it's probably easier to implant a keylogger in the device than actually do that attack, but I disagree: this requires physical access, while the KNOB attack can be done remotely.

How far this can be done, by the way, is still open to debate. The Telegraph claimed "a mile" in a click-bait title, but I think such an attacker would need to be much closer for this to work, more in the range of "meters" than "kilometers". But it still means "a black van sitting outside your house" instead of "a dude breaking into your house", which is a significant difference.

Other input devices: hum

I'm not sure mice and other input devices are such a big deal, however. Extracting useful information from those mice moving around the screen is difficult without seeing what's behind that screen.

So unless you use an on-screen keyboard or have special input devices, I don't think those are such a big deal when spied upon.

They could be leveraged with other attacks to make you "click through" some things an attacker would otherwise not be able to do.

Speakers: okay

I think I'll still keep using my Bluetooth speakers. But that's because I don't have much confidential audio I listen to. I listen to music, movies, and silly cat videos; not confidential interviews with victims of repression that should absolutely have their identities protected. And if I ever come across such material, I now know that I should not trust that speaker.

Otherwise, what's an attacker going to do here: listen to my (ever decreasing) voicemail (which is transmitted in cleartext by email anyways)? Listen to that latest hit? Meh.

Do keep in mind that some speakers have microphones in them as well, so that's not the entire story...

Headsets and microphones: hum

Headsets and microphones are another beast, as they can listen to other things in your environment. I do feel much less comfortable using those devices now. What makes the entire thing really iffy is that some speakers do have microphones in them, and all of a sudden everything around you can listen in on your entire life.

(It seems like a given, with "smart home assistants" these days, but I still like to think my private conversations at home are private, in general. And I generally don't want to be near any of those "smart" devices, to be honest.)

One mitigating circumstance here is that the attack needs to happen during the connection (or pairing? still unclear) negotiation, which doesn't happen that often if everything works correctly. Unfortunately, exactly this happens quite often with speakers and headsets. That's because many of those devices stupidly have low limits on the number of devices they can pair with. For example, the Bose Soundlink II can only pair with 8 other devices. If you count three devices per person (laptop, workstation, phone), you quickly hit the limit when you move the device around. So I end up re-pairing that device quite often.

And that would be if the attack takes place during the pairing phase. As it turns out, the attack window is much wider: the attack happens during the connection stage (see Figure 1, page 1049 in the paper), after devices have paired. This actually happens way more often than just during pairing. Any time your speaker or laptop goes to sleep, it will disconnect. Then, to start using the device again, the BT layer will renegotiate that key size, and the attack can happen again.

(I have written the authors of the paper to clarify at which stage the attack happens and will update this post when/if they reply. Update: Daniele Antonioli has confirmed the attack takes place at connect phase.)

Fortunately, the Bose Soundlink II has no microphone, for which I'm thankful. But my Bluetooth headset does have a microphone, which makes me less comfortable.

File and contact transfers: bad

Bluetooth, finally, is also used to transfer stuff other than audio of course. It's clunky, weird and barely working, but it's possible to send files over Bluetooth, and some headsets and car controllers will ask you permission to list your contacts so that "smart" features like "OK Google, call dad please" will work.

This attack makes it possible for an attacker to steal your contacts, when connecting devices. It can also intercept file transfers and so on.

That's pretty bad, to say the least.

Unfortunately, the "connection phase" mitigation described above is less relevant here. It's less likely you'll be continuously connecting two phones (or your phone and laptop) together for the purpose of file transfers. What's more likely is you'll connect the devices for explicit purpose of the file transfer, and therefore an attacker has a window for attack at every transfer.

I don't really use the "contacts" feature anyways (because it creeps me the hell out in the first place), so that's not a problem for me. But the file transfer problem will certainly give me pause the next time I ever need to feel the pain of transferring files over Bluetooth again, which I hope is "never".

It's interesting to note the parallel between this flaw, which will mostly affect Android file transfers, and the recent disclosure of flaws in Apple's AirDrop protocol, which was similarly believed to be secure even though it was opaque and proprietary. Now, think a bit about how AirDrop uses Bluetooth to negotiate part of the protocol, and you can feel, as I do, that everything in security just keeps crashing down and we don't seem to be able to make any progress at all.

Overall: meh

I've always been uncomfortable with Bluetooth devices: the pairing process has no sort of authentication whatsoever. The best you get is to enter a pin, and it's often "all zeros" or some trivially easy thing to bruteforce. So Bluetooth security has always felt like a scam, and I especially never trusted keyboards with passwords, in particular.

Like many branded attacks, I think this one might be somewhat overstated. Yes, it's a powerful attack, but Bluetooth implementations are already mostly proprietary junk that is undecipherable from the open-source world. There are no (or very few) open hardware implementations, so it's somewhat expected that we find things like this.

I also found the response from the Bluetooth SIG particularly alarming:

To remedy the vulnerability, the Bluetooth SIG has updated the Bluetooth Core Specification to recommend a minimum encryption key length of 7 octets for BR/EDR connections.

7 octets is 56 bits. That's the equivalent of DES, which was broken in 56 hours over 20 years ago. That's far from enough. But what's more disturbing is that this key-size negotiation protocol might be there "because 'some' governments didn't want other governments to have stronger encryption", i.e. it would be a backdoor.

The 7-byte lower bound might also be there because of Apple lobbying. Their AirPods were implemented in a non-standards-compliant way and already have that lower 7-byte bound, so by fixing the standard to match one Apple implementation, they would reduce the cost of their recalls/replacements/upgrades.

Overall, this behavior of the standards body is what should make us suspicious of any Bluetooth device going forward, and question the motivations of the entire Bluetooth standardization process. We can't use 56-bit keys anymore, and I can't believe I need to say so explicitly, but it seems that's where we're at with Bluetooth these days.


An appreciation for Samuel Delany [Cory Doctorow – Boing Boing]

Samuel R "Chip" Delany is a science fiction pioneer: a brilliant literary stylist with dazzling ideas who was one of the field's first openly queer writers, and one of the first Black writers accepted into the field. He is one of the fathers of afrofuturism.

Delany's work is hugely influential on generations of writers, and I've been writing about his work here for nearly two decades (!). In 2013, he received a well-deserved honor when the Science Fiction Writers of America named him one of the field's Grand Masters. More recently, his journals were published after a crowdfunder. I got to have drinks with Chip last fall after a speech at Swarthmore, and was charmed and delighted by his company.

So it's very exciting to see him profiled at length in the New York Times by the queer novelist Jordy Rosenberg, who describes the influence that Delany's work exerted over their career and provides a brief guide to reading Delany.

The emotional dynamism of Delany’s sentences has been perhaps less acknowledged than his world-building, or the sweep of his vision. But when asked to speak about writing as a practice, Delany himself often turns to the art of sentences, and of how to imbue words with such “ekphrastic force” that they summon the material presence of an imagined world. When Korga and Marq return to themselves they are awe-struck, struggling to narrate the intensity of their own transformative experience. It is impossible not to hear in that a metatextual echo of the obsession of Delany’s practice: that of creating the most immersive possible aesthetic experience for us, his readers and devoted enthusiasts.

“I was a dragon,” Korga wonders aloud. And then, struck with the impossibility of communicating the exquisiteness of having been a dragon in flight, Korga reaches for the most apt simile he can imagine. “I was a dragon? I was a dragon!” he cries. “It’s like reading.”

In Praise of Samuel R. Delany [Jordy Rosenberg/New York Times]

(Thanks, DJ Spooky!)

(Image: Houari B, CC BY-SA)

More than 20 Texas cities and towns have been taken hostage by ransomware [Cory Doctorow – Boing Boing]

The American ransomware epidemic shows no signs of slowing, as the confluence of underinvestment in IT and information security and the NSA's reckless stockpiling of computer vulnerabilities means that petty criminals can extort vast sums from distant municipalities by seizing their entire networked infrastructure.

Currently, more than 20 towns and cities in Texas are being held ransom by criminals. The state believes all of the hijackings -- which landed Saturday morning -- are the work of a single "threat actor."

Update on August 2019 Texas Cyber Incident [Texas Department of Information Resources]


Jonathan Dowland: Shared notes and TODO lists [Planet Debian]

When it comes to organising myself, I've long been anachronistic. I've relied upon paper notebooks for most of my life. In the last 15 years I've stuck to a particular type of diary/notebook hybrid, with a week-to-view on the left-hand side of pages and lined notebook pages on the right.

This worked well for me for my own personal stuff but obviously didn't work well for family things that need to be shared. Trying to find systems that work for both my wife and me has proven really challenging. The best we've come up with so far is a shared (IMAP) account and Apple's notes apps.

On iOS, Apple's low-frills note-taking app lets you synchronise your notes with a mail account (over IMAP). It stores them individually in HTML format, one email per note page, in a mailbox called "Notes". You can set up note syncing to the same account from multiple devices, and so we have a "family" mailbox set up on both my phone and my wife's. I can also get at the notes using any other mail client if I need to.

This works surprisingly well, but not perfectly. In particular synchronising changes to notes can go wrong if we both have the same note page open at the same time. The failure mode is not the worst: it duplicates the note into two; but it's still a problem.

Can anyone recommend a simple, more robust system for sharing notes — and task lists — between people? For task lists, it would be lovely (but not essential) if we could tick things off. At the moment we manage that just as free-form text.


Owner of Phoenix apartment building serves eviction notices to every tenant so he can turn their homes into unlicensed hotel rooms [Cory Doctorow – Boing Boing]

Like many large US cities, Phoenix has a housing shortage, and, as in many of them, that crisis is worsened by the conversion of much of its housing stock into unlicensed hotel rooms through companies like Airbnb, VRBO, and Wanderjaunt, a new competitor in the unlicensed hotel room industry.

Arizona's legislature has banned cities from putting limits on the process of converting homes to unlicensed hotel rooms, and that has created a local eviction epidemic. For example, tenants in Thomas McPherson's Historic Westminster Apartments (owned through his Fenix Private Capital Group) say they were all served with eviction notices that announced that McPherson had "elected to take the experience into a progressive new direction and turn the entire building into short term rentals."

(McPherson told the Phoenix New Times that they were incorrect in asserting that he was evicting all of his tenants, but declined to elaborate; units in the building are already listed on Wanderjaunt for $75/night/guest.)

Wanderjaunt's founder, Michael Chen, has publicly explained that his company's growth strategy is to convince owners of rental properties to evict their tenants from their homes and turn a larger profit through short-term rentals; he called it "an arbitrage opportunity between long- and short-term rentals."

For one tenant of Westminster, who requested anonymity out of fear of retaliation from her landlord, there's no question that McPherson is planning to turn her home into a hotel room. About a month or two ago, the tenant said she emailed her property manager asking if it would be possible to shorten her lease. The property manager told her that McPherson had new plans for the building, but assured her that she didn't have to worry about moving out for another two to four months.

A month later, the tenant received the same lease termination letter as Christensen.

"My entire life has changed. My fiancé lost his job and the day after, one of our good friends passed away. A week from that day we were being evicted," she said. "It's been really tough and stressful."

Downtown Apartment Building Cleared of Tenants to Open Short-Term Rental Hotel [Steven Hsieh/Phoenix New Times]

(Thanks, Ryon Moody!)

Ecofascism isn't new: white supremacy and exterminism have always lurked in the environmental movement [Cory Doctorow – Boing Boing]

It's easy to think of climate denial as a right-wing phenomenon, but a growing and ultra-violent strain of white-nationalism also embraces climate science, in the worst way possible.

Several of the recent white nationalist mass killers have described themselves as "ecofascists" and/or have deployed ecofascist rhetoric in their manifestos. The short version of ecofascism is that it's the belief that our planet has a "carrying capacity" that has been exceeded by the humans alive today and that we must embrace "de-growth" in the form of mass extermination of billions of humans, in order to reduce our population to a "sustainable" level.

In some ways, ecofascism is just a manifestation of "peak indifference": the idea that denial eventually self-corrects, as the debt built up by a refusal to pay attention to a real problem mounts and mounts, until it can no longer be denied. Eventually, the wildfires, floods, diseases (and ensuing refugee crises) overcome all but the most dedicated forms of bad-faith motivated reasoning and self-deception, and people start to switch sides from denying science to embracing it.

But there's an ugly side to peak indifference: that denialism can give way to nihilism. As activists seek to engage people with the urgent crisis, they describe it (correctly) as an existential threat whose time is drawing nigh. Once people acknowledge the threat, it's easy for them to conclude that it's too late to do anything about it ("Well, you were right, those cigarettes did give me lung-cancer, but now that I've got it, I might as well enjoy my last few years on earth with a cigarette between my lips").

Ecofascism is a form of nihilism, one that holds that it's easier to murder half the people on Earth than it is to reform our industrial practices to make our population sustainable. Leaving aside the obvious moral objections to this posture, there's also an important technical sense in which it is very wrong: we will need every mind and every body our species has to toil for generations to come, building seawalls, accommodating refugees, treating pandemic sufferers, working in more labor-intensive (and less resource-intensive) forms of agriculture, etc. etc. The exterminist doctrine assumes that we can know, before the fact, which humans are "surplus" and which ones might have the insight that lets us sequester carbon, cure a disease, or store renewable energy at higher densities.

But ecofascism isn't an entirely new phenomenon. Pastoralist and environmental thinking has always harbored a strain of white supremacy (the Nazi doctrine of Lebensraum was inextricably bound up with an environmental ideology of preserving habitats from "excess" people -- as well as from the wrong kind of people, whose inferior blood made them poor stewards of the land).

The connection between eugenics and environmentalism runs deep. One of the fathers of ecofascist thought is Madison Grant, who worked with Teddy Roosevelt to establish the US system of national parks, and also to establish a whiteness requirement for prospective US immigrants. This thread of thinking -- that there are too many people, and the wrong people are breeding -- carries forward with the environmental movement, with figures like John Tanton, who started his career as a local Sierra Club official, but went on to found the Federation for American Immigration Reform and co-found the Center for Immigration Studies, warning Americans to defend against a coming "Latin onslaught," revealing himself to be a full-blown white nationalist who is revered today as the ideological father of the ecofascist movement.

Meanwhile, the eco-left kept having its own brushes with xenophobia. In the early 2000s, the Sierra Club underwent an internecine struggle to reform its official anti-immigration stance and purge the white nationalists and xenophobes from its ranks. In the early 2010s, Earth First! had to oust co-founder Dave Foreman as his pro-environmental activism was overtaken by his anti-immigrant activism, with splinter groups like "Apply the Brakes" taking hard lines on borders and immigration.

Today, the ecofascist movement is closely aligned with the Trump administration, through links to Stephen Miller and Jeff Sessions. The former executive director of FAIR is now serving as Trump's citizenship and immigration services ombudsman. Ann Coulter demands that Americans choose between either "greening or browning" their future. Richard Spencer wraps white nationalism in green rhetoric, and Gavin McInnes has directly linked environmentalism to anti-immigration ideology.

Pushing back against this are two complementary strains of environmental thought: the bright greens who see democratically managed, urbanized, high technology as the way through the climate crisis (dense cities enable a circular economy, heal the metabolic rift, and leave more land free for habitat and carbon-sequestering trees); and the climate justice movement, which recognizes that poor, racialized people are the least responsible parties for carbonization, and the most vulnerable to the climate emergency, and emphasizes climate remediation steps that are led by, and responsive to, the priorities of indigenous people and the Global South.

In 2013, the Sierra Club and the environmental activist group threw their support behind immigration. Earth First! has gone a step further, calling national borders “scar[s] on the earth”.

“The entire earth is a closed system, and the idea that people crossing borders is going to have a major impact on any individual environment is absurd,” said Veery Marten, an Earth First! Journal collective member. “The culprits of this white supremacist violence citing alleged environmental interests are almost always middle class white men who are not lowering their own carbon footprint. It’s not really about consumption, it’s about who’s allowed to consume and gate-keeping these resources.”

Environmental activism is still far from fully making amends for its history, though, and indigenous groups and people of color still feel sidelined in the movement.

“The environmental movement is increasingly reconciling with what it means to try to protect land on this continent when this land is stolen,” said Marten.

'Bees, not refugees': the environmentalist roots of anti-immigrant bigotry [Susie Cagle/The Guardian]


News Post: #sponsored [Penny Arcade]

Tycho: Dreaded Continuity - a corrupted phrase from a haunted era - has reared its ugly head yet again, plunging all into darkness.  Or jpegs, at the very least. I saw somebody asking in the responses to the strip on Twitter if we were being paid by Blue Mountain, a digital greeting card site that most people aren’t even aware of. I’d love for that to be true. In fact, we did all of this for nothing! It is not a compelling business case.  In the same way that rock bands seem to enter a portion of their trajectory that passes exclusively through Casinos, I would be…


Holger Levsen: 20190818-cccamp [Planet Debian]

Home again

Two days ago I finally arrived home again and was greeted with this very nice view when entering the area:

(These images were taken yesterday from inside the venue.)

To give an idea of scale, the Pesthörnchen flag on top is 2m wide :)

Since today, there's also a rainbow flag next to the Pesthörnchen one. I'm very much looking forward to the next days, though buildup is big fun already.


Page 37 [Flipside]

Page 37 is done.


Link [Scripting News]

How about a working group chartered to find the elegant and simple language hiding in the mess that JavaScript has become? It's there, I can feel it. First you'd have to get rid of the runtime assumption that functions return before they're finished. It should be possible for a function to explicitly fork off a new process, but it shouldn't do it just because the function made an I/O call. In practice 99.9999 percent of those simply want to wait for the I/O to be done. In all my years programming before JS, I never wanted a language to do flow the way JS does. That in itself is important data, imho. I've implemented a lot of different kinds of software.

News Post: PAX West Merch! [Penny Arcade]

Gabe: PAX West is just around the corner and the Penny Arcade design team has been hard at work making a bunch of cool stuff. If you’re curious about what you’ll be able to pick up at the show this year I’ve got you covered. But that’s not all! You won’t find deals like this at any other PAX West in 2019! I’ll be at PAX with my friend Jerry. I hope to see you there! -Gabe out


Girl Genius for Monday, August 19, 2019 [Girl Genius]

The Girl Genius comic for Monday, August 19, 2019 has been posted.


“What’s their phone number again?” [Seth's Blog]

For more than a decade, I’ve been working with the fine folks at 800 CEO READ (and yes, that’s their phone number, and yes, people have asked me how to reach them.)

It’s where I exclusively sell my book What To Do When It’s Your Turn.

It’s the place my project ChangeThis is still happily thriving.

And it’s the place that makes it easy to buy a big box of books for an event or organization. If you need more than three copies of a title, give them a call and ask. You already know the number.

I’ve never said ‘thank you’ to them here on the blog, so today’s a fine day to do that. They’re changing their name and their website today: PorchlightBooks.

Thanks to the entire team for making the world of books a whole lot easier and more friendly.


Link [Scripting News]

JavaScript drives program writers to outliners, because outliners are adept at dealing with nested callbacks, as opposed to the way most (all?) other languages do program flow through a flat list. In a normal language, the runtime proceeds from statement 1 to 2 and so on. I guess most JS programmers don't know about outliners, or devtools developers don't or whatever -- so instead of letting the "problem" be solved by editors, the language designers try to hack the language to make it better fit the editors we already use, but it just creates a hierarchic chaos that's meant not to look hierarchic. It's a total mess. So now JS is evolving to be Perl, whose motto is "There's more than one way to do it." It's far better for the ecosystem if there's only one way to do everything, but for something as fundamental as program flow, I doubt even Perl has more than one way to do that. So Brendan Eich is not an outliner person, and neither are the people who work on the design of the language. You pretty much have to program JS in an outliner (as I do) to see this, imho. But before they hack the language again, see if the problem isn't the dev tools, not the language. Nothing wrong with callback hell imho if you have the right editor.

Link [Scripting News]

BTW, here's a demo I did of an outliner in a scripting context.


Vsevolod Dyomkin: Programming Algorithms: Linked Lists [Planet Lisp]

Linked data structures are in many ways the opposite of the contiguous ones that we have explored to some extent in the previous chapter using the example of arrays. In terms of complexity, they fail where those shine (first of all, at random access) — but prevail in scenarios where repeated modification is necessary. In general, they are much more flexible and so allow the programmer to represent almost any kind of data structure, although the ones that require such a level of flexibility may not be too frequent. Usually, they are specialized trees or graphs.

The basic linked data structure is a singly-linked list.

Just like arrays, lists in Lisp may be created both with a literal syntax for constants and by calling a function — make-list — that creates a list of a certain size filled with nil elements. Besides, there's a handy list utility that is used to create lists with the specified content (the analog of vec).

CL-USER> '("hello" world 111)
("hello" WORLD 111)
CL-USER> (make-list 3)
(NIL NIL NIL)
CL-USER> (list "hello" 'world 111)
("hello" WORLD 111)

An empty list is represented as () and, interestingly, in Lisp, it is also a synonym of logical falsehood (nil). This property is used very often, and we'll have a chance to see that.
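
A quick illustration of that duality (a minimal sketch using only standard functions):

```lisp
;; The empty list () and nil are the very same object,
;; and that object is also the canonical "false" value.
(assert (eq '() nil))
(assert (null '()))
;; An empty list used as a condition behaves as falsehood:
(assert (eq :empty (if '() :non-empty :empty)))
```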

If we were to introduce our own lists, which may be quite a common scenario in case the built-in ones' capabilities do not suit us, we'd need to define the structure "node", and our list would be built as a chain of such nodes. We might have wanted to store the list head and, possibly, tail, as well as other properties like size. All in all, it would look like the following:

(defstruct list-cell
  data
  next)

(defstruct our-own-list
(head nil :type (or list-cell null))
(tail nil :type (or list-cell null)))

CL-USER> (let ((tail (make-list-cell :data "world")))
:head (make-list-cell
:data "hello"
:next tail)
:tail tail))
:DATA "hello"

Lists as Sequences

Alongside arrays, list is the other basic data structure that implements the sequence abstract data type. Let's consider the complexity of basic sequence operations for linked lists:

  • so-called random access, i.e. access by index of a random element, requires O(n) time as we have to traverse all the preceding elements before we can reach the desired one (n/2 operations on average)
  • yet, once we have reached some element, removing it or inserting something after it takes O(1)
  • subsequencing is also O(n)

Getting the list length, in the basic case, is also O(n) i.e. it requires full list traversal. It is possible, though, to store list length as a separate slot, tracking each change on the fly, which means O(1) complexity. Lisp, however, implements the simplest variant of lists without size tracking. This is an example of a small but important decision that real-world programming is full of. Why is such a solution the right thing™, in this case? Adding the size counter to each list would have certainly made this common length operation more effective, but the cost of doing that would've included: increase in occupied storage space for all lists, a need to update size in all list modification operations, and, possibly, a need for a more complex cons cell implementation[1]. These considerations make the situation with lists almost opposite to arrays, for which size tracking is quite reasonable because they change much less often and not tracking the length historically proved to be a terrible security decision. So, what side to choose? A default approach is to prefer the solution which doesn't completely rule out the alternative strategy. If we were to choose a simple cons-cell sans size (what the authors of Lisp did) we'll always be able to add the "smart" list data structure with the size field, on top of it. Yet, stripping the size field from built-in lists won't be possible. Similar reasoning is also applicable to other questions, such as: why aren't lists, in Lisp, doubly-linked. Also, it helps that there's no security implication as lists aren't used as data exchange buffers, for which the problem manifests itself.

For demonstration, let's add the size field to our-own-list (and, meanwhile, consider all the functions that will need to update it...):

(defstruct our-own-list
  (head nil :type (or list-cell null))
  (tail nil :type (or list-cell null))
  (size 0 :type (integer 0)))

Given that obtaining the length of a list, in Lisp, is an expensive operation, a common pattern in programs that require multiple requests of the length field is to store its value in some variable at the beginning of the algorithm and then use this cached value, updating it if necessary.
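
As a minimal sketch of that caching pattern (the function and its name are made up for illustration):

```lisp
;; Compute the O(n) LENGTH once and reuse the cached value,
;; instead of re-traversing the list on every use.
(defun halves (list)
  (let ((len (length list)))          ; single traversal
    (list (subseq list 0 (floor len 2))
          (subseq list (floor len 2)))))
```

Here length is needed twice, but the list is only walked once: (halves '(1 2 3 4)) returns ((1 2) (3 4)).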

As we see, lists are quite inefficient in random access scenarios. However, many sequences don't require random access and can satisfy all the requirements of a particular use case using just sequential access. That's one of the reasons why they are called sequences, after all. And if we consider the special case of list operations at index 0, they are, obviously, efficient: both access and addition/removal are O(1). Also, if the algorithm requires a sequential scan, list traversal is rather efficient too, although not as good as array traversal, as it still requires chasing memory pointers. There are numerous sequence operations that are based on sequential scans. The most common is map, which we analyzed in the previous chapter. It is the functional programming alternative to looping, a more high-level operation, and thus simpler to understand for the common cases, although less versatile.

map is a function that works with different types of built-in sequences. It takes as the first argument the target sequence type (if nil is supplied it won't create the resulting sequence and so will be used just for side-effects). Here is a polymorphic example involving lists and vectors:

CL-USER> (map 'vector '+
'(1 2 3 4 5)
#(1 2 3))
#(2 4 6)

map applies the function provided as its second argument (here, addition) sequentially to every element of the sequences that are supplied as other arguments, until one of them ends, and records the result in the output sequence. map would have been even more intuitive if it had just used the type of its first sequence argument for the result, i.e. been a "do what I mean" dwim-map, with a separate advanced variant offering result-type selection in the background. Unfortunately, the current standard scheme is not going to change, but we can define our own wrapper function:

(defun dwim-map (fn seq &rest seqs)
"A thin wrapper over MAP that uses the first SEQ's type for the result."
(apply 'map (type-of seq) fn seqs))

map in Lisp is, historically, used for lists. So there's also a number of list-specific map variants that predated the generic map in the earlier versions of the language and are still in wide use today. These include mapcar, mapc, and mapcan (replaced in RUTILS by a safer flat-map). Now, let's see a couple of examples of using mapping. Suppose that we'd like to extract odd numbers from a list of numbers. Using mapcar as a list-specific map, we might try to call it with an anonymous function that tests its argument for oddity and keeps it in that case:

CL-USER> (mapcar (lambda (x) (when (oddp x) x))
(range 1 10))
(1 NIL 3 NIL 5 NIL 7 NIL 9)

However, the problem is that non-odd numbers still have their place reserved in the result list, although it is not filled by them. Keeping only the results that satisfy (or don't satisfy) certain criteria and discarding the others is a very common pattern known as "filtering". There's a set of Lisp functions for such scenarios: remove, remove-if, and remove-if-not, as well as RUTILS' complements to them, keep-if and keep-if-not. We can achieve the desired result by adding remove to the picture:

CL-USER> (remove nil (mapcar (lambda (x) (when (oddp x) x))
(range 1 10)))
(1 3 5 7 9)

A more elegant solution will use the remove-if(-not) or keep-if(-not) variants. remove-if-not is the most popular among these functions. It takes a predicate and a sequence and returns the sequence of the same type holding only the elements that satisfy the predicate:

CL-USER> (remove-if-not 'oddp (range 1 10))
(1 3 5 7 9)

Using such high-level mapping functions is very convenient, which is why there's a number of other -if(-not) operations, like find(-if(-not)), member(-if(-not)), position(-if(-not)), etc.
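
For instance, a few checks with the standard variants (expressed as assertions rather than REPL output):

```lisp
;; FIND-IF returns the first element satisfying the predicate,
;; POSITION-IF its zero-based index, and MEMBER the tail that
;; starts at the sought element.
(assert (= 2 (find-if 'evenp '(1 2 3))))
(assert (= 1 (position-if 'evenp '(1 2 3))))
(assert (equal '(3) (member 3 '(1 2 3))))
```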

The implementation of mapcar or any other list mapping function, including your own task-specific variants, follows the same pattern of traversing the list accumulating the result into another list and reversing it, in the end:

(defun simple-mapcar (fn list)
(let ((rez ()))
(dolist (item list)
(:= rez (cons (call fn item) rez)))
(reverse rez)))

The function cons is used to add an item to the beginning of the list. It creates a new list head that points to the previous list as its tail.

From the complexity point of view, if we compare such iteration with looping over an array, we'll see that it is also a linear traversal, but one that requires twice as many operations as with arrays because we need to traverse the result fully once more, at the end, to reverse it. Its advantage, though, is higher versatility: if we don't know the size of the resulting sequence (for example, in the case of remove-if-not) we don't have to change anything in this scheme and can just add a filter line ((when (oddp item) ...)), while for arrays we'd either need to use a dynamic array (which will need constant resizing and so have at least the same doubled number of operations) or pre-allocate the full-sized result sequence and then downsize it to fit the actual accumulated number of elements, which may be problematic when we deal with large arrays.

Lists as Functional Data Structures

The distinction between arrays and linked lists in many ways reflects the distinction between the imperative and functional programming paradigms. Within the imperative or, in this context, procedural approach, the program is built out of low-level blocks (conditionals, loops, and sequentials) that allow for the most fine-tuned and efficient implementation, at the expense of abstraction level and modularization capabilities. It also heavily utilizes in-place modification and manual resource management to keep overhead at a minimum. An array is the most suitable data-structure for such a way of programming. Functional programming, on the contrary, strives to bring the abstraction level higher, which may come at a cost of sacrificing efficiency (only when necessary, and, ideally, only for non-critical parts). Functional programs are built by combining referentially transparent computational procedures (aka "pure functions") that operate on more advanced data structures (either persistent ones or having special access semantics, e.g. transactional) that are also more expensive to manage but provide additional benefits.

Singly-linked lists are a simple example of functional data structures. A functional or persistent data structure is the one that doesn't allow in-place modification. In other words, to alter the contents of the structure a fresh copy with the desired changes should be created. The flexibility of linked data structures makes them suitable for serving as functional ones. We have seen the cons operation that is one of the earliest examples of non-destructive, i.e. functional, modification. This action prepends an element to the head of a list, and as we're dealing with the singly-linked list the original doesn't have to be updated: a new cons cell is added in front of it with its next pointer referencing the original list that becomes the new tail. This way, we can preserve both the pointer to the original head and add a new head. Such an approach is the basis for most of the functional data structures: the functional trees, for example, add a new head and a new route from the head to the newly added element, adding new nodes along the way — according to the same principle.
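
Here's a small demonstration of that structure sharing (a sketch; the variable names are arbitrary):

```lisp
;; Two "different" lists produced by non-destructive prepending
;; share the same tail: no copying takes place.
(let* ((tail (list 2 3))
       (l1 (cons 1 tail))
       (l2 (cons 0 tail)))
  (assert (equal l1 '(1 2 3)))
  (assert (equal l2 '(0 2 3)))
  ;; EQ proves these are the very same cons cells, not equal copies.
  (assert (eq (cdr l1) (cdr l2))))
```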

It is interesting, though, that lists can be used in destructive and non-destructive fashion alike. There are both low- and high-level functions in Lisp that perform list modification, and their existence is justified by the use cases in many algorithms. Purely functional lists render many of the efficient list algorithms useless. One of the high-level list modification functions is nconc. It concatenates two lists together, updating in the process the next pointer of the last cons cell of the first list:

CL-USER> (let ((l1 (list 1 2 3))
               (l2 (list 4 5 6)))
           (nconc l1 l2)  ; note: no assignment to l1
           l1)            ; but it is still changed
(1 2 3 4 5 6)

There's a functional variant of this operation, append, and, in general, it is considered distasteful to use nconc for two reasons:

  • the risk of unwarranted modification
  • funnily enough, the implementation of nconc isn't even mandated to be more efficient than that of append

So, forget nconc, append all the lists!

With append, we need to modify the previous snippet to capture the result, because otherwise the newly created list would be garbage-collected immediately:

CL-USER> (let ((l1 (list 1 2 3))
               (l2 (list 4 5 6)))
           (:= l1 (append l1 l2))
           l1)
(1 2 3 4 5 6)

The low-level list modification operations are rplaca and rplacd. They can be combined with list-specific accessors nth and nthcdr that provide indexed access to list elements and tails respectively. Here's, for example, how to add an element in the middle of a list:

CL-USER> (let ((l1 (list 1 2 3)))
           (rplacd (nthcdr 0 l1)
                   (cons 4 (nthcdr 1 l1)))
           l1)
(1 4 2 3)

Just to reiterate: although functional list operations are the default choice, efficient implementation of some algorithms will require you to resort to the ugly destructive ones.

Different Kinds of Lists

We have, thus far, seen the most basic linked list variant — a singly-linked one. It has a number of limitations: for instance, it's impossible to traverse it from the end to the beginning. Yet, there are many algorithms that require accessing the list from both sides or do other things with it that are inefficient or even impossible with the singly-linked one, hence other, more advanced, list variants exist.

But first, let's consider an interesting tweak to the regular singly-linked list: a circular list. It can be created from a normal one by making the last cons cell point to the first. It may seem like a problematic data structure to work with, but all the potential issues with infinite looping during traversal are solved if we keep a pointer to any node and stop iteration when we encounter this node for the second time. What's the use of such a structure? Not many, but there's a prominent one: the ring buffer. A ring or circular buffer is a structure that can hold a predefined number of items, where each item is added to the slot following the current one. This way, when the buffer is completely filled, it wraps around to the first element, which will be overwritten at the next modification. By this buffer-filling scheme, the element to be overwritten is the one that was written the earliest among the currently held items. Using a circular linked list is one of the simplest ways to implement such a buffer. Another approach is to use an array of a certain size, moving the pointer to the next item by incrementing an index into the array; obviously, when the index reaches the array size, it should be reset to zero.
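Here is a minimal sketch of the array-based variant (the rbuf struct and the capacity of 3 are our own illustrative choices, not from the chapter's codebase):

```lisp
(defstruct rbuf
  (items (make-array 3 :initial-element nil))  ; fixed capacity of 3, for illustration
  (index 0))

(defun rbuf-add (item buf)
  "Write ITEM into the current slot and advance the index, wrapping around
   to zero when it reaches the capacity."
  (setf (aref (rbuf-items buf) (rbuf-index buf)) item
        (rbuf-index buf) (mod (1+ (rbuf-index buf))
                              (length (rbuf-items buf))))
  buf)
```

Adding 1, 2, 3, and then 4 to a fresh buffer overwrites the oldest element: the items array ends up as #(4 2 3), with the index pointing past the newest item.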

A more advanced list variant is a doubly-linked one, in which all the elements have both the next and previous pointers. The following definition, using inheritance, extends our original list-cell with a pointer to the previous element. Thanks to the basic object-oriented capabilities of structs, it will work with the current definition of our-own-list as well, and allow it to function as a doubly-linked list.

(defstruct (list-cell2 (:include list-cell))
  prev)

Yet, we still haven't shown the implementation of the higher-level operations of adding and removing an element to/from our-own-list. Obviously, they will differ for singly- and doubly-linked lists, and that distinction would require us to differentiate between the two list types, which, in turn, would demand invoking rather heavy OO machinery that is beyond the subject of this book. Instead, for now, let's just examine the basic addition function for the doubly-linked list:

(defun our-cons2 (data list)
  (when (null list) (:= list (make-our-own-list)))
  (let ((new-head (make-list-cell2
                   :data data
                   :next @list.head)))
    (when @list.head  ; a fresh list has no head to back-link
      (:= @list.head.prev new-head))
    (make-our-own-list
     :head new-head
     :tail @list.tail
     :size (1+ @list.size))))

The first thing to note is the use of the @ syntactic sugar, from RUTILS, that implements the mainstream dot notation for slot-value access (i.e. @list.head.prev refers to the prev field of the head field of the provided list structure of the assumed our-own-list type, which in a more classically Lispy, although cumbersome, variants may look like one of the following: (our-cons2-prev (our-own-list-head list)) or (slot-value (slot-value list 'head) 'prev)[2]).

More important here is that, unlike for the singly-linked list, this function requires an in-place modification of the head element of the original list: setting its prev pointer. This immediately makes doubly-linked lists non-persistent.

Finally, the first line is a protection against trying to access a null list (which would otherwise result in the much-feared, especially in Java-land, null-pointer class of error).

At first sight, it may seem that doubly-linked lists are more useful than singly-linked ones. But they also have higher overhead so, in practice, they are used quite sporadically. We may see just a couple of use cases on the pages of this book. One of them is presented in the next part — a double-ended queue.
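One payoff of the extra pointer is O(1) removal of an arbitrary cell, given a direct reference to it, which a singly-linked list cannot do without scanning for the predecessor. Here is a self-contained sketch (the struct definitions repeat the ones used in this chapter; unlink-cell is our own hypothetical helper, written with the standard setf):

```lisp
(defstruct list-cell
  data
  next)

(defstruct (list-cell2 (:include list-cell))
  prev)

(defun unlink-cell (cell)
  "Remove CELL from its doubly-linked chain in O(1) by re-wiring
   the next/prev pointers of its two neighbors."
  (let ((prev (list-cell2-prev cell))
        (next (list-cell-next cell)))
    (when prev (setf (list-cell-next prev) next))
    (when next (setf (list-cell2-prev next) prev))
    next))
```

Given cells a, b, and c linked in a chain, (unlink-cell b) splices b out, leaving a and c pointing directly at each other.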

Besides doubly-linked lists, there are also association lists that serve as a variant of key-value data structures. At least 3 types may be found in Common Lisp code, and we'll briefly discuss them in the chapter on key-value structures. Finally, a skip list is a probabilistic data structure based on singly-linked lists that allows for faster search; we'll also discuss it in a separate chapter on probabilistic structures. Other, more esoteric, list variants, such as self-organizing lists and XOR-lists, may also be found in the literature, but very rarely in practice.


The flexibility of lists allows them to serve as a common choice for implementing a number of popular abstract data structures.


A queue or FIFO has the following interface:

  • enqueue an item at the end
  • dequeue the first element: get it and remove it from the queue

It imposes a first-in-first-out (FIFO) ordering on the elements. A queue can be implemented directly with a singly-linked list like our-own-list. Obviously, it can also be built on top of a dynamic array, but that would require constant expansion and contraction of the collection, which, as we already know, isn't the preferred usage scenario for arrays.

There are numerous uses for the queue structures for processing items in a certain order (some of which we'll see in further chapters of this book).


A stack or LIFO (last-in-first-out) is even simpler than a queue, and it is used even more widely. Its interface is:

  • push an item on top of the stack making it the first element
  • pop an item from the top: get it and remove it from the stack

A simple Lisp list can serve as a stack, and you can see such uses in almost every file with Lisp code. The most common pattern is result accumulation during iteration: using the stack interface, we can rewrite simple-mapcar in an even simpler way (which is idiomatic Lisp):

(defun simple-mapcar (fn list)
  (let ((rez ()))
    (dolist (item list)
      (push (call fn item) rez))
    (reverse rez)))

Stacks hold elements in reverse-chronological order and can thus be used to keep a history of changes in order to undo them. This feature is used by compilers in their procedure calling conventions: there exists a separate segment of program memory called the stack segment, and when a function call happens (beginning from the program's entry point, called the main function in C), all of its arguments and local variables are put on this stack, as well as the return address in the program's code segment where the call was initiated. Such an approach allows for the existence of local variables that last only for the duration of the call: they are referenced relative to the current stack head and are not bound to some absolute position in memory like the global ones. After the procedure call returns, the stack is "unwound" and all the local data is forgotten, returning the context to the same state it was in before the call. Such stack-based history-keeping is a very common and useful pattern that may be utilized in userland code as well.

Lisp itself also uses this trick to implement global variables with the capability to have context-dependent values within the extent of let blocks: each such variable has a stack of values associated with it. This is one of the most underappreciated features of the Lisp language, used quite often by experienced lispers. Here is a small example with the standard global variable (such variables are called special, in Lisp parlance, due to this special property) *standard-output*, which stores a reference to the current output stream:

CL-USER> (print 1)
1
1
CL-USER> (let ((*standard-output* (make-broadcast-stream)))
           (print 1))
1

In the first call to print, we see both the printed value and the returned one, while in the second, only the return value of the print function, as its output is sent, effectively, to /dev/null.

Stacks can be also used to implement queues. We'll need two of them to do that: one will be used for enqueuing the items and another — for dequeuing. Here's the implementation:

(defstruct queue
  head
  tail)

(defun enqueue (item queue)
  (push item @queue.head))

(defun dequeue (queue)
  ;; Here and in the next condition, we use the property that an empty list
  ;; is also logically false. This is discouraged by many Lisp style-guides,
  ;; but, in many cases, such code is not only more compact but also more clear.
  (unless @queue.tail
    (do ()
        ((null @queue.head))  ; this loop continues until the head becomes empty
      (push (pop @queue.head) @queue.tail)))
  ;; By pushing all the items from the head to the tail we reverse their
  ;; order — this second reversal cancels the one performed when the items
  ;; were pushed onto the head, restoring the original order.
  (when @queue.tail
    (values (pop @queue.tail)
            t)))  ; this second value indicates that the queue was not empty

CL-USER> (let ((q (make-queue)))
           (print q)
           (enqueue 1 q)
           (enqueue 2 q)
           (enqueue 3 q)
           (print q)
           (print q)
           (dequeue q)
           (print q)
           (enqueue 4 q)
           (print q)
           (dequeue q)
           (print q)
           (dequeue q)
           (print q)
           (dequeue q)
           (print q)
           (dequeue q))
#S(QUEUE :HEAD NIL :TAIL NIL)
#S(QUEUE :HEAD (3 2 1) :TAIL NIL)
#S(QUEUE :HEAD (3 2 1) :TAIL NIL)
#S(QUEUE :HEAD NIL :TAIL (2 3))
#S(QUEUE :HEAD (4) :TAIL (2 3))
#S(QUEUE :HEAD (4) :TAIL (3))
#S(QUEUE :HEAD (4) :TAIL NIL)
#S(QUEUE :HEAD NIL :TAIL NIL)
NIL ; no second value indicates that the queue is now empty

Such a queue implementation still has amortized O(1) operation times for enqueue/dequeue, since each element undergoes exactly 4 operations over its lifetime: 2 pushes and 2 pops (one pair for the head and one for the tail).

Another stack-based structure is the stack with a minimum element, i.e. some structure that not only holds elements in LIFO order but also keeps track of the minimum among them. The challenge is that if we just add the min slot that holds the current minimum, when this minimum is popped out of the stack we'll need to examine all the remaining elements to find the new minimum. We can avoid this additional work by adding another stack — a stack of minimums. Now, each push and pop operation requires us to also check the head of this second stack and, in case the added/removed element is the minimum, push it to the stack of minimums or pop it from there, accordingly.
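A minimal sketch of such a structure (the names min-stack, min-push, min-pop, and stack-min are our own illustrative choices, written with the standard push/pop):

```lisp
(defstruct min-stack
  items   ; the main stack
  mins)   ; the auxiliary stack of successive minimums

(defun min-push (item stack)
  (push item (min-stack-items stack))
  ;; push onto the min-stack only when ITEM is a new minimum;
  ;; using <= (rather than <) keeps duplicate minimums balanced on pop
  (when (or (null (min-stack-mins stack))
            (<= item (first (min-stack-mins stack))))
    (push item (min-stack-mins stack)))
  item)

(defun min-pop (stack)
  (let ((item (pop (min-stack-items stack))))
    ;; if the popped item was the current minimum, retire it as well
    (when (eql item (first (min-stack-mins stack)))
      (pop (min-stack-mins stack)))
    item))

(defun stack-min (stack)
  "Return the current minimum in O(1)."
  (first (min-stack-mins stack)))
```

For example, after pushing 3, 1, and 2, the minimum is 1; popping 2 keeps it at 1, and popping 1 restores it to 3, all without rescanning the stack.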

A well-known algorithm that illustrates stack usage is fully-parenthesized arithmetic expressions evaluation:

(defun arith-eval (expr)
  "EXPR is a list of symbols that may include:
   square brackets, arithmetic operations, and numbers."
  (let ((ops ())
        (vals ())
        (op nil)
        (val nil))
    (dolist (item expr)
      (case item
        ([ )  ; do nothing
        ((+ - * /) (push item ops))
        (] (:= op (pop ops)
               val (pop vals))  ; VAL holds the right operand
           (case op
             (+ (:+ val (pop vals)))
             ;; for - and /, the operand popped second is the left one
             (- (:= val (- (pop vals) val)))
             (* (:* val (pop vals)))
             (/ (:= val (/ (pop vals) val))))
           (push val vals))
        (otherwise (push item vals))))
    (pop vals)))

CL-USER> (arith-eval '([ 1 + [ [ 2 + 3 ] * [ 4 * 5 ] ] ]))
101


A deque is a short name for a double-ended queue, which can be traversed in both orders: FIFO and LIFO. It has 4 operations: push-front (also called unshift) and push-back, pop-front (also called shift) and pop-back. This structure may be implemented with a doubly-linked list or, like the simple queue, with 2 stacks. The difference, for the 2-stacks implementation, is that items may be pushed back and forth between the head and the tail depending on the direction we're popping from, which results in worst-case linear complexity of these operations when front and back accesses constantly alternate.
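For illustration, here is a sketch of the 2-stacks variant (our own code, using the standard push/pop); note how popping from an empty side first shuttles all the items over from the other side, which is exactly what makes constant front/back alternation worst-case linear:

```lisp
(defstruct deque
  front   ; stack of items closest to the front
  back)   ; stack of items closest to the back

(defun push-front (item dq) (push item (deque-front dq)))
(defun push-back (item dq) (push item (deque-back dq)))

(defun pop-front (dq)
  ;; when the front stack is empty, move everything over from the back,
  ;; reversing the order so the oldest back item ends up on top
  (unless (deque-front dq)
    (loop :while (deque-back dq)
          :do (push (pop (deque-back dq)) (deque-front dq))))
  (pop (deque-front dq)))

(defun pop-back (dq)
  ;; symmetric: refill the back stack from the front when needed
  (unless (deque-back dq)
    (loop :while (deque-front dq)
          :do (push (pop (deque-front dq)) (deque-back dq))))
  (pop (deque-back dq)))
```

Pushing 1 and 2 to the back and 0 to the front yields 0 from pop-front, 2 from pop-back, and 1 from the next pop-front.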

The use case for such a structure is an algorithm that utilizes both direct and reverse ordering: a classic example is work-stealing scheduling, where the main worker processes its queue from the front, while other workers, when idle, may steal the lowest-priority items from the back (to minimize the chance of a conflict over the same job).

Stacks in Action: SAX Parsing

Custom XML parsing is a common task for those who deal with different datasets, as many of them come in XML form, for example, Wikipedia and other Wikidata resources. There are two main approaches to XML parsing:

  • DOM parsing reads the whole document and creates its tree representation in memory. This technique is handy for small documents, but, for huge ones, such as the dump of Wikipedia, it will quickly fill all available memory. Also, dealing with the deep tree structure, if you want to extract only some specific pieces from it, is not very convenient.
  • SAX parsing is an alternative variant that uses the stream approach. The parser reads the document and, upon completing the processing of a particular part, invokes the relevant callback: what to do when an opening tag is read, when a closing one is read, and what to do with the contents of the current element. These actions happen for each tag, and we can think of the whole process as traversing the document tree utilizing the so-called "visitor pattern": when visiting each node, we have a chance to react at the beginning, in the middle, and at the end.

Once you get used to SAX parsing, due to its simplicity, it becomes a tool of choice for processing XML, as well as JSON and other formats that allow for a similar stream parsing approach. Often the simplest parsing pattern is enough: remember the tag we're looking at, and when it matches a set of interesting tags, process its contents. However, sometimes, we need to make decisions based on the broader context. For example, let's say, we have the text marked up into paragraphs, which are split into sentences, which are, in turn, tokenized. To process such a three-level structure, with SAX parsing, we could use the following outline (utilizing CXML library primitives):

(defclass text-sax (sax:sax-parser-mixin)
  ((parags :initform nil :accessor sax-parags)
   (parag :initform nil :accessor sax-parag)
   (sent :initform nil :accessor sax-sent)
   (tag :initform nil :accessor sax-tag)))

(defmethod sax:start-element ((sax text-sax)
                              namespace-uri local-name qname attrs)
  (declare (ignore namespace-uri qname attrs))
  (:= (sax-tag sax) (mkeyw local-name)))

(defmethod sax:end-element ((sax text-sax)
                            namespace-uri local-name qname)
  (declare (ignore namespace-uri qname))
  (with-slots (parags parag sent) sax
    (case (mkeyw local-name)
      (:paragraph (push (reverse parag) parags)
                  (:= parag nil))
      (:sentence (push (reverse sent) parag)
                 (:= sent nil)))))

(defmethod sax:characters ((sax text-sax) text)
  (when (eql :token (sax-tag sax))
    (push text (sax-sent sax))))

(defmethod sax:end-document ((sax text-sax))
  (reverse (sax-parags sax)))

This code will return the accumulated structure of paragraphs from the sax:end-document method. Two stacks, the current sentence and the current paragraph, are used to accumulate intermediate data while parsing. In a similar fashion, another stack of encountered tags might have been used to track our exact position in the document tree, if there were such a necessity. Overall, the more you use SAX parsing, the more you'll realize that stacks are enough to address 99% of the arising challenges.

Lists as Sets

Another very important abstract data structure is a Set. It is a collection that holds each element only once no matter how many times we add it there. This structure may be used in a variety of cases: when we need to track the items we have already seen and processed, when we want to calculate some relations between groups of elements, and so forth.

Basically, its interface consists of set-theoretic operations:

  • add/remove an item
  • check whether an item is in the set
  • check whether a set is a subset of another set
  • union, intersection, difference, etc.

An interesting aspect of sets is that the efficient implementation of element-wise operations (add/remove/member) and of set-wise ones (union/...) requires different concrete data structures, so the choice should be made depending on the dominant use case. One way to implement sets is by using linked lists. Lisp has standard library support for this with the following functions:

  • adjoin to add an item to the list if it's not already there
  • member to check for item presence in the set
  • subsetp for subset relationship query
  • union, intersection, set-difference, and set-exclusive-or for set operations
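A quick illustration of these functions (we skip union here, since the standard leaves the order of its result unspecified):

```lisp
(let ((set '(1 2 3)))
  (list (adjoin 1 set)          ; already present, the list is returned unchanged
        (adjoin 0 set)          ; the new element is consed onto the front
        (member 2 set)          ; returns the tail starting at the item (truthy)
        (subsetp '(1 3) set)))  ; every element of (1 3) is in the set
;; => ((1 2 3) (0 1 2 3) (2 3) T)
```

Note that member returns the tail of the list rather than a plain boolean, which is both truthy and occasionally useful in its own right.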

This approach works well for small sets (up to tens of elements), but it is rather inefficient in general. Adding an item to the set or checking for membership requires O(n) operations, while, in a hash-set (which we'll discuss in the chapter on key-value structures), these are O(1) operations. A naive implementation of union and the other set-theoretic operations requires O(n^2) operations, as we have to compare each element from one set with each one from the other. However, if our set lists are kept in sorted order, the set-theoretic operations can be implemented efficiently in just O(n), where n is the total number of elements in all sets, by performing a single linear scan over the sets in parallel. Using a hash-set will also result in the same complexity.

Here is a simplified implementation of union for sets of numbers built on sorted lists:

(defun sorted-union (s1 s2)
  (let ((rez ()))
    (do ()
        ((and (null s1) (null s2)))
      (let ((i1 (first s1))
            (i2 (first s2)))
        (cond ((null i1) (dolist (i2 s2)
                           (push i2 rez))
                         (:= s2 nil))
              ((null i2) (dolist (i1 s1)
                           (push i1 rez))
                         (:= s1 nil))
              ((= i1 i2) (push i1 rez)
                         (:= s1 (rest s1)
                             s2 (rest s2)))
              ((< i1 i2) (push i1 rez)
                         (:= s1 (rest s1)))
              ;; just T may be used instead
              ;; of the following condition
              ((> i1 i2) (push i2 rez)
                         (:= s2 (rest s2))))))
    (reverse rez)))

CL-USER> (sorted-union '(1 2 3)
                       '(0 1 5 6))
(0 1 2 3 5 6)

This approach may be useful even for unsorted list-based sets, as sorting is merely an O(n * log n) operation. Even better, when the use case primarily requires set-theoretic operations and the number of changes/membership queries is comparatively low, the most efficient technique may be to keep the lists sorted at all times.
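The same single-scan technique carries over to the other set operations. For example, here is a sketch of intersection over sorted lists of numbers (our own code, written with the standard setf instead of RUTILS' := so that it is self-contained):

```lisp
(defun sorted-intersection (s1 s2)
  "Intersect two sorted lists of numbers in a single linear scan."
  (let ((rez ()))
    (loop :while (and s1 s2) :do
      (let ((i1 (first s1))
            (i2 (first s2)))
        (cond ((= i1 i2) (push i1 rez)           ; common element: keep it
                         (setf s1 (rest s1)
                               s2 (rest s2)))
              ((< i1 i2) (setf s1 (rest s1)))    ; advance the lagging list
              (t (setf s2 (rest s2))))))
    (reverse rez)))
```

For instance, (sorted-intersection '(1 2 3 5) '(0 1 5 6)) returns (1 5); unlike union, the scan can stop as soon as either list is exhausted.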

Merge Sort

Speaking of sorting, the algorithms we discussed for arrays in the previous chapter do not work as efficiently for lists, for they are based on the swap operation, which is O(n) in the list case due to the lack of constant-time random access. Thus, another approach is required, and there exist a number of efficient list sorting algorithms, the most prominent of which is Merge sort. It works by splitting the list into two equal parts until we get down to trivial one-element lists and then merging the sorted lists into bigger sorted ones. The merging procedure for sorted lists is efficient, as we've seen in the previous example. A nice feature of this approach is its stability, i.e. the preservation of the original order of equal elements, given a proper implementation of the merge procedure.

(defun merge-sort (list comp)
  (if (null (rest list))
      list
      (let ((half (floor (length list) 2)))
        (merge-lists (merge-sort (subseq list 0 half) comp)
                     (merge-sort (subseq list half) comp)
                     comp))))

(defun merge-lists (l1 l2 comp)
  (let ((rez ()))
    (do ()
        ((and (null l1) (null l2)))
      (let ((i1 (first l1))
            (i2 (first l2)))
        (cond ((null i1) (dolist (i l2)
                           (push i rez))
                         (:= l2 nil))
              ((null i2) (dolist (i l1)
                           (push i rez))
                         (:= l1 nil))
              ((call comp i1 i2) (push i1 rez)
                                 (:= l1 (rest l1)))
              (t (push i2 rez)
                 (:= l2 (rest l2))))))
    (reverse rez)))

The same complexity analysis as for binary search applies to this algorithm. At each level of the recursion tree, we perform O(n) operations: each element is pushed into the resulting list once and reversed once, and there are at most 4 comparison operations per element: 3 null checks and 1 call of the comp function. We also need to perform one copy per element in the subseq operation and take the length of the list on the recursive descent (although the length can be memoized and passed down as a function call argument). This totals to no more than 10 operations per element, which is a constant. And the height of the tree is, as we already know, (log n 2). So, the total complexity is O(n * log n).

Let's now measure the real time needed for such sorting, and let's compare it to the time of prod-sort (with optimal array accessors) from the Arrays chapter:

CL-USER> (with ((lst (random-list 10000))
                (vec (make-array 10000 :initial-contents lst)))
           (print-sort-timings "Prod" 'prod-sort vec)
           (print-sort-timings "Merge " 'merge-sort lst))
= Prodsort of random vector =
Evaluation took:
0.048 seconds of real time
= Prodsort of sorted vector =
Evaluation took:
0.032 seconds of real time
= Prodsort of reverse sorted vector =
Evaluation took:
0.044 seconds of real time
= Merge sort of random vector =
Evaluation took:
0.007 seconds of real time
= Merge sort of sorted vector =
Evaluation took:
0.007 seconds of real time
= Merge sort of reverse sorted vector =
Evaluation took:
0.008 seconds of real time

Interestingly enough, Merge sort turned out to be around 5-7 times faster, although it seems that the number of operations required at each level of recursion is at least 2-3 times bigger than for quicksort. Why we got such a result is left as an exercise to the reader: I'd start by profiling the function calls and looking at where most of the time is spent...

It should be apparent that the merge-lists procedure works in a similar way to set-theoretic operations on sorted lists that we've discussed in the previous part. It is, in fact, provided in the Lisp standard library. Using the standard merge, Merge sort may be written in a completely functional and also generic way to support any kind of sequences:

(defun merge-sort (seq comp)
  (if (or (null seq)  ; avoid the expensive length calculation for empty lists
          (<= (length seq) 1))
      seq
      (let ((half (floor (length seq) 2)))
        (merge (type-of seq)
               (merge-sort (subseq seq 0 half) comp)
               (merge-sort (subseq seq half) comp)
               comp))))

There's still one substantial difference between Merge sort and the array sorting functions: it is not in-place. It requires O(n * log n) additional space in total for the half sublists that are produced at each level of recursion, and sorting and merging them in-place is not possible. There are ways to somewhat reduce this extra space usage, but not to totally eliminate it.

Parallelization of Merge Sort

The extra-space drawback of Merge sort may, however, turn irrelevant if we consider the problem of parallelizing this procedure. The general idea of a parallelized implementation of any algorithm is to split the work in a way that allows reducing the runtime proportionally to the number of workers performing those jobs. In the ideal case, if we have m workers and are able to spread the work evenly, the running time should be reduced by a factor of m. For Merge sort, this would mean just O(n/m * log n). Such an ideal reduction is not always achievable, though, because often there are bottlenecks in the algorithm that require all or some workers to wait for one of them to complete its job.

Here's a trivial parallel Merge sort implementation that uses the eager-future2 library, which adds high-level data parallelism capabilities based on the Lisp implementation's multithreading facilities:

(defun parallel-merge-sort (seq comp)
  (if (or (null seq) (<= (length seq) 1))
      seq
      (with ((half (floor (length seq) 2))
             (thread1 (eager-future2:pexec
                       (merge-sort (subseq seq 0 half) comp)))
             (thread2 (eager-future2:pexec
                       (merge-sort (subseq seq half) comp))))
        (merge (type-of seq)
               (eager-future2:yield thread1)
               (eager-future2:yield thread2)
               comp))))

The eager-future2:pexec procedure submits each merge-sort to the thread pool that manages the multiple CPU threads available in the system and continues program execution without waiting for it to return, while eager-future2:yield pauses execution until the thread performing the appropriate merge-sort returns its result.

When I ran our testing function with both serial and parallel merge sorts on my machine, with 4 CPUs, I got the following result:

CL-USER> (with ((lst1 (random-list 10000))
(lst2 (copy-list lst1)))
(print-sort-timings "Merge " 'merge-sort lst1)
(print-sort-timings "Parallel Merge " 'parallel-merge-sort lst2))
= Merge sort of random vector =
Evaluation took:
0.007 seconds of real time
114.29% CPU
= Merge sort of sorted vector =
Evaluation took:
0.006 seconds of real time
116.67% CPU
= Merge sort of reverse sorted vector =
Evaluation took:
0.007 seconds of real time
114.29% CPU
= Parallel Merge sort of random vector =
Evaluation took:
0.003 seconds of real time
266.67% CPU
= Parallel Merge sort of sorted vector =
Evaluation took:
0.003 seconds of real time
266.67% CPU
= Parallel Merge sort of reverse sorted vector =
Evaluation took:
0.005 seconds of real time
220.00% CPU

We see a speedup of approximately 2x, which is also reflected in the rise of CPU utilization from around 115% to around 250%. These numbers make sense, as the merge procedure is still executed serially and remains the bottleneck. There are more sophisticated ways to achieve an optimal m-times speedup in Merge sort parallelization, but we won't discuss them here due to their complexity.

Lists and Lisp

Historically, Lisp's name originated as an abbreviation of "List Processing", which points both to the significance that lists played in the language's early development and to the fact that flexibility (a major feature of lists) was always a cornerstone of its design. Why are lists important to Lisp? Maybe, originally, it was connected with the availability and good support of this data structure in the language itself. But, quickly, the focus shifted to the fact that, unlike in other languages, Lisp code is fed to the compiler not in a custom string-based format but in the form of nested lists that directly represent the syntax tree. Coupled with superior support for the list data structure, this opens numerous possibilities for programmatic processing of the code itself, which manifest in the macro system, code walkers and generators, etc. So, "List Processing" turns out to be not about lists of data, but about lists of code, which perfectly describes the main distinctive feature of this language...


[1] While, in the Lisp machines, cons cells even had special hardware support, and such change would have made it useless.

[2] Although, for structs, it is implementation-dependent if this will work. In the major implementations, it will.


A cycle of renewal, broken: How Big Tech and Big Media abuse copyright law to slay competition [Cory Doctorow – Boing Boing]

As long we've had electronic mass media, audiences and creators have benefited from periods of technological upheaval that force old gatekeepers to compete with brash newcomers with new ideas about what constitutes acceptable culture and art. Those newcomers eventually became gatekeepers themselves, who then faced their own crop of revolutionaries. But today, the cycle is broken: as media, telecoms, and tech have all grown concentrated, the markets have become winner-take-all clashes among titans who seek to dominate our culture, our discourse and our communications.

How did the cycle end? Can we bring it back? To understand the answers to these questions, we need to consider how the cycle worked — back when it was still working.

How Things Used to Work

In 1950, a television salesman named Robert Tarlton put together a consortium of TV merchants in the town of Lansford, Pennsylvania to erect an antenna tall enough to pull down signals from Philadelphia, about 90 miles to the southeast. The antenna connected to a web of cables that the consortium strung up and down the streets of Lansford, bringing big-city TV to their customers — and making TV ownership for Lansfordites far more attractive. Though hobbyists had been jury-rigging their own "community antenna television" networks since 1948, no one had ever tried to go into business with such an operation. The first commercial cable TV company was born.

The rise of cable over the following years kicked off decades of political controversy over whether the cable operators should be allowed to stay in business, seeing as they were retransmitting broadcast signals without payment or permission and collecting money for the service. Broadcasters took a dim view of people using their signals without permission, which is a little rich, given that the broadcasting industry itself owed its existence to the ability to play sound recordings over the air without permission or payment.

The FCC brokered a series of compromises in the years that followed, coming up with complex rules governing which signals a cable operator could retransmit, which ones they must retransmit, and how much all this would cost. The end result was a second way to get TV, one that made peace with—and grew alongside—broadcasters, eventually coming to dominate how we get cable TV in our homes.

By 1976, cable and broadcasters joined forces to fight a new technology: home video recorders, starting with Sony's Betamax recorders. In the eyes of the cable operators, broadcasters, and movie studios, these were as illegitimate as the playing of records over the air had been, or as retransmitting those broadcasts over cable had been. Lawsuits over the VCR continued for the next eight years. In 1984, the Supreme Court finally weighed in, legalizing the VCR, and finding that new technologies were not illegal under copyright law if they were "capable of substantial noninfringing uses."

It's hard to imagine how controversial the VCR was in its day. MPAA president Jack Valenti made history by attending a congressional hearing where he thundered, "I say to you that the VCR is to the American film producer and the American public as the Boston Strangler is to the woman home alone."

Despite that unequivocal condemnation, home recording is so normal today that your cable operator likely offers to bundle a digital recorder with your subscription. Just as the record companies made peace with broadcasters, and broadcasters made peace with cable, cable has made its peace with home recording.

It's easy to imagine that this is the general cycle of technology: a new technology comes along and rudely shoulders its way into the marketplace, pouring the old wine of the old guard into its shiny new bottles. The old guard insist that these brash newcomers are mere criminals, and demand justice.

The public flocks to the new technology, and, before you know it, the old guard and the newcomers are toasting one another at banquets and getting ready to sue the next vulgarian who has the temerity to enter their market and pour their old wine into even newer bottles.

That's how it used to work, but the cycle has been interrupted.

The Cycle is Broken

In 1998, Congress passed the Digital Millennium Copyright Act, whose Section 1201 bans bypassing a "technical measure" that "controls access" to copyrighted works. The statute does not make an exemption for people who need to bypass a copyright lock to do something legal, so traditional acts of "adversarial interoperability" (making a new thing that plugs into an old thing without asking for permission) can be headed off before they even get started. Once a company adds a digital lock to its products, it can scare away other companies that want to give it the broadcasters-vs-records, cable-vs-broadcasters, VCRs-vs-cable treatment. These challengers will have to overcome their fear that "trafficking" in a "circumvention device" could trigger DMCA 1201's civil damages or even criminal penalties: $500,000 and 5 years in prison... for a first offense.

When companies like Sony made the first analog TV recorders, they focused on what their customer wanted, not what the winners of last year's technological battle thought was proper. That's how we got VCRs that could record off the air or cable (so you could record any show, even major Hollywood movies getting their first broadcast airing) and that allowed recordings made on one VCR to be played on another recorder (so you could bring that movie over to a friend's house to watch with a bowl of popcorn).

Today's digital video products are different. Cable TV, satellite TV, DVDs/HD DVDs/Blu-Ray, and streaming services all use digital locks that scramble their videos. This allows them to threaten any would-be adversarial interoperators with legal reprisals under DMCA 1201, should they have the temerity to make a user-focused recorder for their products. That stifles a lot of common-sense ideas: for example, a recorder that works on all the programs your cable delivers (even pay-per-views and blockbusters); a recorder that lets you store the Christmas videos that Netflix and Amazon Prime take out of rotation at Christmastime so that you have to pay an upcharge to watch them when they're most relevant; or a recorder that lets you record a video and take it over to a friend's house or transfer it to an archival drive so you can be sure you can watch it ten years (or even ten minutes) from now.

Since the first record players, every generation of entertainment technology has been overtaken by a new generation—a generation that allowed new artists to find new audiences, a new generation that overturned the biases and preconceptions of the executives that controlled the industry and allowed for new modes of expression and new ideas.

Today, as markets concentrate—cable, telecoms, movie studios, and tech platforms—the competition is shifting from the short-lived drive to produce the best TV possible to a long-term strategy of figuring out how to use a few successful shows to sell bundles of mediocre ones.

In a world where the cycle that led to the rise of cable and streaming was still in effect, you could record your favorite shows before they were locked behind a rival's paywalls. You could search all the streaming services' catalogs from a single interface and figure out how to make your dollar go farther by automatically assembling a mix of one-off payments and subscriptions. You could stream the videos your home devices received to your phone while you were on the road...and more.

And just as last year's pirates — the broadcasters, the cable operators, the VCR makers — became this year's admirals, the companies that got their start by making new services that centered your satisfaction instead of the goodwill of the entrenched industries would someday grow to be tomorrow's Goliaths, facing a new army of Davids.

Fatalistic explanations for the unchecked rise of today's monopolized markets—things like network effects and first-mover advantage—are not the whole story. They are not unstoppable forces of nature. The cycle of concentration and renewal in media-tech shows us that, whatever role the forces of first-mover advantage and network effects are playing in market concentration, they are abetted by some badly written and oft-abused legal rules.

DMCA 1201 let companies declare certain kinds of competition illegal: adversarial interoperability, one of the most historically tried-and-true methods for challenging dominant companies, can be made into a crime simply by designing products so that connecting to them requires you to bypass a copyright lock. Since DMCA 1201 bans this "circumvention," it also bans any competition that requires circumvention.

That's why we're challenging DMCA 1201 in court: we don't think that companies should be able to make up their own laws, because inevitably, these turn into "Felony Contempt of Business Model."

DMCA 1201 is just one of the laws and policies that have created the thicket that would-be adversarial interoperators run up against when they seek to upend the established hierarchy: software patents, overreaching license agreements, and theories of tortious interference with contractual relations are all so broadly worded and interpreted that they can be used to intimidate would-be competitors no matter how exciting their products are and no matter how big the market for them would be.

(Crossposted from EFF Deeplinks)


Jonathan Dowland: NAS upgrade [Planet Debian]

After 5 years of continuous service, the mainboard in my NAS recently failed (at the worst possible moment). I opted to replace the mainboard with a more modern version of the same idea: ASRock J4105-ITX featuring the Intel J4105, an integrated J-series Celeron CPU, designed to be passively cooled, and I've left the rest of the machine as it was.

In the process of researching which CPU/mainboard to buy, I was pointed at the Odroid-H2: a single-board computer (SBC) designed and marketed for a similar sector to things like the Raspberry Pi (but featuring the exact same CPU as the mainboard I eventually settled on). I've always felt that the case I'm using for my NAS is too large, but didn't want to spend much money on a smaller one. The Odroid-H2 has a number of cheap, custom-made cases for different use-cases, including one for NAS-style work with a very small footprint: the "Case 1". Unfortunately this case positions two disk drives flat, one vertically above the other, and both above the SBC. I was concerned that one drive would heat the other, and that both would cumulatively heat the SBC in that orientation. The case is designed with a fan but I want to avoid requiring one. I had too many bad memories of trying to control the heat in my first NAS, the Thecus n2100, which (by default) oriented the drives in the same way (and for some reason it never occurred to me to rotate that device into the "toaster" orientation).

I've mildly revised my NAS page to reflect the change. Interestingly most of the niggles I was experiencing were all about the old mainboard, so I've moved them on a separate page (J1900N-D3V) in case they are useful to someone.

At some point in the future I hope to spend a little bit of time on the software side of things, as some of the features of my setup are no longer working as they should: I can't remote-decrypt the main disk via SSH on boot, and the first run of any backup fails due to some kind of race condition in the systemd unit dependencies. (The first attempt does not correctly mount the backup partition; the second attempt always succeeds.)


The SuperH-3, part 11: Atomic operations [The Old New Thing]

The SH-3 has a very limited number of read-modify-write operations. To recap:

    AND.B #imm, @(r0, GBR)  ; @(r0 + gbr) &= 8-bit immediate
    OR.B  #imm, @(r0, GBR)  ; @(r0 + gbr) |= 8-bit immediate
    XOR.B #imm, @(r0, GBR)  ; @(r0 + gbr) ^= 8-bit immediate
    TAS.B @Rn               ; T = (@Rn == 0), @Rn |= 0x80

These instructions are “atomic” in the sense that they occur within a single instruction and are hence non-interruptible. Technically, only the last one is truly atomic in the sense that the processor holds the data bus locked for the duration of the instruction.

Let’s not quibble about such details. Let’s just say we’re looking for non-interruptible instructions.
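
Of the four, TAS.B is the classic test-and-set primitive: it reports whether the byte was zero and sets its high bit in one non-interruptible step, which is enough to build a simple lock. As a rough Python model (the dictionary-as-memory and the name tas_b are inventions for this sketch, not anything in the SH-3 toolchain):

```python
# A toy model of TAS.B: a dict stands in for memory.
def tas_b(mem, addr):
    t = (mem[addr] == 0)   # T = (@Rn == 0)
    mem[addr] |= 0x80      # @Rn |= 0x80
    return t

lock = {0: 0}
print(tas_b(lock, 0))  # True: the lock was free and is now taken (byte = 0x80)
print(tas_b(lock, 0))  # False: already taken
lock[0] = 0            # release the lock with a plain store of zero
print(tas_b(lock, 0))  # True again
```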

The SH-3 does not support symmetric multiprocessing, so we don’t have to worry about competing accesses from other main processors (although there may be competing accesses from coprocessors or hardware devices). But how are we going to build atomic increment, decrement, and exchange out of these guys?

Let’s be honest. We can’t.

We’ll have to fake it.

Windows CE takes a different approach from how Windows 98 created atomic operations on a processor that didn’t support them.

On Windows CE, the kernel is in cahoots with the implementations of the interlocked operations. If it discovers that it interrupted a special uninterruptible sequence, it resets the program counter back to the start of the uninterruptible sequence before allowing user mode to resume.¹ In this way, the kernel manufactures multi-instruction uninterruptible sequences.

These sequences have to be carefully written so that they are restartable. This means that they cannot mutate any input parameters, and there are no memory updates until the final instruction in the sequence.

For example, we could try to implement our fake Interlocked­Increment like this:

; on entry:
;   r4 = address to increment
; on exit:
;   r0 = incremented value

    mov.l   @r4, r0     ; load current value    ; (1)
    add     #1, r0      ; increment it          ; (2)
    mov.l   r0, @r4     ; store updated value   ; (3)
    rts                 ; return                ; (4)

We load the current value from memory, add 1, store it back, and return. If this sequence is interrupted at any point, the kernel moves the program counter back to the first instruction and restarts the entire operation.

Let’s walk through the possible interrupts.

  • If interrupted prior to the first instruction, then moving the program counter back to the first instruction has no effect because that’s where it already was. So no problems there.
  • If interrupted prior to the second instruction, then we will perform the mov.l @r4, r0 a second time. Since we haven’t changed r4, this will read the desired memory location. It’s a redundant read, but at least it’s not harmful.
  • If interrupted prior to the third instruction, then we will reload and re-increment the existing value. Again, since we haven’t changed r4, this will read the correct location.
  • If interrupted prior to the fourth instruction, then we’re in trouble. We have already written the updated value back to memory, and restarting the operation will increment it a second time! This code is broken.

Aha, but we forgot about the branch delay slot of the rts instruction, and in fact it’s the branch delay slot that provides our escape hatch: Move the final store into the branch delay slot.

; on entry:
;   r4 = address to increment
; on exit:
;   r0 = incremented value

    mov.l   @r4, r0     ; load current value    ; (1)
    add     #1, r0      ; increment it          ; (2)
    rts                 ; return                ; (3)
    mov.l   r0, @r4     ; store updated value   ; (4)

Okay, let’s run our analysis again.

  • If interrupted prior to the first instruction, our analysis from above is still correct.
  • If interrupted prior to the second instruction, our analysis from above is still correct.
  • If interrupted prior to the third instruction, our analysis from above is still correct.
  • An interrupt between the third and fourth instruction is not possible because the processor disables interrupts between a delayed branch instruction and its delay slot. But if an exception occurred (say, because the memory was copy-on-write), we can safely restart the operation because we haven’t modified r4 or the value in memory at r4.²
  • If interrupted after the fourth instruction, then the program counter isn’t in our special code region, so the kernel won’t restart the sequence.

The branch delay slot saved us!

You never thought you’d see the day when you’d be thankful for a branch delay slot.
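
The whole restart discipline can be modeled in a few lines of Python. This is only a sketch under stated assumptions (each "instruction" is a closure, and one simulated interrupt rewinds execution to step 0); every name here is invented for the illustration, none of it is a Windows CE API:

```python
# Simulate a restartable sequence: run part of it, then "interrupt" and
# rerun the whole thing from the start, as the kernel would.
def run_with_restart(steps, mem, interrupt_before):
    for step in steps[:interrupt_before]:  # first attempt, cut short
        step(mem)
    for step in steps:                     # kernel rewound the PC: rerun all
        step(mem)
    return mem

def make_increment(store_last):
    """Fake InterlockedIncrement; store_last=True models the fixed layout
    with the store riding in the rts delay slot (store is the final step)."""
    regs = {}
    def load(mem):  regs["r0"] = mem["x"]   # mov.l @r4, r0
    def add(mem):   regs["r0"] += 1         # add   #1, r0
    def store(mem): mem["x"] = regs["r0"]   # mov.l r0, @r4
    def rts(mem):   pass                    # rts
    return [load, add, rts, store] if store_last else [load, add, store, rts]

# Interrupt just before the fourth step of each layout:
print(run_with_restart(make_increment(False), {"x": 0}, 3)["x"])  # 2: broken, double increment
print(run_with_restart(make_increment(True),  {"x": 0}, 3)["x"])  # 1: correct, no store had happened yet
```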

The kernel puts these special uninterruptible sequences in a contiguous region of memory. Let’s say that it starts each special uninterruptible sequence on a 16-byte boundary. This means that the “special uninterruptible sequence detector” can go something like this:

    mov.l   @(usermode_pc), r0          ; see where we're returning to
    mov.l   #start_of_sequences, r1     ; the start of our special sequences
    mov     #length_of_sequences, r2    ; the size in bytes
    sub     r1, r0
    cmp/hs  r0, r2                      ; is it in the magic region?
    bt      fixme                       ; Y: then go fix it
    ... continue as usual ...

fixme:
    mov     #-16, r2                    ; mask out the bottom 4 bits
    and     r2, r0                      ; to go back to start of special sequence
    add     r1, r0                      ; convert from offset back to address
    bra     return_to_user_mode
    mov.l   r0, @(usermode_pc)          ; update user mode program counter

This is not actually how it goes, but it gives you the basic idea. In reality, the special uninterruptible sequences start on 8-byte boundaries, in order to pack them more tightly. Sequences that are longer than 4 instructions need to be arranged so that every 8 bytes is a valid restart point. I just used 16-byte sequences to make the explanation simpler.
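
The rounding arithmetic the detector performs can be sketched in Python (the region constants and the function name are invented for illustration, and the 16-byte alignment matches the simplified explanation above rather than the real 8-byte packing):

```python
# If the interrupted user-mode PC lies inside the region of special
# sequences, round it down to the boundary that starts its sequence.
SEQ_START = 0x8000   # hypothetical start of the sequence region
SEQ_LEN   = 0x100    # hypothetical size of the region in bytes

def fix_usermode_pc(pc, start=SEQ_START, length=SEQ_LEN, align=16):
    offset = pc - start
    if 0 <= offset < length:                    # in the magic region?
        return start + (offset & ~(align - 1))  # rewind to sequence start
    return pc                                   # not ours; leave it alone

print(hex(fix_usermode_pc(0x8026)))  # 0x8020: rewound to its sequence start
print(hex(fix_usermode_pc(0x9000)))  # 0x9000: outside the region, untouched
```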

For example, Interlocked­Compare­Exchange really went like this:

; on entry:
;   r4 = address of value to test
;   r5 = replacement value (if current value matches expected value)
;   r6 = expected value
; on exit:
;   r0 = previous value

    mov.l   @r4, r0     ; load current value
    cmp/eq  r0, r6      ; is it the expected value?
    bf      nope        ; Nope, just return current value
    mov.l   r5, @r4     ; Store the replacement value
nope:
    rts                 ; return
    nop                 ; (delay slot)

There is a second restart point after four instructions, at the rts, and it’s okay to restart there because the operation is complete. All we’re doing is returning to our caller.
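
The semantics this sequence implements can be modeled in a few lines of Python (a single-threaded sketch with a dict standing in for memory; on the SH-3 it is the kernel's restart trick, not anything in this model, that makes the real sequence effectively atomic):

```python
# Model of InterlockedCompareExchange semantics.
def interlocked_compare_exchange(mem, addr, replacement, expected):
    previous = mem[addr]           # mov.l  @r4, r0
    if previous == expected:       # cmp/eq r0, r6
        mem[addr] = replacement    # mov.l  r5, @r4
    return previous                # the previous value is always returned

vals = {0x100: 5}
print(interlocked_compare_exchange(vals, 0x100, 9, 5))  # 5; vals[0x100] is now 9
print(interlocked_compare_exchange(vals, 0x100, 7, 5))  # 9; no store this time
```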

This trick for creating restartable multi-instruction sequences was not unique to the SH-3. Windows CE employed it to synthesize pseudo-atomic operations for other processors, too.

One curious side effect of this design for restartable multi-instruction sequences is that you can’t debug them! If you try to single-step through these multi-instruction sequences, you’ll get stuck on the first instruction: The breakpoint will fire, and the kernel will reset the program counter back to the first instruction.

Next time, we’ll look at the Windows CE calling convention.

Bonus chatter: The SH-4A processor added load-locked and store-conditional instructions, bringing it in line with other RISC processors.

    MOVLI.L @Rm,r0          ; Load from @Rm, remember lock
    MOVCO.L r0,@Rn          ; Store to @Rn provided lock is still valid
                            ; T = 1 if store succeeded, 0 if failed
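
A load-locked/store-conditional pair is typically wrapped in a retry loop: reload and retry until the conditional store succeeds. The Python model below is illustrative only (class and method names are invented); it fakes the reservation with a flag that any competing write clears, not the SH-4A's actual reservation hardware.

```python
# Toy memory with a fake load-locked/store-conditional reservation.
class LLSCMemory:
    def __init__(self, value):
        self.value = value
        self.reserved = False
    def load_locked(self):             # MOVLI.L @Rm, r0
        self.reserved = True
        return self.value
    def store_conditional(self, new):  # MOVCO.L r0, @Rn
        if not self.reserved:
            return False               # T = 0: reservation was lost
        self.value = new
        self.reserved = False
        return True                    # T = 1: store succeeded
    def plain_store(self, new):        # a competing writer
        self.value = new
        self.reserved = False

def atomic_increment(mem):
    while True:                        # retry around the MOVLI/MOVCO pair
        old = mem.load_locked()
        if mem.store_conditional(old + 1):
            return old + 1

m = LLSCMemory(41)
print(atomic_increment(m))  # 42
```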

Bonus chatter 2: What about the TEB? Where does Windows keep per-thread information?

Turns out this is easier than it sounds. The SH-3 doesn’t support symmetric multiprocessing, so there is only one processor, which therefore can be executing only one thread at a time. A pointer to the per-thread information is stored at a fixed location, and that pointer is updated at each thread switch.

¹ Fast Mutual Exclusion for Uniprocessors. Brian Bershad, David Redell, and John Ellis, Proceedings of the fifth international conference on Architectural support for programming languages and operating systems, 1992.

² Suppose an exception occurs in the delay slot because the memory isn’t writable, and the exception handler fixes the problem (by making the memory writable on demand). Resuming execution will rewind the instruction pointer back to the start of the sequence because the memory value may have changed as part of handling the exception.


The post The SuperH-3, part 11: Atomic operations appeared first on The Old New Thing.


The TSA strip searched a grandmother on Mother's Day and now says that she's overreacting because it's no different from a locker room [Cory Doctorow – Boing Boing]

Last Mother's Day, grandmother Rhonda Mengert was subjected to a pat-down search at Tulsa airport, wherein a TSA agent felt a panty-liner in her underwear; she was then forced to strip down and show her panty-liner to a female TSA agent. Naturally, she filed suit against the TSA.

Now, the TSA has filed a reply brief in which they assert that Mengert is overreacting because "the intrusion on her privacy was no more severe than what could be routinely experienced in a women’s locker room, where states of partial undress and feminine hygiene products are subject to observation by other members of the same gender."

Mengert's lawyer is Jon Corbett, who first came to the public eye with a high-profile demonstration that he could smuggle lethal metal weapons through the TSA's full-body scanners. He writes, "Is a rape victim’s trauma no greater than what they would have had during consensual sex? Can peeping toms now use this same defense? If not, then how can one possibly argue that having 2 uniformed federal employees force my client into a back room to show them her most intimate areas is in any way comparable to one voluntarily using a locker room?"

The difference between “extreme and outrageous” and “just locker room embarrassment,” Ms. Zintgraff, is consent. And respectfully, while I don’t personally have a lot of experience with women’s locker room etiquette, I must assume that inspecting each other’s pads is generally not a part of the experience. At least DOJ attorneys have moved on from arguing that kids detained for weeks don’t need blankets or toothbrushes… it’s just unfortunate that they’ve now taken up selling out on women’s rights in order to avoid paying a woman who they violated.

TSA: Forced Strip-Search No More Offensive Than Voluntarily Using a Locker Room [Jon Corbett/Professional Troublemaker]

(Thanks, Sai and Jonathan!)

(Image: Maandverband.jpg, CC BY-SA)


Link [Scripting News]

I think private Facebook groups are filling in the gaps more than most professional news people are aware.


Stapelberg: distri: a Linux distribution to research fast package management []

Michael Stapelberg has announced the first release of "distri", a distribution focused on simplifying and accelerating package management. "distri’s package manager is extremely fast. Its main bottleneck is typically the network link, even at high speed links (I tested with a 100 Gbps link). Its speed comes largely from an architecture which allows the package manager to do less work."

Security updates for Monday []

Security updates have been issued by CentOS (kernel and openssl), Debian (ffmpeg, golang-1.11, imagemagick, kde4libs, openldap, and python3.4), Fedora (gradle, hostapd, kdelibs3, and mgetty), Gentoo (adobe-flash, hostapd, mariadb, patch, thunderbird, and vlc), Mageia (elfutils, mariadb, mythtv, postgresql, and redis), openSUSE (chromium, kernel, LibreOffice, and zypper, libzypp and libsolv), Oracle (ghostscript), Red Hat (rh-php71-php), SUSE (bzip2, evince, firefox, glib2, glibc, java-1_8_0-openjdk, polkit, postgresql10, python3, and squid), and Ubuntu (firefox).

A new chair for the openSUSE board []

Richard Brown has announced that he is stepping down as the chair of the openSUSE board. "I have absolute confidence in the openSUSE Board; Indeed, I don't think I would be able to make this decision at this time if I wasn't certain that I was leaving openSUSE in good hands. On that note, SUSE has appointed Gerald Pfeifer as my replacement as Chair. Gerald is SUSE's EMEA-based CTO, with a long history as a Tumbleweed user, an active openSUSE Member, and upstream contributor/maintainer in projects like GCC and Wine."

Kernel prepatch 5.3-rc5 []

Linus has released the 5.3-rc5 kernel prepatch, saying: "It's been calm, and nothing here stands out, except perhaps some of the VM noise where we un-reverted some changes wrt node-local vs hugepage allocations."

The Rise of “Bulletproof” Residential Networks [Krebs on Security]

Cybercrooks increasingly are anonymizing their malicious traffic by routing it through residential broadband and wireless data connections. Traditionally, those connections have been mainly hacked computers, mobile phones, or home routers. But this story is about so-called “bulletproof residential VPN services” that appear to be built by purchasing or otherwise acquiring discrete chunks of Internet addresses from some of the world’s largest ISPs and mobile data providers.

In late April 2019, KrebsOnSecurity received a tip from an online retailer who’d seen an unusual number of suspicious transactions originating from a series of Internet addresses assigned to a relatively new Internet provider based in Maryland called Residential Networking Solutions LLC.

Now, this in itself isn’t unusual; virtually every provider has the occasional customers who abuse their access for fraudulent purposes. But upon closer inspection, several factors caused me to look more carefully at this company, also known as “Resnet.”

An examination of the IP address ranges assigned to Resnet shows that it maintains an impressive stable of IP blocks — totaling almost 70,000 IPv4 addresses — many of which had until quite recently been assigned to someone else.

Most interestingly, about ten percent of those IPs — more than 7,000 of them — had until late 2018 been under the control of AT&T Mobility. Additionally, the WHOIS registration records for each of these mobile data blocks suggest Resnet has been somehow reselling data services for major mobile and broadband providers, including AT&T, Verizon, and Comcast Cable.

The WHOIS records for one of several networks associated with Residential Networking Solutions LLC.

Drilling down into the tracts of IPs assigned to Resnet’s core network indicates those 7,000+ mobile IP addresses under Resnet’s control were given the label “Service Provider Corporation” — mostly those beginning with IPs in the range 198.228.x.x.

An Internet search reveals this IP range is administered by the Wireless Data Service Provider Corporation (WDSPC), a non-profit formed in the 1990s to manage IP address ranges that could be handed out to various licensed mobile carriers in the United States.

Back when the WDSPC was first created, there were quite a few mobile wireless data companies. But today the vast majority of the IP space managed by the WDSPC is leased by AT&T Mobility and Verizon Wireless — which have gradually acquired most of their competing providers over the years.

A call to the WDSPC revealed the nonprofit hadn’t leased any new wireless data IP space in more than 10 years. That is, until the organization received a communication at the beginning of this year that it believed was from AT&T, which recommended Resnet as a customer who could occupy some of the company’s mobile data IP address blocks.

“I’m afraid we got duped,” said the person answering the phone at the WDSPC, while declining to elaborate on the precise nature of the alleged duping or the medium that was used to convey the recommendation.

AT&T declined to discuss its exact relationship with Resnet — or if indeed it ever had one to begin with. It responded to multiple questions about Resnet with a short statement that said, “We have taken steps to terminate this company’s services and have referred the matter to law enforcement.”

Why exactly AT&T would forward the matter to law enforcement remains unclear. But it’s not unheard of for hosting providers to forge certain documents in their quest for additional IP space, and anyone caught doing so via email, phone or fax could be charged with wire fraud, which is a federal offense that carries punishments of up to $500,000 in fines and as much as 20 years in prison.


The WHOIS registration records for Resnet’s main Web site, resnetworking[.]com, are hidden behind domain privacy protection. However, a cursory Internet search on that domain turned up plenty of references to it on Hackforums[.]net, a sprawling community that hosts a seemingly never-ending supply of up-and-coming hackers seeking affordable and anonymous ways to monetize various online moneymaking schemes.

One user in particular — a Hackforums member who goes by the nickname “Profitvolt” — has spent several years advertising resnetworking[.]com and a number of related sites and services, including “unlimited” AT&T 4G/LTE data services, and the immediate availability of more than 1 million residential IPs that he suggested were “perfect for botting, shoe buying.”

The Hackforums user “Profitvolt” advertising residential proxies.

Profitvolt advertises his mobile and residential data services as ideal for anyone who wishes to run “various bots,” or “advertising campaigns.” Those services are meant to provide anonymity when customers are doing things such as automating ad clicks on platforms like Google Adsense and Facebook; generating new PayPal accounts; sneaker bot activity; credential stuffing attacks; and different types of social media spam.

For readers unfamiliar with this term, “shoe botting” or “sneaker bots” refers to the use of automated bot programs and services that aid in the rapid acquisition of limited-release, highly sought-after designer shoes that can then be resold at a profit on secondary markets. All too often, it seems, the people who profit the most in this scheme are using multiple sets of compromised credentials from consumer accounts at online retailers, and/or stolen payment card data.

To say shoe botting has become a thorn in the side of online retailers and regular consumers alike would be a major understatement: A recent State of The Internet Security Report (PDF) from Akamai (an advertiser on this site) noted that such automated bot activity now accounts for almost half of the Internet bandwidth directed at online retailers. The prevalence of shoe botting also might help explain Footlocker‘s recent $100 million investment in the largest secondary shoe resale market on the Web.

In other discussion threads, Profitvolt advertises he can rent out an “unlimited number” of so-called “residential proxies,” a term that describes home or mobile Internet connections that can be used to anonymously relay Internet traffic for a variety of dodgy deals.

From a ne’er-do-well’s perspective, the beauty of routing one’s traffic through residential IPs is that few online businesses will bother to block malicious or suspicious activity emanating from them.

That’s because in general the pool of IP addresses assigned to residential or mobile wireless connections cycles intermittently from one user to the next, meaning that blacklisting one residential IP for abuse or malicious activity may only serve to then block legitimate traffic (and e-commerce) from the next user who gets assigned that same IP.


In one early post on Hackforums, Profitvolt laments the untimely demise of various “bulletproof” hosting providers over the years, from the Russian Business Network and Atrivo/Intercage, to McColo, 3FN and Troyak, among others.

All of these Internet providers had one thing in common: They specialized in cultivating customers who used their networks for nefarious purposes — from operating botnets and spamming to hosting malware. They were known as “bulletproof” because they generally ignored abuse complaints, or else blamed any reported abuse on a reseller of their services.

In that Hackforums post, Profitvolt bemoans that “mediums which we use to distribute [are] locking us out and making life unnecessarily hard.”

“It’s still sketchy, so I am not going all out to reveal my plans, but currently I am starting off with a 32 GB RAM server with a 1 GB unmetered up-link in a Caribbean country,” Profitvolt told forum members, while asking in different Hackforums posts whether there are any other users from the dual-island Caribbean nation of Trinidad and Tobago on the forum.

“To be quite honest, the purpose of this is to test how far we can stretch the leniency before someone starts asking questions, or we start receiving emails,” Profitvolt continued.

Hackforums user Profitvolt says he plans to build his own “bulletproof” hosting network catering to fellow forum users who might want to rent his services for a variety of dodgy activities.

KrebsOnSecurity started asking questions of Resnet after stumbling upon several indications that this company was enabling different types of online abuse in bite-sized monthly packages. The site resnetworking[.]com appears normal enough on the surface, but a review of the customer packages advertised on it suggests the company has courted a very specific type of client.

“No bullshit, just proxies,” reads one (now hidden or removed) area of the site’s shopping cart. Other promotions advertise the use of residential proxies to promote “growth services” on multiple social media platforms including Craigslist, Facebook, Google, Instagram, Spotify, Soundcloud and Twitter.

Resnet also peers with or partners with several other interesting organizations, including:

residential-network[.]com, also known as “IAPS Security Services” (formerly intl-alliance[.]com), which advertises the sale of residential VPNs and mobile 4G/IPv6 proxies aimed at helping customers avoid being blocked while automating different types of activity, from mass-creating social media and email accounts to bulk message sending on platforms like WhatsApp and Facebook.

Laksh Cybersecurity and Defense LLC, which maintains Hexproxy[.]com, another residential proxy service that largely courts customers involved in shoe botting.

Several chunks of IP space from a Russian provider variously known by the names “SERVERSGET” and “Men Danil Valentinovich,” which has been associated with numerous instances of hijacking vast swaths of IP addresses from other organizations quite recently.

Some of Profitvolt’s discussion threads on Hackforums.


Resnetworking[.]com lists on its home page the contact phone number 202-643-8533. That number is tied to the registration records for several domains, including resnetworking[.]com, residentialvpn[.]info, and residentialvpn[.]org. All of those domains also have in their historic WHOIS records the name Joshua Powder and Residential Networking Solutions LLC.

Running a reverse WHOIS lookup on “Joshua Powder” turns up almost 60 domain names — most of them tied to a single email address. Among those are resnetworking[.]info, resvpn[.]com/net/org/info, tobagospeaks[.]com, tthack[.]com and profitvolt[.]com. Recall that “Profitvolt” is the nickname of the Hackforums user advertising resnetworking[.]com.

The same email address was used to register an account on the scammer-friendly site blackhatworld[.]com under the nickname “BulletProofWebHost.” Here’s a list of domains registered to this email address.

A search on the Joshua Powder and tthack email addresses at Hyas, a startup that specializes in combining data from a number of sources to provide attribution of cybercrime activity, further associates those with the phone number 868-360-9983, which is a mobile number assigned by Digicel Trinidad and Tobago Ltd. A full list of domains tied to that 868- number is here.

Hyas’s service also pointed to this post on the Facebook page of the Prince George’s County Economic Development Corporation in Maryland, which appears to include a 2017 photo of Mr. Powder posing with county officials.


Roughly three weeks ago, KrebsOnSecurity called the 202 number listed at the top of resnetworking[.]com. To my surprise, a man speaking in a lovely Caribbean-sounding accent answered the call and identified himself as Josh Powder. When I casually asked from where he’d acquired that accent, Powder said he was a native of New Jersey but allowed that he has family members who now live in Trinidad and Tobago.

Powder said Residential Networking Solutions LLC is “a normal co-location Internet provider” that has been in operation for about three years and employs some 65 people.

“You’re not the first person to call us about residential VPNs,” Powder said. “In the past, we did have clients that did host VPNs, but it’s something that’s been discontinued since 2017. All we are is a glorified solutions provider, and we broker and lease Internet lines from different companies.”

When asked about the various “botting” packages for sale on Resnetworking[.]com, Powder replied that the site hadn’t been updated in a while and that these were inactive offers that resulted from a now-discarded business model.

“When we started back in 2016, we were really inexperienced, and hired some SEO [search engine optimization] firms to do marketing,” he explained. “Eventually we realized that this was creating a shitstorm, because it started to make us look a specific way to certain people. So we had to really go through a process of remodeling. That process isn’t complete, and the entire web site is going to retire in about a week’s time.”

Powder maintains that his company does have a contract with AT&T to resell LTE and 4G data services, and that he has a similar arrangement with Sprint. He also suggested that one of the aforementioned companies which partnered with Resnet — IAPS Security Services — was responsible for much of the dodgy activity that previously brought his company abuse complaints and strange phone calls about VPN services.

“That guy reached out to us and he leased service from us and nearly got us into a lot of trouble,” Powder said. “He was doing a lot of illegal stuff, and I think there is an ongoing matter with him legally. That’s what has caused us to be more vigilant and really look at what we do and change it. It attracted too much nonsense.”

Interestingly, when one visits IAPS Security Services’ old domain — intl-alliance[.]com — it now forwards to resvpn[.]com, which is one of the domains registered to Joshua Powder.

Shortly after our conversation, the monthly packages I asked Powder about that were for sale on resnetworking[.]com disappeared from the site, or were hidden behind a login. Also, Resnet’s IPv6 prefixes (a la IAPS Security Services) were removed from the company’s list of addresses. At the same time, a large number of Profitvolt’s posts prior to 2018 were deleted from Hackforums.


It appears that the future of low-level abuse targeting some of the most popular Internet destinations is tied to the increasing willingness of the world’s biggest ISPs to resell discrete chunks of their address space to whomever is able to pay for them.

Earlier this week, I had a Skype conversation with an individual who responded to my requests for more information from residential-network[.]com, and this person told me that plenty of mobile and land-line ISPs are more than happy to sell huge amounts of IP addresses to just about anybody.

“Mobile providers also sell mass services,” the person who responded to my Skype request offered. “Rogers in Canada just opened a new package for unlimited 4G data lines and we’re currently in negotiations with them for that service as well. The UK also has 4G providers that have unlimited data lines as well.”

The person responding to my Skype messages said they bought most of their proxies from a reseller at customproxysolutions[.]com, which advertises “the world’s largest network of 4G LTE modems in the United States.”

He added that “Rogers in Canada has a special offer that if you buy more than 50 lines you get a reduced price lower than the $75 Canadian Dollar price tag that they would charge for fewer than 50 lines. So most mobile ISPs want to sell mass lines instead of single lines.”

It remains unclear how much of the Internet address space claimed by these various residential proxy and VPN networks has been acquired legally or through other means. But it seems that Resnet and its business associates are in fact on the cutting edge of what it means to be a bulletproof Internet provider today.
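The story above turns on who controls which chunks of address space. As a minimal illustration (not tied to any real allocation), Python's standard ipaddress module can test whether a client address falls inside a list of flagged prefixes; the prefixes below are reserved documentation ranges used purely as stand-ins:

```python
import ipaddress

# Hypothetical flagged prefixes; real data would come from WHOIS/RDAP
# records or BGP routing tables, not a hard-coded list.
FLAGGED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),      # IPv4 documentation range
    ipaddress.ip_network("2001:db8:abcd::/48"),  # IPv6 documentation range
]

def in_flagged_space(addr: str) -> bool:
    """True if addr falls inside any flagged prefix of the same IP version."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in FLAGGED_PREFIXES if net.version == ip.version)

print(in_flagged_space("203.0.113.77"))  # True
print(in_flagged_space("198.51.100.1"))  # False
```

Abuse desks and fraud teams do membership checks like this at scale, which is why proxy sellers prize "residential" prefixes that aren't yet on anyone's list.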


Link [Scripting News]

As a commercial software developer I had heard about InfoWorld's review guidelines, written in 1994, but had not seen them until yesterday when Harry McCracken, a former member of their review board, posted an excerpt to Twitter. I asked if I could have a copy of the full manual so I could get them into the archive of my blog, and he kindly provided them. If anyone wants to reboot software reviews this would be a good place to start. In any case it's good to have this archived for future reference.


Top 10 Most Pirated Movies of The Week on BitTorrent – 08/19/19 [TorrentFreak]

This week we have three newcomers in our chart.

Aladdin is the most downloaded movie.

The data for our weekly download chart is estimated by TorrentFreak, and is for informational and educational reference only. All the movies in the list are Web-DL/Webrip/HDRip/BDrip/DVDrip unless stated otherwise.

RSS feed for the articles of the recent weekly movie download charts.

This week’s most downloaded movies are:
Rank (last week) | Movie name | IMDb rating / trailer
1 (1) Aladdin 7.3 / trailer
2 (8) Godzilla: King of the Monsters 6.5 / trailer
3 (3) Avengers: Endgame 8.7 / trailer
4 (2) The Hustle 5.3 / trailer
5 (…) The Secret Life of Pets 2 6.5 / trailer
6 (5) Rocketman 7.6 / trailer
7 (4) Brightburn 6.2 / trailer
8 (…) Ma 5.8 / trailer
9 (…) John Wick: Chapter 3 – Parabellum 7.8 / trailer
10 (10) Shazam! 7.3 / trailer

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.


Four short links: 19 August 2019 [All - O'Reilly Media]

Developer Tool, Deep Fakes, DNA Tests, and Retro Coding Hacks

  1. CROKAGE: A New Way to Search Stack Overflow -- a paper about a service [that] takes the description of a programming task as a query and then provides relevant, comprehensive programming solutions containing both code snippets and their succinct explanations. There's a replication package on GitHub. Follows in the footsteps of Douglas Adams's Electric Monk (which people bought to believe things for them) and DVRs (which people use to watch TV for them); now we have software that'll copy dodgy code from the web for you. Programmers, software is coming for your jobs.
  2. Cheap Fakes Beat Deep Fakes -- One of the fundamental rules of information warfare is that you never lie (except when necessary). Deepfakes are detectable as artificial content, which reveals the lie. This discredits the source of the information and the rest of their argument. For an information warfare campaign, using deepfakes is a high-risk proposition.
  3. I Took 9 Different Commercial DNA Tests and Got 6 Different Results -- refers to the dubious ancestry measures. "Ancestry itself is a funny thing, in that humans have never been these distinct groups of people," said Alexander Platt, an expert in population genetics at Temple University in Philadelphia. "So, you can't really say that somebody is 92.6 percent descended from this group of people when that's not really a thing."
  4. Dirty Tricks 6502 Programmers Use -- wonderfully geeky dissection of a simple task rendered in as few bytes as possible.

Continue reading Four short links: 19 August 2019.


Influence Operations Kill Chain [Schneier on Security]

Influence operations are elusive to define. The Rand Corp.'s definition is as good as any: "the collection of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent." Basically, we know it when we see it, from bots controlled by the Russian Internet Research Agency to Saudi attempts to plant fake stories and manipulate political debate. These operations have been run by Iran against the United States, Russia against Ukraine, China against Taiwan, and probably lots more besides.

Since the 2016 US presidential election, there has been an endless series of ideas about how countries can defend themselves. It's time to pull those together into a comprehensive approach to defending the public sphere and the institutions of democracy.

Influence operations don't come out of nowhere. They exploit a series of predictable weaknesses -- and fixing those holes should be the first step in fighting them. In cybersecurity, this is known as a "kill chain." That can work in fighting influence operations, too -- laying out the steps of an attack and building the taxonomy of countermeasures.

In an exploratory blog post, I first laid out a straw man information operations kill chain. I started with the seven commandments, or steps, laid out in a 2018 New York Times opinion video series on "Operation Infektion," a 1980s Russian disinformation campaign. The information landscape has changed since the 1980s, and these operations have changed as well. Based on my own research and feedback from that initial attempt, I have modified those steps to bring them into the present day. I have also changed the name from "information operations" to "influence operations," because the former is traditionally defined by the US Department of Defense in ways that don't really suit these sorts of attacks.

Step 1: Find the cracks in the fabric of society -- the social, demographic, economic, and ethnic divisions. For campaigns that just try to weaken collective trust in government's institutions, lots of cracks will do. But for influence operations that are more directly focused on a particular policy outcome, only those related to that issue will be effective.

Countermeasures: There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the "common political knowledge" necessary for democracies to function. That shared knowledge has to be strengthened, thereby making it harder to exploit the inevitable cracks. It needs to be made unacceptable -- or at least costly -- for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and to highlight and encourage cooperation when politicians honestly work across party lines. The public must learn to become reflexively suspicious of information that makes them angry at fellow citizens. These cracks can't be entirely sealed, as they emerge from the diversity that makes democracies strong, but they can be made harder to exploit. Much of the work in "norms" falls here, although this is essentially an unfixable problem. This makes the countermeasures in the later steps even more important.

Step 2: Build audiences, either by directly controlling a platform (like RT) or by cultivating relationships with people who will be receptive to those narratives. In 2016, this consisted of creating social media accounts run either by human operatives or automatically by bots, making them seem legitimate, gathering followers. In the years following, this has gotten subtler. As social media companies have gotten better at deleting these accounts, two separate tactics have emerged. The first is microtargeting, where influence accounts join existing social circles and only engage with a few different people. The other is influencer influencing, where these accounts only try to affect a few proxies (see step 6) -- either journalists or other influencers -- who can carry their message for them.

Countermeasures: This is where social media companies have made all the difference. By allowing groups of like-minded people to find and talk to each other, these companies have given propagandists the ability to find audiences who are receptive to their messages. Social media companies need to detect and delete accounts belonging to propagandists as well as bots and groups run by those propagandists. Troll farms exhibit particular behaviors that the platforms need to be able to recognize. It would be best to delete accounts early, before those accounts have the time to establish themselves.

This might involve normally competitive companies working together, since operations and account names often cross platforms, and cross-platform visibility is an important tool for identifying them. Taking down accounts as early as possible is important, because it takes time to establish the legitimacy and reach of any one account. The NSA and US Cyber Command worked with the FBI and social media companies to take down Russian propaganda accounts during the 2018 midterm elections. It may be necessary to pass laws requiring Internet companies to do this. While many social networking companies have reversed their "we don't care" attitudes since the 2016 election, there's no guarantee that they will continue to remove these accounts -- especially since their profits depend on engagement and not accuracy.

Step 3: Seed distortion by creating alternative narratives. In the 1980s, this was a single "big lie," but today it is more about many contradictory alternative truths -- a "firehose of falsehood" -- that distort the political debate. These can be fake or heavily slanted news stories, extremist blog posts, fake stories on real-looking websites, deepfake videos, and so on.

Countermeasures: Fake news and propaganda are viruses; they spread through otherwise healthy populations. Fake news has to be identified and labeled as such by social media companies and others, including recognizing and identifying manipulated videos known as deepfakes. Facebook is already making moves in this direction. Educators need to teach better digital literacy, as Finland is doing. All of this will help people recognize propaganda campaigns when they occur, so they can inoculate themselves against their effects. This alone cannot solve the problem, as much sharing of fake news is about social signaling, and those who share it care more about how it demonstrates their core beliefs than whether or not it is true. Still, it is part of the solution.

Step 4: Wrap those narratives in kernels of truth. A core of fact makes falsehoods more believable and helps them spread. Releasing stolen emails from Hillary Clinton's campaign chairman John Podesta and the Democratic National Committee, or documents from Emmanuel Macron's campaign in France, were both examples of that kernel of truth. Releasing stolen emails with a few deliberate falsehoods embedded among them is an even more effective tactic.

Countermeasures: Defenses involve exposing the untruths and distortions, but this is also complicated to put into practice. Fake news sows confusion just by being there. Psychologists have demonstrated that an inadvertent effect of debunking a piece of fake news is to amplify the message of that debunked story. Hence, it is essential to replace the fake news with accurate narratives that counter the propaganda. That kernel of truth is part of a larger true narrative. The media needs to learn skepticism about the chain of information and to exercise caution in how they approach debunked stories.

Step 5: Conceal your hand. Make it seem as if the stories came from somewhere else.

Countermeasures: Here the answer is attribution, attribution, attribution. The quicker an influence operation can be pinned on an attacker, the easier it is to defend against it. This will require efforts by both the social media platforms and the intelligence community, not just to detect influence operations and expose them but also to be able to attribute attacks. Social media companies need to be more transparent about how their algorithms work and make source publications more obvious for online articles. Even small measures like the Honest Ads Act, requiring transparency in online political ads, will help. Where companies lack business incentives to do this, regulation will be the only answer.

Step 6: Cultivate proxies who believe and amplify the narratives. Traditionally, these people have been called "useful idiots." Encourage them to take action outside of the Internet, like holding political rallies, and to adopt positions even more extreme than they would otherwise.

Countermeasures: We can mitigate the influence of people who disseminate harmful information, even if they are unaware they are amplifying deliberate propaganda. This does not mean that the government needs to regulate speech; corporate platforms already employ a variety of systems to amplify and diminish particular speakers and messages. Additionally, the antidote to the ignorant people who repeat and amplify propaganda messages is other influencers who respond with the truth -- in the words of one report, we must "make the truth louder." Of course, there will always be true believers for whom no amount of fact-checking or counter-speech will suffice; this is not intended for them. Focus instead on persuading the persuadable.

Step 7: Deny involvement in the propaganda campaign, even if the truth is obvious -- although, since one major goal is to convince people that nothing can be trusted, rumors of involvement can be beneficial. Outright denial was Russia's tactic during the 2016 US presidential election; it encouraged rumors of its involvement during the 2018 midterm elections.

Countermeasures: When attack attribution relies on secret evidence, it is easy for the attacker to deny involvement. Public attribution of information attacks must be accompanied by convincing evidence. This will be difficult when attribution involves classified intelligence information, but there is no alternative. Trusting the government without evidence, as the NSA's Rob Joyce recommended in a 2016 talk, is not enough. Governments will have to disclose.

Step 8: Play the long game. Strive for long-term impact over immediate effects. Engage in multiple operations; most won't be successful, but some will.

Countermeasures: Counterattacks can disrupt the attacker's ability to maintain influence operations, as US Cyber Command did during the 2018 midterm elections. The NSA's new policy of "persistent engagement" (see the article by, and interview with, US Cyber Command Commander Paul Nakasone here) is a strategy to achieve this. So are targeted sanctions and indicting individuals involved in these operations. While there is little hope of bringing them to the United States to stand trial, the possibility of not being able to travel internationally for fear of being arrested will lead some people to refuse to do this kind of work. More generally, we need to better encourage both politicians and social media companies to think beyond the next election cycle or quarterly earnings report.
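For reference, the eight steps and a representative countermeasure for each can be condensed into a small lookup table. A sketch in Python follows; the one-line summaries are paraphrases of the essay, not an official taxonomy, and each step of course has more countermeasures than the single one listed:

```python
# Paraphrased condensation of the eight steps above; the summaries and
# chosen countermeasures are illustrative, not exhaustive.
KILL_CHAIN = {
    1: ("Find the cracks in society", "strengthen common political knowledge"),
    2: ("Build audiences", "detect and delete propaganda accounts early"),
    3: ("Seed distortion", "label fake news; teach digital literacy"),
    4: ("Wrap narratives in kernels of truth", "replace falsehoods with accurate narratives"),
    5: ("Conceal your hand", "fast, evidence-backed attribution"),
    6: ("Cultivate proxies", "make the truth louder via counter-influencers"),
    7: ("Deny involvement", "public attribution with convincing evidence"),
    8: ("Play the long game", "persistent engagement, sanctions, indictments"),
}

def countermeasure(step: int) -> str:
    """Look up the representative defense for a given attack step."""
    return KILL_CHAIN[step][1]

print(countermeasure(5))  # fast, evidence-backed attribution
```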

Permeating all of this is the importance of deterrence. Deterring these attacks will require a different theory. It will require, as the political scientist Henry Farrell and I have postulated, thinking of democracy itself as an information system and understanding "Democracy's Dilemma": how the very tools of a free and open society can be subverted to attack that society. We need to adjust our theories of deterrence to the realities of the information age and the democratization of attackers. If we can mitigate the effectiveness of influence operations, if we can publicly attribute, if we can respond either diplomatically or otherwise -- we can deter these attacks from nation-states.

None of these defensive actions is sufficient on its own. Steps overlap and in some cases can be skipped. Steps can be conducted simultaneously or out of order. A single operation can span multiple targets or be an amalgamation of multiple attacks by multiple actors. Unlike a cyberattack, disrupting an influence operation will require more than disrupting any particular step. It will require a coordinated effort between government, Internet platforms, the media, and others.

Also, this model is not static, of course. Influence operations have already evolved since the 2016 election and will continue to evolve over time -- especially as countermeasures are deployed and attackers figure out how to evade them. We need to be prepared for wholly different kinds of influence operations during the 2020 US presidential election. The goal of this kill chain is to be general enough to encompass a panoply of tactics but specific enough to illuminate countermeasures. But even if this particular model doesn't fit every influence operation, it's important to start somewhere.

Others have worked on similar ideas. Anthony Soules, a former NSA employee who now leads cybersecurity strategy for Amgen, presented this concept at a private event. Clint Watts of the Alliance for Securing Democracy is thinking along these lines as well. The Credibility Coalition's Misinfosec Working Group proposed a "misinformation pyramid." The US Justice Department developed a "Malign Foreign Influence Campaign Cycle," with associated countermeasures.

The threat from influence operations is real and important, and it deserves more study. At the same time, there's no reason to panic. Just as overly optimistic technologists were wrong that the Internet was the single technology that was going to overthrow dictators and liberate the planet, so pessimists are also probably wrong that it is going to empower dictators and destroy democracy. If we deploy countermeasures across the entire kill chain, we can defend ourselves from these attacks.

But Russian interference in the 2016 presidential election shows not just that such actions are possible but also that they're surprisingly inexpensive to run. As these tactics continue to be democratized, more people will attempt them. And as more people, and multiple parties, conduct influence operations, they will increasingly be seen as how the game of politics is played in the information age. This means that the line will increasingly blur between influence operations and politics as usual, and that domestic influencers will be using them as part of campaigning. Defending democracy against foreign influence also necessitates making our own political debate healthier.

This essay previously appeared in Foreign Policy.


Antitrust regulators are using the wrong tools to break up Big Tech [All - O'Reilly Media]

What we really need is disclosure of information about the growth and health of the supply side of Big Tech's marketplaces.

It’s a nerve-wracking time to be a Big Tech company. Yesterday, a US subcommittee on antitrust grilled representatives from Amazon, Google, Facebook, and Apple in Congress, and presidential candidates have gone so far as to suggest that these behemoths should be broken up. In the European Union, regulation is already happening: in March, the EU levied its third multibillion-dollar fine against Google for anti-competitive behavior.

In his 2018 letter to shareholders, published this past April, Jeff Bezos was already prepping for conversations with regulators. He doesn’t think Amazon is a monopoly. Instead, the company’s founder argues it is “just a small player in global retail.”

In Bezos’s defense, for many of the products Amazon sells, there are indeed many alternative sources, suggesting plenty of competition. Despite Amazon’s leadership in online retail, Walmart is more than double Amazon’s size as a general retailer, with Costco not far behind Amazon. Specialty retailers like Walgreens and CVS in the pharmacy world and Kroger and Albertson’s in groceries also dwarf Amazon’s presence in their categories.

But Amazon does not just compete with Walmart, CVS, Kroger, and other retailers—it also competes with the merchants who sell products through its platform.

This competition isn’t just the obvious kind, such as the Amazon Basics-branded batteries that by 2016 represented one third of all online battery sales, as well as similar Amazon products in audio, home electronics, baby wipes, bed sheets, and kitchenware. Amazon also competes with its merchants for visibility on its platform, and charges them additional fees for favored placement. And because Amazon is now leading with featured products rather than those its customers think are the best, its merchants are incentivized to advertise on the platform. Amazon’s fast-growing advertising business is thus a kind of tax on its merchants.

Likewise, Google does not just compete with other search engines like Bing and DuckDuckGo, but with everyone who produces content on the world wide web. Apple’s iPhone and Google’s Android don’t just compete with each other as smartphone platforms, but also with the app vendors who rely on smartphones to sell their products.

This kind of competition is taken for granted by antitrust regulators, who are generally more concerned with the end cost for consumers. And as anyone who has shopped online will know, Amazon is nearly always the cheaper option. (In fact, surveys have suggested that between seven and nine out of 10 Americans will check Amazon to compare the price of a purchase.) As long as the monopoly doesn’t lead to us forking out more money, then antitrust regulators traditionally leave it alone.

However, this view of antitrust leaves out some unique characteristics of digital platforms and marketplaces. These giants don’t just compete on the basis of product quality and price—they control the market through the algorithms and design features that decide which products users will see and be able to choose from. And these choices are not always in consumers’ best interests.

A fresh approach to antitrust

All of the internet giants—Amazon, Google, Facebook, and insofar as app stores are considered, Apple—provide the illusion of free markets, in which billions of consumers choose among millions of suppliers’ offerings, which compete on the basis of price, quality, and availability.

But if you recognize that what consumers really choose from is not the universe of all possible products, but those that are offered up to them either on the homepage or the search screen, the “shelf space” provided by these platforms is in fact far more limited than the tiniest of local markets—and what is placed on that shelf is uniquely under the control of the platform owner. And with mobile playing a larger and larger role, that digital shelf space is visibly shrinking rather than growing.

In short, the designers of marketplace-platform algorithms and screen layouts can arbitrarily allocate value to whom they choose. The marketplace is designed and controlled by its owners, and that design shapes “who gets what and why” (to use the marvelous phrase from Alvin E. Roth, who received a Nobel prize in economics for his foundational work in the field of market design).

When it comes to antitrust, the question of market power must be answered by analyzing the effect of these marketplace designs on both buyers and sellers, and how they change over time. How much of the value goes to the platform, how much to consumers, and how much to suppliers?

The platforms have the power to take advantage of either side of their marketplace. Any abuse of market power is likely to show up first on the supply side. A dominant platform can squeeze its suppliers while continuing to pass along part of the benefit to consumers—but keeping more and more of it for itself.

Over time, though, consumers feel the bite. Power over sellers ultimately translates into power over customers as well. As the platform owner favors its own offerings over those of its suppliers, choice is reduced, though it is only in the endgame that consumer pricing—the typical measure of a monopoly—begins to be affected.

The control that the platforms have over placement and visibility puts them in a unique position to collect what economists call rents: that is, value extracted through the ownership of a limited resource. These rents may come in the form of additional advantage given to the marketplace’s own private-label products, but also through the fees that are paid by merchants who sell through that platform. These fees can take many forms, including the necessity for merchants to spend more on advertising in order to gain visibility; Amazon products don’t have to pay such a levy.

The term “rents” dates back to the very earliest days of modern economics, when agricultural land was still the primary source of wealth. That land was worked productively by tenant farmers, who produced value through their labor. But the bulk of the benefit was taken by the landed gentry, who lived lives of ease on the unearned income that accrued to them simply through the ownership of their vast estates. In today’s parlance, Amazon’s merchants are becoming sharecroppers. The cotton field has been replaced by a search field.

Not all rents are bad. Economist Joseph Schumpeter pointed out that technological innovation often can lead to temporary rents, as innovators initially have a corner on a new product or service. But he also pointed out that these so-called Schumpeterian rents can, over time, become traditional monopolistic rents.

This is what antitrust regulators should be looking at when evaluating internet platform monopolies. Has control over the algorithms and designs that allocate attention become the latest tool in the landlord’s toolbox?

Big Tech has become the internet’s landlord—and rents are rising as a result.

In her book, The Value of Everything, economist Mariana Mazzucato makes the case that if we are really to understand the sources of inequality in our economy, economists must turn their attention back to rents. One of the central questions of classical economics was what activities are actually creating value for society, and which are merely value extracting—in effect charging a kind of tax on value that has actually been created elsewhere.

In today’s neoclassical economics, rents are seen as a temporary aberration, the result of market defects that will disappear given sufficient competition. But whether we are asking fundamental questions about value creation, or merely insufficient competition, rent extraction gives us a new lens through which to consider antitrust policy.

How internet platforms increase choice

Before digital marketplaces curtailed our choices as consumers, they first expanded our options.

Amazon’s virtually unlimited virtual shelf space radically expanded opportunity for both suppliers and consumers. After all, Amazon carries 120 million unique products in the US alone, compared to about 120,000 in a Walmart superstore or 35 million on What’s more, Amazon operates a marketplace with over 2.5 million third-party sellers, whose products, collectively, provide 58% of all Amazon retail revenue, with only 42% coming from Amazon’s first-party retail operation.

In the first-party retail operation, Amazon buys products from its suppliers and then resells them to consumers. In the third-party operation, Amazon collects fees for providing marketplace services to sellers—including display on, warehousing, shipping, and sometimes even financing—but never legally takes possession of the companies’ merchandise. This is what allows it to have so many more products to sell than its competitors: because Amazon never takes possession of inventory but instead charges suppliers for the services it provides, the risk of offering a slow-moving product is transferred from Amazon to its suppliers.

All of this appears to add up to the closest approximation ever seen in retail to what economists call “perfect competition.” This term refers to market conditions in which a large number of sellers with offers to provide comparable products at a range of prices are met by a large number of buyers looking for those products. Those buyers are armed not only with the ability to compare the price at which products are offered, but also to compare the quality of those products via consumer ratings and reviews. In order to win the business of consumers, suppliers must not only offer the best products at the best prices, but must compete for customers to express their satisfaction with the products they have bought.

So far, at least according to the statistics Bezos shared in his annual letter, the success of the Amazon marketplace is a triumph for both suppliers and consumers, and antitrust regulators should look elsewhere. As he put it, “Third-party sellers are kicking our first-party butt.”

He may well be right, but there are warning signs from other internet marketplaces like Google search that suggest the situation may not be as rosy as it appears. As it turns out, regulators need to consider some additional factors in order to understand the market power of internet platforms.

How internet platforms take away choice

If Amazon has become “the everything store” for physical goods, Google is the everything store for information.

Even more than Amazon, Google appears to meet the conditions for perfect competition. It matches up consumers with a near-infinite source of supply. Ask any question, and you’ll be provided with answers from hundreds or even thousands of competing content suppliers.

To do this, Google searches hundreds of billions of web pages created by hundreds of millions of information suppliers. Traditional price matching is absent, since much of the content is offered for free, but Google uses hundreds of other signals to determine what answers its customers are likely to find “best.” These signals measure such things as the reputation of the sites linking to any other site (page rank); the words those sites use to make those links (anchor text); the content of the document itself (via an AI engine referred to as “the Google Brain”); how likely people are to click on a given result in the list, based on millions of iterations, all recorded and measured; and even whether people clicked on a link and appear to have gone away satisfied (“a long click”) or came back and clicked on another (“a short click”).
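To see how multiple signals might be blended into a single ranking score, here is a toy sketch. The signal names, weights, and the 30-second dwell threshold are invented for the example and bear no relation to Google's actual models, which are learned rather than hand-weighted:

```python
# Invented weights over four of the signal families mentioned above.
WEIGHTS = {"page_rank": 0.4, "anchor_text": 0.2, "content": 0.2, "click_rate": 0.2}

def score(doc: dict) -> float:
    """Linear blend of per-signal scores, each assumed to lie in [0, 1]."""
    return sum(w * doc.get(signal, 0.0) for signal, w in WEIGHTS.items())

def click_type(dwell_seconds: float, threshold: float = 30.0) -> str:
    """Crude 'long click' vs. 'short click' split by dwell time."""
    return "long" if dwell_seconds >= threshold else "short"

docs = [
    {"url": "a", "page_rank": 0.9, "anchor_text": 0.5, "content": 0.7, "click_rate": 0.6},
    {"url": "b", "page_rank": 0.4, "anchor_text": 0.9, "content": 0.8, "click_rate": 0.9},
]
ranked = sorted(docs, key=score, reverse=True)
print([d["url"] for d in ranked])  # ['a', 'b']
```

The point of the sketch is the design question it raises: whoever sets the weights decides who ranks first, which is precisely the power over "shelf space" discussed earlier.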

The same goes for advertising on Google. Its “pay per click” ad auction model was a breakthrough in the direction of perfect competition: advertisers pay only when customers click on their ads. Both Google and advertisers are thus incentivized to feature ads that users actually want to see.
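The incentive alignment works roughly like this: if ads are ordered by bid times predicted click-through rate (expected revenue per impression), a relevant ad with a modest bid can outrank an irrelevant ad with a high bid. The numbers below are invented, and real ad auctions weigh many more quality factors; this is only a sketch of the principle.

```python
# Simplified sketch of pay-per-click ranking. Since the platform is paid
# only on clicks, ranking by bid * predicted click rate maximizes its
# expected revenue -- and rewards ads users actually want to see.

def ad_rank(bid: float, predicted_ctr: float) -> float:
    return bid * predicted_ctr  # expected revenue per impression

ads = [
    {"name": "irrelevant_high_bid", "bid": 5.00, "ctr": 0.01},
    {"name": "relevant_low_bid", "bid": 1.00, "ctr": 0.10},
]

# The relevant ad wins despite bidding one-fifth as much.
winner = max(ads, key=lambda a: ad_rank(a["bid"], a["ctr"]))
```

Under a pay-per-impression model, by contrast, the highest bid would simply win, with no pressure toward relevance.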

Only about 6% of Google search results pages contain any advertising at all. Both content producers and consumers have the benefit of Google’s immense effort to index and search all web pages, not just those that are commercially valuable. Google is like a store where all of the goods are free to consumers, but some merchants pay, in the form of advertising, to have their goods placed front and center.

The company is well aware of the risk that advertising will lead Google to favor the needs of advertisers over those of searchers. In fact, “Advertising and mixed motives” is the title of the appendix to Google founders Larry Page and Sergey Brin’s original 1998 research paper on Google’s search algorithms, written while they were still graduate students at Stanford.

By placement on the screen and algorithmic priority, platforms have the power to shape the pages users click on and the products they decide to buy.

“The goals of the advertising business model do not always correspond to providing quality search to users,” they thoughtfully observed. Google made enormous efforts to overcome those mixed motives by clearly separating their advertising results from their organic results, but the company has blurred those boundaries over time, perhaps without even recognizing the extent to which they have done so.

It is undeniable that the Google search results pages of today look nothing like they did when the company went public in 2004. The list of 10 “organic” results with three paid listings on the top and a sidebar of advertising results on the right that once characterized Google are long gone.

Dutch search engine consultant Eduard Blacquière documented the changes in size and placement of adwords (link in Dutch), the pay-per-click advertisements that run alongside searches, between 2010 and 2014. Here’s a page he captured in June 2010, the result for a search for the word “autoverzekering” (“auto insurance” in Dutch).

Figure 1. Screengrab: Eduard Blacquière.

Note that the adwords at the top of the page have a background tint, and those at the side have a narrower column width, setting both off clearly from the organic results. Take a quick glance at this page, and your eye can quickly jump to the organic results while ignoring the ads if that’s what you prefer.

Here is Blacquière’s dramatization of the change in size of that top block of adwords. As you can see, the ad block has both dramatically changed in size and lost its background color between 2010 and 2019, making it much harder to distinguish ads from organic results.

Figure 2. Screengrab: Eduard Blacquière.

Today, paid results can push organic results almost off the screen, so that the searcher has to scroll down to see them at all. On mobile pages with advertisements, this is almost always the case. Blacquière also documented the result of several studies done over a five-year period, which found the likelihood of a click on the first organic search result fell from over 40% in 2010 to less than 20% in 2014. This shows that through changes in homepage design alone, Google was able to shift significant attention from organic search results to ads.

Not only is paid advertising supplanting organic search results, but for more and more queries, Google itself has now collected enough information to provide what it considers to be the best answer directly to the consumer, eliminating the need to send us to a third-party website at all.

That’s the box that often appears above the search results when you ask a question, such as What are the lyrics to “Don’t Stop Believing,” or What date did WWII end?; the box to the right that pops up with restaurant reviews and opening hours; or a series of visual cards midway down the screen that show you the actors who appeared in a movie or different kinds of pastries common to a geographic region.

Through changes in homepage design alone, Google was able to shift significant attention from organic search results to ads.

Where does this information come from? In 2010, with the acquisition of Metaweb, Google committed to a project it called “the knowledge graph,” a collection of facts about well-known entities such as places, people, and events. This knowledge graph provides immediate answers for many of the most common queries.

The knowledge graph was initially culled from the web by ingesting information from sources such as Wikipedia, Wikidata, and the CIA Factbook, but since then, it has become far more encyclopedic and has ingested information from all over the web. In 2016, Google CEO Sundar Pichai claimed that the Google knowledge graph contained more than 70 billion facts.
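Conceptually, a knowledge graph is a store of facts about entities that lets the platform answer common queries directly, without sending the user to a third-party site. The facts and lookup scheme below are a toy illustration, not Google's actual representation.

```python
# Toy knowledge graph: facts keyed by (entity, predicate) pairs.
# The facts here are illustrative stand-ins for a graph that, per the
# text above, reportedly holds tens of billions of entries.

facts = {
    ("World War II", "ended"): "September 2, 1945",
    ("Yellowstone", "type"): "national park",
}

def direct_answer(entity: str, predicate: str):
    """Answer from stored facts; a miss would fall back to web search."""
    return facts.get((entity, predicate))  # None -> send user to the web

answer = direct_answer("World War II", "ended")
```

Every query the graph can answer in place is a query that never produces a click for the site the fact was originally ingested from.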

As shown in the figure below, for a popular search that has commercial potential, like visit Yellowstone, not only is the search results page dominated by paid search results (ads) and content directly supplied by Google, but Google’s “answer boxes” are themselves filled with links to other Google pages rather than to third-party websites. (Note that Google personalizes results and also runs hundreds of thousands of A/B tests a day on the effect of minor changes in position, so your own results for this identical search may differ from those shown here.)

Figure 3. Screengrab: Tim O'Reilly.

As of March 2017, user clickstream data provided by web analytics firm Jumpshot suggests that up to 40% of all Google queries no longer result in a click through to an external website. Think of all the questions you go to Google for that no longer require a second click: what’s the weather? What’s the current value of the euro against the dollar? What’s that song that’s playing in the background? What’s the best local restaurant? Biographies of eminent people, descriptions of cities, neighborhoods, businesses, historical events, quotes by famous authors, song lyrics, stock prices, and flight times all now appear as immediate answers from Google.

I am not necessarily suggesting anti-competitive intent. Google claims, with considerable justice, that all of these changes to search engine result pages are designed to improve user experience. And indeed, it is often helpful to get an immediate answer to a query rather than having to click through to another web site. Furthermore, much of this data is in fact licensed. But these deals seem like a step backward from the perfect competition represented by Google’s original reliance on multi-factor search algorithms to surface the very best information from independent web sites.

The net effect on Google’s financial performance is striking. In 2004, the year that Google went public, it had two principal advertising revenue engines: Adwords (those pay-per-click advertisements that run alongside searches on Google’s own site) and Adsense (pay-per-click advertisements that Google places on third-party websites on their behalf, either in search results on their site or directly alongside their content). In 2004, the two revenue sources were very close to equal. But by 2018, Google’s revenue from advertising on its own properties had grown to 82% of its total advertising revenue, with only 18% coming from the advertising it provides on third-party sites.

These examples illustrate the power of a platform to shape, both by placement on the screen and algorithmic priority, the pages users click on and the products they decide to buy—and therefore also the economic success for the supply side of its marketplace. Google maintains a rigorous separation between the search and advertising teams, but despite that fact, changes in the layout of Google’s pages and its algorithms have played an enormous role in shaping the attention of its users to favor those who advertise with Google.

When Google decides unilaterally on the size and position that its own products take on the screen, it also stops consumers from organically deciding what content to click on or what socks to buy. That’s what antitrust regulators should be considering: whether the algorithmic and design control exerted by sites like Google or Amazon reduces the choices we have as consumers.

Maintaining the illusion of choice

If Google has monopolized our access to information, Amazon’s fast-growing advertising business is now shaping what products consumers are actually given to choose from. Have they, too, taken a bite from the poisoned apple of advertising’s mixed motives?

Amazon’s merchants are becoming sharecroppers. The cotton field has been replaced by a search field.

Like Google, Amazon used to rely heavily on the collective intelligence of its users to recommend the best products from its suppliers. It did this by using information such as the supplier-provided description of the product, the number and quality of reviews, the number of inbound links, the sales rank of similar products, and so on, to determine the order in which search results would appear. These were all factored into Amazon’s default search ranking, which put products that were considered “Most Popular” first.
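The contrast between a collective-intelligence “Most Popular” ordering and paid placement can be sketched in a few lines. The products, fields, and popularity proxy below are invented for illustration; Amazon's actual ranking uses the many factors listed above.

```python
# Hedged sketch: the same two products ranked two ways. Data is invented.

products = [
    {"title": "Classic A", "avg_stars": 4.8, "reviews": 12000, "sponsored": False},
    {"title": "New Sponsored B", "avg_stars": 3.9, "reviews": 40, "sponsored": True},
]

def most_popular(items):
    # Popularity proxy: rating weighted by review volume -- the crowd's verdict.
    return sorted(items, key=lambda p: p["avg_stars"] * p["reviews"], reverse=True)

def sponsored_first(items):
    # Paid placement overrides the popularity signal entirely.
    return sorted(items, key=lambda p: not p["sponsored"])
```

Under the first ordering the crowd favorite leads; under the second, the advertiser does, regardless of what thousands of reviewers concluded.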

But as with Google, this eden of internet collective intelligence may be in danger of coming to an end.

In the example below, you can see that the default search for “best science fiction books” on Amazon now turns up only “Featured” (i.e., paid for) products. Are these the results you’d expect from this search? Where are the Hugo and Nebula award winners? Where are the books and authors with thousands of five-star reviews?

Figure 4. Amazon default search results for “best science fiction books.”

Contrast these results with those for the same search on Google, shown in the figure below. A knowledgeable science-fiction fan might quibble with some of these selections, but this is indeed a list of widely acknowledged classics in the field. In this case, Google presents no advertising, and so the results instead simply reflect the collective intelligence of what the web thinks is best.

While this might be taken as a reflection of the superiority of Google’s search algorithms over Amazon’s, the more important point is to note how differently a platform treats results when it has no particular commercial axe to grind.

Figure 5. Google search results for “best science fiction books.”

Amazon has long claimed that the company is fanatically focused on the needs of its customers. A search like the one shown above, which favors paid results, demonstrates how far the quest for advertising dollars takes them from that avowed goal.

Advice for antitrust regulators

So how, then, should we decide whether these Big Tech platforms need to be regulated?

In one famous exchange, Bill Gates, the founder and former CEO of Microsoft, told Chamath Palihapitiya, the one-time head of the Facebook platform:

“This isn’t a platform. A platform is when the economic value of everybody that uses it exceeds the value of the company that creates it. Then it’s a platform.”

Given this understanding of the role of a platform, regulators should be looking to measure whether companies like Amazon or Google are continuing to provide opportunity for their ecosystem of suppliers, or if they’re increasing their own returns at the expense of that ecosystem.

Rather than just asking whether consumers benefit in the short term from the companies’ actions, regulators should be looking at the long-term health of the marketplace of suppliers—they are the real source of that consumer benefit, not the platforms alone. Have Amazon, Apple, or Google earned their profits, or are they coming from monopolistic rents?

How might we know whether a company operating an algorithmically managed marketplace is extracting rents rather than simply taking a reasonable cut for the services it provides? The first sign may not be that it is raising prices for consumers, but that it is taking a larger percentage from its suppliers, or competing unfairly with them.

Before antitrust authorities look to remedies like breaking up these companies, a good first step would be to require disclosure of information about the growth and health of the supply side of their marketplaces. The statistics about the growth of its third-party marketplace that Bezos trumpeted in his shareholder letter tell only half the story. The questions to ask are who profits, by how much, and how that allocation of rewards is changing over time.

Regulators such as the SEC should require regular financial reporting on the allocation of value between the platform and its marketplace. I have done limited analysis for Google and Amazon based on information provided in their annual public filings, but much of the information required for a rigorous analysis is just not available.

Google provides an annual economic impact report analyzing value provided to its advertisers, but there is no comparable report for the value created for its content suppliers. Nor is there any visibility into the changing fortunes of app suppliers to the Play Store, Google’s Android app marketplace, or into the fortunes of content providers on YouTube.

Questions of who gets what and why must be asked of Amazon’s marketplace and its other operating units, including its dominant cloud-computing division, or Apple’s App Store. The role of Facebook’s algorithms in deciding what content appears in its readers’ newsfeeds has been widely scrutinized with regard to political bias and manipulation by hostile actors, but there’s been little rigorous economic analysis of economic bias in the algorithms of any of these companies.

Data is the currency of these companies. It should also be the currency of those looking to regulate them. You cannot regulate what you don’t understand. The algorithms that these companies use may be defended as trade secrets, but their outcomes should be open to inspection.

Continue reading Antitrust regulators are using the wrong tools to break up Big Tech.


Lowest Bidder Squared [The Daily WTF]

Initech was in dire straits. The website was dog slow, and the budget had been exceeded by a factor of five already trying to fix it. Korbin, today's submitter, was brought in to help in exchange...


Netflix’s First Pirate Site Blocking Application Granted in Australia [TorrentFreak]

The rate at which ‘pirate’ sites are being blocked in various countries raises the question of how many more there are left to block.

The answer, it seems, is plenty more yet.

Back in May, yet another application filed in Australia’s Federal Court presented a unique feature – the inclusion of US-based streaming giant Netflix as one of the applicants.

This was the first time the company had appeared requesting a blocking application in the region, claiming infringement of its works Santa Clarita Diet and Stranger Things.

Netflix didn’t appear on its own. The application was headed by local movie giant Roadshow Films and supported by other prominent movie companies such as Disney Enterprises, Universal City Studios, Warner Bros., Television Broadcasts Limited, TVBO, and Madman Anime Group.

Together they demanded the blocking of over 130 domains related to close to 90 torrent, streaming, and similar sites by more than 50 local ISPs.

The claims were filed under Section 115A of Australia’s Copyright Act, which can grant injunctions to force local ISPs to prevent their subscribers from accessing overseas-based ‘infringing locations’. It’s taken three months, but the content companies have now been successful.

This morning, Justice Thawley in the Federal Court ordered the respondents, including Telstra, Optus, TPG, Vocus, and Vodafone, to take “reasonable steps to disable access to the Target Online Locations” within 15 business days. Each ISP will be handed AUS$50 per domain by the applicants to cover compliance costs.

In common with previous orders, the ISPs were given the option to utilize DNS, IP address, and/or URL blocking techniques (or any other technical means agreed in writing between them and the applicants) to prevent access to the sites.

Of course, sites often decide to take countermeasures when orders such as this are handed down in order to circumvent blocking, so the order allows the studios to provide additional information so that these can be swiftly dealt with by the ISPs moving forward.

An updated/amended domain and URL list (there can be changes following an original application) is yet to appear in court records. However, the list of sites and domains in the original application can be viewed in our earlier report.

The order handed down this morning can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.


Make a habit/break a habit [Seth's Blog]

If you’re trying to help yourself (or those you serve), the most effective thing you can do is create long-term habits. They become unseen foundations of who we will become.

The goal of running a marathon in six weeks is audacious, but it’s not a habit. You might succeed, but with all that pressure, it’s more likely you’ll simply abandon the project.

On the other hand, the goal of running to the mailbox (at least) and back for 50 days in a row is the sort of habit that might stick.

The same goes for education (“we do flashcards every day” is very different from “I need to cram to learn quantum mechanics for the test.”)

And it goes double for our lifestyles. If you can replace a bad habit with a good one, you’ll live with the benefits for decades.

The challenge is to set up systems that are likely to create habits, not sprints that lead to failure.


Feeds | How do you start an RSE Group? [Planet GridPP]

How do you start an RSE Group? s.aragon 19 August 2019 - 9:38am

By Jeremy Cohen, EPSRC RSE Fellow, Imperial College London. In the weeks running up to the RSE Conference, my colleagues and I will be providing our thoughts on the questions people have submitted for our panel discussion with senior university management about how RSEs are being supported within academia.

Question: How do you start an RSE group at a university that only has scattered RSEs in different departments?


Comic: #sponsored [Penny Arcade]

New Comic: #sponsored


App [Ctrl+Alt+Del Comic]

The post App appeared first on Ctrl+Alt+Del Comic.


1323 [Looking For Group]

The post 1323 appeared first on Looking For Group.



Russ Allbery: Review: Spinning Silver [Planet Debian]

Review: Spinning Silver, by Naomi Novik

Publisher: Del Rey
Copyright: 2018
ISBN: 0-399-18100-8
Format: Kindle
Pages: 465

Miryem is the daughter of the village moneylender and the granddaughter (via her mother) of a well-respected moneylender in the city. Her grandfather is good at his job. Her father is not. He's always willing to loan the money out, but collecting it is another matter, and the village knows that and takes advantage of it. Each year is harder than the one before, in part because they have less and less money and in part because the winter is getting harsher and colder. When Miryem's mother falls ill, that's the last straw: she takes her father's ledger and goes to collect the money her family is rightfully owed.

Rather to her surprise, she's good at the job in all the ways her father is not. Daring born of desperation turns into persistent, cold anger at the way her family had been taken advantage of. She's good with numbers, has an eye for investments, and is willing to be firm and harden her heart where her father was not. Her success leads to good food, a warmer home, and her mother's recovery. It also leads to the attention of the Staryk.

The Staryk are the elves of Novik's world. They claim everything white in the forest, travel their own mysterious ice road, and raid villages when they choose. And, one night, one of the Staryk comes to Miryem's house and leaves a small bag of Staryk silver coins, challenging her to turn them into the gold the Staryk value so highly.

This is just the start of Spinning Silver, and Miryem is only one of a broadening cast. She demands the service of Wanda and her younger brother as payment for their father's debt, to the delight (hidden from Miryem) of them both since this provides a way to escape their abusive father. The Staryk silver becomes jewelry with surprising magical powers, which Miryem sells to the local duke for his daughter. The duke's daughter, in turn, draws the attention of the czar, who she met as a child when she found him torturing squirrels. And Miryem finds herself caught up in the world of the Staryk, which works according to rules that she can barely understand and may be a trap that she cannot escape.

Novik makes a risky technical choice in this book and pulls it off beautifully: the entirety of Spinning Silver is written in first person with frequently shifting narrators that are not signaled outside of the text. I think there were five different narrators in total, and I may be forgetting some. Despite that, I was never confused for more than a paragraph about who was speaking due to Novik's command of the differing voices. Novik uses this to great effect to show the inner emotions and motivations of the characters without resorting to the distancing effect of wandering third-person.

That's important for this novel because these characters are not emotionally forthcoming. They can't be. Each of them is operating under sharp constraints that make too much emotion unsafe: Wanda and her brother are abused, the Duke's daughter is valuable primarily as a political pawn and later is juggling the frightening attention of the czar, and Miryem is carefully preserving an icy core of anger against her parents' ineffectual empathy and is trying to navigate the perilous and trap-filled world of the Staryk. The caution and occasional coldness of the characters does require the reader do some work to extrapolate emotions, but I thought the overall effect worked.

Miryem's family is, of course, Jewish. The nature of village interactions with moneylenders make that obvious before the book explicitly states it. I thought Novik built some interesting contrasts between Miryem's navigation of the surrounding anti-Semitism and her navigation of the rules of the Staryk, which start off as far more alien than village life but become more systematic and comprehensible than the pervasive anti-Semitism as Miryem learns more. But I was particularly happy that Novik includes the good as well as the bad of Jewish culture among unforgiving neighbors: a powerful sense of family, household religious practices, Jewish weddings, and a cautious but very deep warmth that provides the emotional core for the last part of the book.

Novik also pulls off a rare feat in the plot structure by transforming most of the apparent villains into sympathetic characters and, unlike A Song of Ice and Fire, does this without making everyone awful. The Staryk, the duke, and even the czar are obvious villains on first appearances, but in each case the truth is more complicated and more interesting. The plot of Spinning Silver is satisfyingly complex and ever-changing, with just the right eventual payoffs for being a good (but cautious and smart!) person.

There were places when Spinning Silver got a bit bleak, such as when the story lingered a bit too long on Miryem trying and failing to navigate the Staryk world while getting herself in deeper and deeper, but her core of righteous anger and the protagonists' careful use of all the leverage that they have carried me through. The ending is entirely satisfying and well worth the journey. Recommended.

Rating: 8 out of 10



This is one of my favorite essays ever. I first read it when it was published in Harper’s Magazine, November 30, 2002. I’m sharing the whole thing below because everyone should read it; if I get a copyright cease-and-desist, I’ll remove it. –NP




By Shelby Steele

Harper’s Magazine, November 30, 2002

One day back in the late fifties, when I was ten or eleven years old, there was a moment when I experienced myself as an individual–as a separate consciousness–for the first time. I was walking home from the YMCA, which meant that I was passing out of the white Chicago suburb where the Y was located and crossing Halsted Street back into Phoenix, the tiny black suburb where I grew up. It was a languid summer afternoon, thick with the industrial-scented humidity of south Chicago that I can still smell and feel on my skin, though I sit today only blocks from the cool Pacific and more than forty years removed.

Into Phoenix no more than a block and I was struck by a thought that seemed beyond me. I have tried for years to remember it, but all my effort only pushes it further away. I do remember that it came to me with the completeness of an aphorism, as if the subconscious had already done the labor of crafting it into a fine phrase. What scared me a little at the time was its implication of a separate self with independent thoughts–a distinct self that might distill experience into all sorts of ideas for which I would then be responsible. That feeling of responsibility was my first real experience of myself as an individual–as someone who would have to navigate a separate and unpredictable consciousness through a world I already knew to be often unfair and always tense.

Of course I already knew that I was black, or “Negro,” as we said back then. No secret there. The world had made this fact quite clear by imposing on my life all the elaborate circumscriptions of Chicago-style segregation. Although my mother was white, the logic of segregation meant that I was born in the hospital’s black maternity ward. I grew up in a black neighborhood and walked to a segregated black school as white children in the same district walked to a white school. Kindness in whites always came as a mild surprise and was accepted with a gratitude that I later understood to be a bit humiliating. And there were many racist rejections for which I was only partly consoled by the knowledge that racism is impersonal.

Back then I thought of being black as a fate, as a condition I shared with people as various as Duke Ellington and the odd-job man who plowed the neighborhood gardens with a mule and signed his name with an X. And it is worth noting here that never in my life have I met a true Uncle Tom, a black who identifies with white racism as a truth. The Negro world of that era believed that whites used our race against our individuality and, thus, our humanity. There was no embrace of a Negro identity, because that would have weakened the argument for our humanity. “Negroness” or “blackness” would have collaborated with the racist lie that we were different and, thus, would have been true Uncle Tomism. To the contrary, there was an embrace of the individual and assimilation.

My little experience of myself as an individual confirmed the message of the civil-rights movement itself, in which a favorite picket sign read, simply, “I am a man.” The idea of the individual resonated with Negro freedom–a freedom not for the group but for the individuals who made up the group. And assimilation was not a self-hating mimicry of things white but a mastery by Negro individuals of the modern and cosmopolitan world, a mastery that showed us to be natural members of that world. So my experience of myself as an individual made me one with the group.

Not long ago C-SPAN carried a Harvard debate on affirmative action between conservative reformer Ward Connerly and liberal law professor Christopher Edley. During the Q and A a black undergraduate rose from a snickering clump of black students to challenge Mr. Connerly, who had argued that the time for racial preferences was past. Once standing, this young man smiled unctuously, as if victory were so assured that he must already offer consolation. But his own pose seemed to distract him, and soon he was sinking into incoherence. There was impatience in the room, but it was suppressed. Black students play a role in campus debates like this and they are indulged.

The campus forum of racial confrontation is a ritual that has changed since the sixties in only one way. Whereas blacks and whites confronted one another back then, now black liberals and black conservatives do the confronting while whites look on–relieved, I’m sure–from the bleachers. I used to feel empathy for students like this young man, because they reminded me of myself at that age. Now I see them as figures of pathos. More than thirty years have passed since I did that sort of challenging, and even then it was a waste of time. Today it is perseveration to the point of tragedy.

Here is a brief litany of obvious truths that have been resisted in the public discourse of black America over the last thirty years: a group is no stronger than its individuals; when individuals transform themselves they transform the group; the freer the individual, the stronger the group; social responsibility begins in individual responsibility. Add to this an indisputable fact that has also been unmentionable: that American greatness has a lot to do with a culturally ingrained individualism, with the respect and freedom historically granted individuals to pursue their happiness–this despite many egregious lapses and an outright commitment to the oppression of black individuals for centuries. And there is one last obvious but unassimilated fact: ethnic groups that have asked a lot from their individuals have done exceptionally well in America even while enduring discrimination.

Now consider what this Harvard student is called upon by his racial identity to argue in the year 2002. All that is creative and imaginative in him must be rallied to argue the essential weakness of his own people. Only their weakness justifies the racial preferences they receive decades after any trace of anti-black racism in college admissions. The young man must not show faith in the power of his people to overcome against any odds; he must show faith in their inability to overcome without help. As Mr. Connerly points to far less racism and far more freedom and opportunity for blacks, the young man must find a way, against all the mounting facts, to argue that black Americans simply cannot compete without preferences. If his own forebears seized freedom in a long and arduous struggle for civil rights, he must argue that his own generation is unable to compete on paper-and-pencil standardized tests.

It doesn’t help that he locates the cause of black weakness in things like “structural racism” and “uneven playing fields,” because there has been so little correlation between the remedies for such problems and actual black improvement. Blacks from families that make $100,000 a year or more perform worse on the SAT than whites from families that make $10,000 a year or less. After decades of racial preferences blacks remain the lowest performing student group in American higher education. And once they are out of college and in professions, their own children also underperform in relation to their white and Asian peers. Thus, this young man must also nurture the idea of a black psychological woundedness that is baroque in its capacity to stifle black aspiration. And all his faith, his proud belief, must be in the truth of this woundedness and the injustice that caused it, because this is his only avenue to racial pride. He is a figure of pathos because his faith in racial victimization is his only release from racial shame.

Right after the sixties’ civil-rights victories came what I believe to be the greatest miscalculation in black American history. Others had oppressed us, but this was to be the first “fall” to come by our own hand. We allowed ourselves to see a greater power in America’s liability for our oppression than we saw in ourselves. Thus, we were faithless with ourselves just when we had given ourselves reason to have such faith. We couldn’t have made a worse mistake. We have not been the same since.

To go after America’s liability we had to locate real transformative power outside ourselves. Worse, we had to see our fate as contingent on America’s paying off that liability. We have been a contingent people ever since, arguing our weakness and white racism in order to ignite the engine of white liability. And this has mired us in a protest-group identity that mistrusts individualism because free individuals might jeopardize the group’s effort to activate this liability.

Today I would be encouraged to squeeze my little childhood experience of individuality into a narrow group framework that would not endanger the group’s bid for white intervention. I would be urged to embrace a pattern of reform that represses our best hope for advancement–our individuals–simply to keep whites “on the hook.”

Mr. Connerly was outnumbered and outgunned at that Harvard debate. The consensus finally was that preferences would be necessary for a while longer. Whites would remain “on the hook.” The black student prevailed, but it was a victory against himself. In all that his identity required him to believe, there was no place for him.

In 1961, when I was fifteen years old, my imagination was taken over for some months by the movie Paris Blues, starring Sidney Poitier, Diahann Carroll, Paul Newman, and Joanne Woodward. For me this film was first of all an articulation of adult sophistication and deserved to be studied on these grounds alone. The music was by Duke Ellington and Billy Strayhorn, and the film was set in the jazz world of early-sixties Paris–a city that represented, in the folklore of American Negroes, a nirvana of complete racial freedom. To establish this freedom at the outset, Paul Newman (Ram) makes a pass at Diahann Carroll (Connie) as if her race means no more to him than the color of her coat. Of course the protocols of segregation return soon enough, and the four stars are paired off by race. But I could not hold this against a film that gave me a chance to watch the beautiful, if prim, Diahann Carroll against a backdrop of Montmartre and the Seine, Paris a little dim for being next to her.

Sidney Poitier’s character (Eddie) has by far the most interesting internal conflict. He has come to Paris–like almost the entire postwar generation of black American artists, musicians, and intellectuals–to develop his talents and live as an individual free of American racism. Eddie finds this in Paris as a jazz musician in Ram’s band, and when he and Connie begin their romance, he is an unapologetic advocate of expatriation for blacks. Paris is freedom; America, interminable humiliation. “I’ll never forget the first time I walked down the Champs-Elysees…. I knew I was here to stay.”

But there is a ghost on his trail. And Connie, the new and true love of his life, embodies that ghost. A teacher on vacation in Paris, she brings him news of the civil-rights movement building momentum back home, and, as their love deepens, she makes it clear that their future together will require his coming home and playing some part in the struggle of his people. She brings him precisely what he has escaped: the priority of group identity over individual freedom. The best acting in the film is Eddie’s impassioned rejection of this priority. He hates America with good reason, and it is impossible to see him as simply selfish. He has already found in Paris the freedom blacks are fighting for back home. And he has found this freedom precisely by thinking of himself as an individual who is free to choose. For him individualism is freedom. And even if blacks won the civil-rights struggle, true freedom would still require individuals to choose for themselves. So by what ethic should he leave the freedom of Paris for the indignities of America?

Clearly no ethic would be enough. But love, on the other hand, is the tie that binds. And when the object of that love is Connie, Eddie begins to see a point in responsibility to the group. But at the very end Eddie does not get on the train out of Paris with Connie. He promises to follow her home as soon as he can arrange his affairs, and it looks like he will be good to his word. But the movie ends on his promise rather than on his action. It is a long time now since 1961, so we can know that Eddie will never have the same degree of individual freedom if he goes back home. If whites don’t use his race against him, they will use it for him. And there are always the pressures of his own group identity. As an individual he will have a hard swim. Thinking of the lovely Connie, some days I root for him to leave. Other days, even thinking of her, I root for him to stay.

The greatest problem in coming from an oppressed group is the power the oppressor has over your group. The second greatest problem is the power your group has over you. Group identity in oppressed groups is always very strategic, always a calculation of advantage. The humble black identity of the Booker T. Washington era–“a little education spoiled many a good plow hand”–allowed blacks to function as tradesmen, laborers, and farmers during the rise of Jim Crow, when hundreds of blacks were being lynched yearly. Likewise, the black militancy of the late sixties strategically aimed for advantage in an America suddenly contrite over its long indulgence in racism.

One’s group identity is always a mask–a mask replete with a politics. When a teenager in East Los Angeles says he is Hispanic, he is thinking of himself within a group strategy pitched at larger America. His identity is related far more to America than to Mexico or Guatemala, where he would not often think of himself as Hispanic. In fact, “Hispanic” is much more a political concept than a cultural one, and its first purpose is to win power within the fray of American identity politics. So this teenager must wear the mask that serves his group’s ambitions in these politics.

With the civil-rights victories, black identity became more carefully calculated around the pursuit of power, because black power was finally possible in America. So, as the repressions of racism receded, the repressions of group identity grew more intense for blacks. Even in Paris, Connie uses the censoring voice of the group: “Things are much better than they were five years ago … not because Negroes come to Paris but because Negroes stay home.” Here the collective identity is the true identity, and individual autonomy a mere affectation.

If Paris Blues ends without Eddie’s actual return to America, we can witness such a return in the life of a real-life counterpart to Eddie, the black American writer James Baldwin. In the late forties, Baldwin went to Paris, like his friend and mentor Richard Wright, to escape America’s smothering racism and to find himself as a writer and as an individual. He succeeded dramatically and quickly on both counts. His first novel, the minor masterpiece Go Tell It on the Mountain, appeared in 1953 and was quickly followed by another novel and two important essay collections.

It was clearly the remove of Europe that gave Baldwin the room to find his first important theme: self-acceptance. In a Swiss mountain village in winter, against an “absolutely alabaster landscape” and listening to Bessie Smith records, he accepts that he is black, gay, talented, despised by his father, and haunted by a difficult childhood. From this self-acceptance emerges an individual voice and one of the most unmistakable styles in American writing.

Then, in 1957, Baldwin did something that changed him–and his writing–forever. He came home to America. He gave up the psychological remove of Europe and allowed himself to become once again fully accountable as a black American. And soon, in blatant contradiction of his own powerful arguments against protest writing, he became a protest writer. There is little doubt that this new accountability weakened him greatly as an artist. Nothing he wrote after the early sixties had the human complexity, depth, or literary mastery of what he wrote in those remote European locales where children gawked at him for his color.

The South African writer Nadine Gordimer saw the black writer in her own country as conflicted between “a deep, intense, private view” on the one hand and the call to be a spokesman for his people on the other. This classic conflict–common to writers from oppressed groups around the world–is really a conflict of authority. In Europe, Baldwin enjoyed exclusive authority over his own identity. When he came back to America, he did what in Western culture is anathema to the artist: he submitted his artistic vision–his “private view”–to the authority of his group. From The Fire Next Time to the end of his writing life, he allowed protest to be the framing authority of his work.

What Baldwin did was perhaps understandable, because his group was in a pitched battle for its freedom. The group had enormous moral authority, and he had a splendid rhetorical gift the group needed. Baldwin was transformed in the sixties into an embodiment of black protest, an archetypal David–frail, effeminate, brilliant–against a brutish and stupid American racism. He became a celebrity writer on the American scene, a charismatic presence with huge, penetrating eyes that were fierce and vulnerable at the same time. People who had never read him had strong opinions about him. His fame was out of proportion to his work, and if all this had been limited to Baldwin himself, it might be called the Baldwin phenomenon. But, in fact, his ascendancy established a pattern that would broadly define, and in many ways corrupt, an entire generation of black intellectuals, writers, and academics. And so it must be called the Baldwin model.

The goal of the Baldwin model is to link one’s intellectual reputation to the moral authority–the moral glamour–of an oppressed group’s liberation struggle. In this way one ceases to be a mere individual with a mere point of view and becomes, in effect, the embodiment of a moral imperative. This is rarely done consciously, as a Faustian bargain in which the intellectual knowingly sells his individual soul to the group. Rather the group identity is already a protest-focused identity, and the intellectual simply goes along with it. Adherence to the Baldwin model is usually more a sin of thoughtlessness and convenience than of conscious avarice, though it is always an appropriation of moral power, a stealing of thunder.

The protest intellectual positions himself in the pathway of the larger society’s march toward racial redemption. By allowing his work to be framed by the protest identity, he articulates the larger society’s moral liability. He seems, therefore, to hold the key to how society must redeem itself. Baldwin was called in to advise Bobby Kennedy on the Negro situation. It is doubtful that the Baldwin of Go Tell It on the Mountain would have gotten such a call. But the Baldwin of The Fire Next Time probably expected it. Ralph Ellison, a contemporary of Baldwin’s who rejected the black protest identity but whose work showed a far deeper understanding of black culture than Baldwin’s, never had this sort of access to high places. By insisting on his individual autonomy as an artist, Ellison was neither inflated with the moral authority of his group’s freedom struggle nor positioned in the pathway of America’s redemption.

Today the protest identity is a career advantage for an entire generation of black intellectuals, particularly academics who have been virtually forced to position themselves in the path of their university’s obsession with “diversity.” Inflation from the moral authority of protest, added to the racial-preference policies in so many American institutions, provides an irresistible incentive for black America’s best minds to continue defining themselves by protest. Professors who resist the Baldwin model risk the Ellisonian fate of invisibility.

What happened in America to make the Baldwin model possible?

The broad answer is this: America moved from its long dark age of racism into an age of white guilt. I saw this shift play out in my own family.

I grew up watching my parents live out an almost perpetual protest against racial injustice. When I was five or six we drove out of our segregated neighborhood every Sunday morning to carry out the grimly disciplined business of integrating a lily-white church in the next town. Our family was a little off-color island of quiet protest amidst rows of pinched white faces. And when that battle was lost there was a long and successful struggle to create Chicago’s first fully integrated church. And from there it was on to the segregated local school system, where my parents organized a boycott against the elementary school that later incurred the first desegregation lawsuit in the North.

Amidst all this protest, I could see only the price people were paying. I saw my mother’s health start to weaken. I saw the white minister who encouraged us to integrate his church lose his job. There was a time when I was sent away to stay with family friends until things “cooled down.” Black protest had no legitimacy in broader America in the 1950s. It was subversive, something to be repressed, and people who indulged in it were made to pay.

And then there came the sunny day in the very late sixties when I leaned into the window of my parents’ old powder-blue Rambler and, inches from my mother’s face, said wasn’t it amazing that I was making $13,500 a year. They had come to visit me on my first job out of college, and had just gotten into the car for their return trip. I saw my mistake even as the words tumbled out. My son’s pride had blinded me to my parents’ feelings. This was four or five thousand dollars more than either of them had ever made in a single year. I had learned the year before that my favorite professor–a full professor with two books to his credit–had fought hard for a raise to $10,000 a year. Thirteen five implied a different social class, a different life than we had known as a family.

“Congratulations,” they said. “That’s very nice.”

The subtext of this role reversal was President Johnson’s Great Society, and beneath that an even more profound shift in the moral plates of society. The year was 1969, and I was already employed in my fourth Great Society program–three Upward Bound programs and now a junior college-level program called Experiment in Higher Education, in East St. Louis, Illinois. America was suddenly spending vast millions to end poverty “in our time,” and, as it was for James Baldwin on his return from Paris, the timing was perfect for me.

I was chosen for my first Upward Bound job because I was the leader of the campus civil-rights group. This engagement with black protest suddenly constituted a kind of aptitude, in my employers' minds, for teaching disadvantaged kids. It inflated me into a person who was gifted with young people. The protesting that had gotten me nowhere when I started college was serving me as well as an advanced degree by the time I was a senior.

Two great, immutable forces have driven America’s attitudes, customs, and public policies around race. The first has been white racism, and the second has been white guilt. The civil-rights movement was the dividing line between the two. Certainly there was some guilt before this movement, and no doubt some racism remains after it. But the great achievement of the civil-rights movement was that its relentless moral witness finally defeated the legitimacy of racism as propriety–a principle of social organization, manners, and customs that defines decency itself. An idea controls culture when it achieves the invisibility of propriety. And it must be remembered that racism was a propriety, a form of decency. When, as a boy, I was prohibited from entering the fine Christian home of the occasional white playmate, it was to save the household an indecency. Today, thanks to the civil-rights movement, white guilt is propriety–an utterly invisible code that defines decency in our culture with thousands of little protocols we no longer even think about. We have been living in an age of white guilt for four decades now.

What is white guilt? It is not a personal sense of remorse over past wrongs. White guilt is literally a vacuum of moral authority in matters of race, equality, and opportunity that comes from the association of mere white skin with America’s historical racism. It is the stigmatization of whites and, more importantly, American institutions with the sin of racism. Under this stigma white individuals and American institutions must perpetually prove a negative–that they are not racist–to gain enough authority to function in matters of race, equality, and opportunity. If they fail to prove the negative, they will be seen as racists. Political correctness, diversity policies, and multiculturalism are forms of deference that give whites and institutions a way to prove the negative and win reprieve from the racist stigma.

Institutions especially must be proactive in all this. They must engineer a demonstrable racial innocence to garner enough authority for simple legitimacy in the American democracy. No university today, private or public, could admit students by academic merit alone if that meant no black or brown faces on campus. Such a university would be seen as racist and shunned accordingly. White guilt has made social engineering for black and brown representation a condition of legitimacy.

People often deny white guilt by pointing to its irrationality–“I never owned a slave,” “My family got here eighty years after slavery was over.” But of course almost nothing having to do with race is rational. That whites are now stigmatized by their race is not poetic justice; it is simply another echo of racism’s power to contaminate by mere association.

The other common denial of white guilt has to do with motive: “I don’t support affirmative action because I’m guilty; I support it because I want to do what’s fair.” But the first test of sincere support is a demand that the policy be studied for effectiveness. Affirmative action went almost completely unexamined for thirty years and has only recently been briefly studied in a highly politicized manner now that it is under threat. The fact is that affirmative action has been a very effective racial policy in garnering moral authority and legitimacy for institutions, and it is now institutions–not individual whites or blacks–that are fighting to keep it alive.

The real difference between my parents and myself was that they protested in an age of white racism and I protested in an age of white guilt. They were punished; I was rewarded. By my time, moral authority around race had become a great and consuming labor for America. Everything from social programs to the law, from the color of TV sitcom characters to the content of school curricula, from college admissions to profiling for terrorists–every aspect of our culture–now must show itself redeemed of the old national sin. Today you cannot credibly run for president without an iconography of white guilt: the backdrop of black children, the Spanish-language phrases, the word “compassion” to separate conservatism from its associations with racism.

So then here you are, a black American living amidst all this. Every institution you engage–the government, universities, corporations, public and private schools, philanthropies, churches–faces you out of a deficit of moral authority. Your race is needed everywhere. How could you avoid the aggressions, and even the bigotries, of white guilt? What institution could you walk into without having your color tallied up as a credit to the institution? For that matter, what political party or ideological direction could you pursue without your race being plundered by that party or ideology for moral authority?

Because blacks live amidst such hunger for the moral authority of their race, we embraced protest as a permanent identity in order to capture the fruits of white guilt on an ongoing basis. Again, this was our first fall by our own hand. Still, it is hard to imagine any group of individuals coming out of four centuries of oppression and not angling their identity toward whatever advantage seemed available. White guilt held out the promise of a preferential life in recompense for past injustice, and the protest identity seemed the best way to keep that promise alive.

An obvious problem here is that we blacks fell into a group identity that has absolutely no other purpose than to collect the fruits of white guilt. And so the themes of protest–a sense of grievance and victimization–evolved into a sensibility, an attitude toward the larger world that enabled us always and easily to feel the grievance whether it was there or not. Protest became the mask of identity, because it defined us in a way that kept whites “on the hook.” Today the angry rap singer and Jesse Jackson and the black-studies professor are all joined by an unexamined devotion to white guilt.

To be black in my father’s generation, when racism was rampant, was to be a man who was very often victimized by racism. To be black in the age of white guilt is to be a victim who is very rarely victimized by racism. Today in black life there is what might be called “identity grievance”–a certainty of racial grievance that is entirely disconnected from actual grievance. And the fervor of this symbiosis with white guilt has all but killed off the idea of the individual as a source of group strength in black life. All is group and unity, even as those minority groups that ask much of their individuals thrive in America despite any discrimination they encounter.

I always thought that James Baldwin on some level knew that he had lost himself to protest. His work grew narrower and narrower when age and experience should have broadened it. And, significantly, he spent the better part of his last decades in France, where he died in 1987. Did he again need France in those years to be himself, to be out from under the impossible demands of a symbiotically defined black identity, to breathe on his own?

There is another final and terrible enemy of the black individual. I first saw it in that Great Society program in which my salary was so sweetened by white guilt. The program itself quickly slid into banana republic–style corruption, and I was happy to get away to graduate-student poverty. But on the way out certain things became clear. The program was not so much a program as it was an idea of the social “good,” around which there was an intoxicating enthusiasm. It was my first experience with the utter thrill of untested good intentions. On the way out I realized that thrill had been the point. That feeling is what we sent back to Washington, where it was received as an end in itself.

Now I know that white guilt is a moral imperative that can be satisfied by good intentions alone. In my own lifetime, racial reform in America changed from a struggle for freedom to a struggle for “the good.” A new metaphysics of the social good replaced the principles of freedom. Suddenly “diversity,” “inclusion,” “tolerance,” “pluralism,” and “multiculturalism” were all conjure words that aligned you with a social good so compelling that you couldn’t leave it to mere freedom. In certain circumstances freedom could be the outright enemy of “the good.” If you want a “diverse” student body at your university, for example, the individualistic principles of freedom might be a barrier. So usually “the good” has to be imposed from above out of a kind of moral imperialism by a well-meaning white elite.

In the sixties, black identity also shifted its focus from freedom to “the good” to better collect the fruits of white guilt. Thus it was a symbiosis of both white and black need that pushed racial reform into a totalitarian model where schemes of “the good” are imposed by coercion at the expense of freedom. The Franco-Czech writer Milan Kundera says that every totalitarianism is “also the dream of paradise.” And when people seem to stand in its way, the rulers “build a little gulag on the side of Eden.” In this good-driven age of white guilt, with all its paradises of diversity, a figurative gulag has replaced freedom’s tradition of a respected and loyal opposition. Conservatives are automatically relegated to this gulag because of their preference for freedom over ideas of “the good.”

But there is another “little gulag” for the black individual. He lives in a society that needs his race for the good it wants to do more than it needs his individual self. His race makes him popular with white institutions and unifies him with blacks. But he is unsupported everywhere as an individual. Nothing in his society asks for or even allows his flowering as a full, free, and responsible person. As is always the case when “the good” becomes ascendant over freedom, and coercion itself becomes a good thing, the individual finds himself in a gulag.

Something happened at Harvard last fall that provides a rare window into all of this. Harvard’s president, Lawrence H. Summers, rebuked the famous black-studies professor Cornel West for essentially being a lightweight on a campus of heavyweights. These were not his words, but there is little doubt that this was his meaning. West himself has said that he felt “devalued” and “disrespected” in the now famous meeting between the two.

The facts are all on Summers’s side. West’s achievements are simply not commensurate with his position as a University Professor, the very highest rank a member of an already esteemed faculty can ascend to–a rank normally reserved for Nobel-level accomplishment. West had spent the previous year on leave making a rap CD and chairing Al Sharpton’s presidential exploration committee. Privately–that is, behind the mask of the protest identity–few serious black academics saw West much differently than Summers did. Even publicly, where the mask is mandatory, he was never more than “officially” defended.

But Harvard itself had created the monster. Harvard did not promote Cornel West to a University Professorship because his academic work was seminal. Cornel West brought to campus the special charisma of the black protest identity–not, of course, in its unadorned street incarnation but dressed up in a three-piece suit and muted by an impenetrable academese that in the end said almost nothing and scared no one. This was not someone akin to the young Eldridge Cleaver, who had a real fire and could really write but who also might be rather difficult in and around Harvard Square. With Cornel you could sit the black protest identity down to dinner amidst the fine china and pretty girls from tony suburbs and everyone would be so thrilled.

Here, in the University Professorship, white guilt and black protest perfectly consummated their bargain. It was never Cornel West–the individual–that Harvard wanted; it was the defanged protest identity that he carried, which redounded to the university as racial innocence itself. How could anyone charge this university with racism when it promoted Cornel West to its upper reaches? His marginal accomplishments only made the gesture more grand. West was not at Harvard to do important work; he was there precisely to be promoted over his head. In the bold irrationality of the promotion was the daring display of racial innocence.

What Lawrence Summers did not understand, when he became Harvard’s new president, was that West was an important part of the institution’s iconography of racial innocence. Or maybe he did understand and wanted to challenge this way of doing things. In any case, he did the unthinkable: He saw West as an individual. Thus, he did not confuse the charisma of the protest identity with real achievement.

His rebuke of West caused an explosion, because it broke faith with the symbiotic enmeshment of white guilt and black protest. West has now left Harvard for Princeton, where this enmeshment prevails unthreatened by ham-fisted administrators who might inadvertently see their black moral-authority hires as individuals. Summers himself–as if fresh from re-education camp–has apologized to West and professed his support for affirmative action. The age of white guilt, with its myriad corruptions and its almost racist blindness to minority individuality, may someday go down like the age of racism went down–but only if people take the risk of standing up to it rather than congratulating themselves for doing things that have involved no real risk since 1965.

I know Cornel West to be a good man, whose grace and good manners even with people he disagrees with have been instructive to me. As contemporaries, we have both had to find our way in this age of white guilt. As educated blacks, we have both had to wrestle against the relentless moral neediness of American institutions, though I’m sure he wouldn’t see it that way. I saw the way race inflated people like us back in those Great Society programs I mentioned, and it was my good luck to enter them when the corruptions were so blatant that it was mere self-preservation to walk away.

One of my assignments in that last program was to help design some of the country’s very first black-studies programs, and by 1970 I already knew that they would always lack the most fundamental raison d’etre of any academic discipline: a research methodology of their own. This meant that black studies could never be more than an assemblage of courses cobbled together from “real” departments, and that it could never have more than a political mandate–a perfect formula for academic disrespect. But, as I say, it was luck to learn this early, before white guilt became infinitely more subtle and seductive.

In the age of racism there were more powerful black intellectuals, because nobody wanted them for their race. Richard Wright, Ralph Ellison, Zora Neale Hurston, W.E.B. Du Bois, and many others were fully developed, self-made individuals, no matter their various political and ideological bents. Race was not a “talent” that falsely inflated them or won them high position. Today no black intellectual in America, including this writer, is safe from this sort of inflation. The white world is simply too hungry for the moral authority our skins carry. And this is true on both the political left and right. Why did so many black churches have to be the backdrop for Clinton speeches, and why should Condoleezza Rice and Colin Powell have to hear Bush crow about their high place among his advisers?

James Baldwin once wrote: “What Europe still gives an American is the sanction, if one can accept it, to become oneself.” If America now gives this sanction to most citizens, its institutions still fiercely deny it to blacks. And this society will never sanction blacks in this way until it drops all the mechanisms by which it tries to appease white guilt. Guilt can be a very civilizing force, but only when it is simply carried as a kind of knowledge. Efforts to appease or dispel it will only engage the society in new patterns of dehumanization against the same people who inspired guilt in the first place. This will always be true.

Restraint should be the watchword in racial matters. We should help people who need help. There are, in fact, no races that need help; only individuals, citizens. Over time maybe nothing in the society, not even white guilt, will reach out and play on my race, bind me to it for opportunity. I won’t ever find in America what Baldwin found in Europe, but someday maybe others will.

Shelby Steele is a research fellow at the Hoover Institution at Stanford University. His most recent book is A Dream Deferred (HarperCollins).



Congrats to the winners of the 2019 Hugo! Kowal, Wells, Cho, Harrow, Chambers, AO3, Liu, Dozois, Wolfe, and more! [Cory Doctorow – Boing Boing]

The 2019 World Science Fiction Convention is being held in Dublin, and tonight, the con presented the annual Hugo Awards, voted on by the attendees and supporters of this year's con.

The winners included:

Best Novel: The Calculating Stars, by Mary Robinette Kowal (Tor)

Best Novella: Artificial Condition, by Martha Wells (Tor.com Publishing)

Best Novelette: “If at First You Don’t Succeed, Try, Try Again,” by Zen Cho (B&N Sci-Fi and Fantasy Blog, 29 November 2018)

Best Short Story: “A Witch’s Guide to Escape: A Practical Compendium of Portal Fantasies,” by Alix E. Harrow (Apex Magazine, February 2018)

Best Series: Wayfarers, by Becky Chambers (Hodder & Stoughton / Harper Voyager)

Best Related Work: Archive of Our Own, a project of the Organization for Transformative Works

Best Graphic Story: Monstress, Volume 3: Haven, written by Marjorie Liu, art by Sana Takeda (Image Comics)

Best Professional Editor (Short Form): Gardner Dozois

Best Professional Editor, Long Form: Navah Wolfe

Best Professional Artist: Charles Vess

Best Semiprozine: Uncanny Magazine

Best Fanzine: Lady Business

Best Fancast: Our Opinions Are Correct

Best Fan Writer: Foz Meadows

Best Fan Artist: Likhain (Mia Sereno)

Best Art Book: The Books of Earthsea: The Complete Illustrated Edition, illustrated by Charles Vess, written by Ursula K. Le Guin (Saga Press / Gollancz)

Lodestar Award for Best Young Adult Book: Children of Blood and Bone, by Tomi Adeyemi (Henry Holt / Macmillan Children’s Books)

John W. Campbell Award for Best New Writer: Jeannette Ng

Best Dramatic Presentation, Long Form: Spider-Man: Into the Spider-Verse, screenplay by Phil Lord and Rodney Rothman, directed by Bob Persichetti, Peter Ramsey and Rodney Rothman (Sony)

Best Dramatic Presentation, Short Form: The Good Place: “Janet(s),” written by Josh Siegal & Dylan Morgan, directed by Morgan Sackett (NBC)

(Thumbnail: @Dublin2019)


Guest Comic: KB Spangler [QC RSS]

Today's comic is a guest strip from my good buddy KB Spangler! Thanks buddy! Regular comics resume tomorrow.

Sunday, 18 August


Markus Koschany: My Free Software Activities in July 2019 [Planet Debian]

Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

DebConf 19 in Curitiba

I attended DebConf 19 in Curitiba, Brazil from 16.7.2019 to 28.7.2019. I gave two talks, about games in Debian and about the Long Term Support project, together with Hugo Lefeuvre, Chris Lamb and Holger Levsen. The games talk in particular had some immediate positive impact: in response to it, Reiner Herrmann and Giovanni Mascellani provided patches for release-critical bugs related to GCC 9 and the Python 2 removal, and we could already fix some of the more important problems for our current release cycle.

I had a lot of fun in Brazil and again met a couple of new and interesting people. Thanks to all who helped organize DebConf 19 and made it the great event it was!

Debian Games

  • We are back in business, which means packaging new upstream versions of popular games. I packaged new versions of atomix, dreamchess and pygame-sdl2,
  • uploaded minetest 5.0.1 to unstable and backported it later to buster-backports,
  • uploaded new versions of freeorion and warzone2100 to Buster,
  • fixed bug #931415 in freeciv and #925866 in xteddy,
  • became the new uploader of enemylines7.
  • I reviewed and sponsored patches from Reiner Herrmann to port several games to python3-pygame, including whichwayisup, funnyboat and monsterz,
  • and patches from Giovanni Mascellani for ember and enemylines7.

Debian Java

  • I packaged new upstream versions of robocode, jboss-modules, jboss-jdeparser2, wildfly-common, commons-dbcp2, jboss-logging-tools, jboss-logmanager, jboss-logging, jboss-xnio, libjide-oss-java, sweethome3d, sweethome3d-furniture, pdfsam, libsambox-java, libsejda-java, jackson-jr, jackson-dataformat-xml, libsmali-java and apktool.


  • I updated the popular Firefox/Chromium addons ublock-origin, https-everywhere and privacybadger and also packaged new upstream versions of wabt and binaryen which are both required for building webassembly files from source.

Debian LTS

This was my 41st month as a paid contributor, and I have been paid to work 18.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1854-1. Issued a security update for libonig fixing 1 CVE.
  • DLA-1860-1. Issued a security update for libxslt fixing 4 CVEs.
  • DLA-1846-2. Issued a regression update for unzip to address a Firefox build failure.
  • DLA-1873-1. Issued a security update for proftpd-dfsg fixing 1 CVE.
  • DLA-1886-1. Issued a security update for openjdk-7 fixing 4 CVEs.
  • DLA-1890-1. Issued a security update for kde4libs fixing 1 CVE.
  • DLA-1891-1. Reviewed and sponsored a security update for openldap fixing 2 CVEs, prepared by Ryan Tandy.


Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my fourteenth month, and I have been paid to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 15.07.2019 until 21.07.2019 and triaged CVEs in openjdk7, libxslt, libonig, php5, wireshark, python2.7, libsdl1.2, patch, suricata and libssh2.
  • ELA-143-1. Issued a security update for libonig fixing 1 CVE.
  • ELA-145-1. Issued a security update for libxslt fixing 2 CVEs.
  • ELA-151-1. Issued a security update for linux fixing 3 CVEs.
  • ELA-154-1. Issued a security update for openjdk-7 fixing 4 CVEs.

Thanks for reading and see you next time.


A new biography reveals the Koch brothers' very early role in creating organized climate denial [Cory Doctorow – Boing Boing]

The Koch brothers are quite an enigma: on the one hand, they owe their vast fortune to extremely long-range planning: Charles Koch is famously contemptuous of entrepreneurs who take their companies public, believing that the public markets insist on such short timescales that they undermine real growth; and he grew his father's hydrocarbon empire by investing heavily in automation systems with extremely long amortization schedules.

But cutting against this commitment to hyper-rationalist, long-range thinking is the brothers' slavish devotion to neoliberal orthodoxy and a reflexive, irrational phobia of state intervention, despite the fact that states are often an important long-range counterforce to the short-termism of the markets.

Where these two forces collide, the results are bizarre: the application of the Kochs' long-term thinking to heading off any kind of long-term planning by states.

Nowhere is this more manifest than in the Kochs' overt and covert campaign against climate science, whose rationalist, empirical conclusion is that urgent, coordinated, non-market action is a hard requirement to avert a catastrophe that could result in the extinction of the human species (which would also result in significant falls in the Kochs' fortunes). There is no rational version of long-range thinking that says that climate denial will produce a good outcome; the majority of climate denial is centered around the kind of short-termism that Koch deplores, where the returns to capital over a couple quarters are more important than the long-term ruination of firms, enterprises (and civilizations).

In Kochland: The Secret History of Koch Industries and Corporate Power in America, veteran investigative business journalist Christopher Leonard delivers a deep (700+ page) look at the Kochs, revealing newly unearthed evidence of the Kochs' involvement in the very earliest stirrings of organized, corporate climate denial campaigns, providing the black money for the first climate denial events and publications.

Leonard reveals that the Kochs began funding climate denial in 1991, in response to George HW Bush's support for a treaty limiting emissions after the first IPCC report. The GOP of the 1990s accepted the scientific consensus on climate, and the Kochs' funding of a Cato Institute climate denial conference was the first step in the long road to turning the GOP into the human extinction/climate-denial party.

The Kochs' involvement in climate denial is a marvellous (and terrifying) case study in how smart people do dumb things, and why the idea of unfettered corporate power (one dollar, one vote) is a recipe for disaster. Though it was diversified, the Kochs' business empire was still built on a foundation of hydrocarbon extraction and incineration. The science on climate ran directly contrary to the Kochs' short-term profits (even if the Kochs' long-term interests depended on the planet continuing to be habitable by human beings), and "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

The Kochs' capacity for motivated reasoning is precisely why we have regulation: even "rational actors" are prone to irrational, self-interested superstitions, and if markets are the only arbiters of what is and is not permitted, then billionaires' cherished delusions and peccadilloes become the law of the land, which everyone else must live under.

The Kochs were the canaries in neoliberalism's coalmines. As governments have been starved of their ability to gather facts and reason about them, the billionaire class has turned its preferences into our laws, reinventing the neofeudal corporate doctrines of Ford and his ilk.

Pluralism has many benefits, but chief among them is guarding against individual follies. It's the logic behind the scientific method and the key to the enlightenment: the rule of law, rather than the rule of man, depends on the freedom to undertake impartial evaluations of factual questions, like "is climate change being caused by the Kochs' business?" The Kochs, for obvious reasons, are not qualified to answer that question.

Leonard, nonetheless, manages to dig up valuable new material, including evidence of the Kochs’ role in perhaps the earliest known organized conference of climate-change deniers, which gathered just as the scientific consensus on the issue was beginning to gel. The meeting, in 1991, was sponsored by the Cato Institute, a Washington-based libertarian think tank, which the Kochs founded and heavily funded for years. As Leonard describes it, Charles Koch and other fossil-fuel magnates sprang into action that year, after President George H. W. Bush announced that he would support a treaty limiting carbon emissions, a move that posed a potentially devastating threat to the profits of Koch Industries. At the time, Bush was not an outlier in the Republican Party. Like the Democrats, the Republicans largely accepted the scientific consensus on climate change, reflected in the findings of expert groups such as the Intergovernmental Panel on Climate Change, which had formed in 1988, under the auspices of the United Nations.

“Kochland” Examines the Koch Brothers’ Early, Crucial Role in Climate-Change Denial [Jane Mayer/The New Yorker]

(via Naked Capitalism)


DISH Files $10m Copyright Infringement Lawsuit Against Easybox IPTV [TorrentFreak]

As the use of unlicensed IPTV services continues to gain popularity with consumers around the world, content owners and broadcasters are faced with a growing illicit market to disrupt.

As a result, copyright infringement and similar lawsuits against ‘pirate’ IPTV providers are definitely on the rise, with US-based broadcaster DISH Network at the forefront.

This week, DISH filed another lawsuit in the United States, this time targeting ‘pirate’ IPTV provider Easybox IPTV. This ‘company’ (the term is used loosely, given the unknown structure of the operation) appears not dissimilar to several others previously targeted by the broadcaster.

The model adopted by Easybox suggests the outfit primarily targets less experienced IPTV users, something that’s supported by the operation offering ready-configured (aka ‘fully-loaded’) devices as well as add-on subscription packages.

Part of the Easybox IPTV offering

The DISH lawsuit, filed in a Texas federal court, lists DOES 1-5 individually and collectively doing business as Easybox IPTV. DISH doesn’t appear to know the identities of the people it’s suing but has concluded they may be from China.

The broadcaster says that historical WHOIS records for the service’s domain name suggest a China base while delivery time for devices sent to China is much quicker than those sent to the United States.

At issue are DISH’s ‘protected channels’, i.e. those it supplies as a result of licensing agreements obtained from various TV networks. These allow the company to “distribute and publicly perform” in the United States “by means including satellite, OTT, Internet protocol television (‘IPTV’), and Internet.”

Easybox IPTV’s service, which offers “more than 1,000 channels” to its subscribers, includes the ‘protected channels’, a breach of the broadcaster’s rights, according to DISH.

“Defendants use their Easybox Service to transmit the Protected Channels over the Internet to Service Users soon after the original authorized transmission,” the complaint reads.

“Defendants capture live broadcast signals of the Protected Channels, transcode these signals into a format useful for streaming over the Internet, transfer the transcoded content to one or more servers provided, controlled, and maintained by Defendants, and then transmit the Protected Channels to Service Users through OTT delivery.”

An interesting element of the case is the effort expended by DISH, in advance of this lawsuit, to get Easybox to cease and desist its activities. According to the broadcaster, since January 27, 2016, DISH and its partners have sent at least 116 infringement notices, all of which were ignored.

“Instead [of responding], Defendants prevented DISH’s counsel from viewing by blocking their Internet Protocol (‘IP’) addresses,” the complaint adds.

On top of the direct notices, from February 8, 2016, more than 170 additional complaints were sent to CDNs associated with the Easybox service. DISH believes at least some of these were forwarded to the IPTV provider since it later countered by switching to different CDN providers.

All that considered, DISH is demanding a permanent injunction against Easybox (and anyone acting in concert with it) preventing it from “transmitting, streaming, distributing, or publicly performing in the United States, with any Easybox set-top box, smart IPTV subscription, subscription renewal, or any other device, application, service, or process, any of the Protected Channels or any of the programming that comprises any of the Protected Channels.”

DISH also seeks a ban on the distribution, sale, promotion or advertising of Easybox services and/or devices, including any inducement for others to carry out the same.

In addition, it requests statutory damages for 67 or more registered works at the rate of $150,000 each (more than $10 million) plus any profits generated by Easybox due to the infringement of non-registered works.

The DISH complaint against Easybox can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.


Turning a recalled children's unicorn boot into a display for endless product recall notices [Cory Doctorow – Boing Boing]

Phil Torrone from Adafruit told us about Consumers Should Immediately...: "This uses a live data feed from The United States Consumer Product Safety Commission (USCPSC) to randomly display thousands of products recalled for reasons such as fire, electrocution, entrapment, choking and a variety of other unintended dangers. Every two minutes the embedded screen lists the name of the product, the identified danger, the product manufacturer, and the original recall date. The electronics are enclosed in an actual recalled children’s unicorn boot, along with an embedded rechargeable battery, allowing for an uninterrupted stream of recalled products in any location."


Today in GPF History for Sunday, August 18, 2019 [General Protection Fault: The Comic Strip]

Harry Barker arrives at Hollerith's School of Phreaking and Hacking...


Link [Scripting News]

I read this VC appraisal of Dropbox vs Slack, which service was going to be the foundation for groupware in the enterprise world. I was unaware of this perspective and it was enlightening. I have studied both. Slack has the API, Dropbox went it alone. I think if Dropbox had fully embraced the idea that it was a developer's platform, there were a few small doors they had to open, they would have become the storage for networked apps. Same with Amazon S3. Each had the opportunity to bridge into the others' space, but neither has. I think the assumption at Dropbox was they knew everyone who was capable of making great groupware apps. That was their mistake. It's still not too late, they are dominant and totally baked in. They should be killing instead they are flailing. One of the biggest wasted opportunities I've seen in my career.


Git v2.23.0 released []

Version 2.23.0 of the Git source-code management system is out. There are a lot of new features, including a new "git merge --quit" option, new "git switch" and "git restore" commands, and more.
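A quick sketch of the headline additions, run in a throwaway repository (the branch and file names are invented for the demo):

```shell
#!/bin/sh
# Demo of commands new in Git 2.23: "git switch" and "git restore"
# split up the overloaded "git checkout". Uses a temporary repo.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo

echo hello > README.md
git add README.md
git commit -qm 'initial commit'

git switch -c topic                   # new: create a branch and switch to it
echo scratch >> README.md
git restore README.md                 # new: discard the unstaged change
git switch -                          # new: switch back to the previous branch
git merge --quit 2>/dev/null || true  # new: abandon an in-progress merge (none here)
```

Previously all of these were spellings of `git checkout` or `git reset`; the new commands make the intent (branch switching vs. file restoration) explicit.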


Link [Scripting News]

I'm getting ready for my next binge -- Big Little Lies season 2. I'm looking over the review summaries on Metacritic, and see they're all based on the first three episodes only. This was the same problem with software reviews, back when they did reviews of software (too bad they stopped). They would review the software based on a day's worth of use, if that much. But we design software so that it gets better the more you use it. We balance the tradeoffs. Of course we want the product to be easy to learn, but we also want it to be something you use all the time. It's just as ridiculous to judge a serial show based on a third of a season. If I make it through the whole season, which seems likely based on the first season, I'll review the whole thing here on my blog. And I still have to relaunch bingeworthy so we accumulate judgements of Scripting News readers on these shows. I have an idea how to do it. 💥


The anatomy of annoying [Seth's Blog]

Pema Chodron’s story has stuck with me for a decade: At a meditation retreat, the guy sitting near her kept making an annoying clicking sound. Again and again, she was jolted from her practice because he kept clicking his tongue.

During the break, as she gathered up her courage to tell him that he was ruining the day for her and for everyone else, she realized that in fact, it was a nearby radiator that was causing the clicking.

Suddenly, the fact that it was an inanimate object changed everything for her.

It wasn’t about her any longer.

It wasn’t intentional or selfish.

It was simply a radiator.

The rest of the day was fine, because it was simply a radiator.

My biggest takeaway is that the key leap wasn’t in discovering that the sounds came from a radiator. The lesson is that acting like it comes from a radiator completely solves the problem.

Sometimes (often, usually), it’s not about us. It’s simply weather.


Anti-Piracy Efforts Are Unlikely to Beat Sci-Hub [TorrentFreak]

Sci-Hub has often been referred to as “The Pirate Bay of Science,” but that description really sells the site short.

While both sites are helping the public to access copyrighted content without permission, Sci-Hub has also become a crucial tool that arguably helps the progress of science.

The site allows researchers to bypass expensive paywalls so they can read articles written by their colleagues. The information in these ‘pirated’ articles is then used to provide the foundation for future research.

What the site does is not permitted, according to the law, but in the academic world, Sci-Hub is praised by many. In particular, those who don’t have direct access to expensive journals but aspire to excel in their academic field.

This leads to a rather intriguing situation where many of the ‘creators,’ the people who write academic articles, are openly supporting the site. By doing so, they go directly against the major publishers, including the billion-dollar company Elsevier, which are the rightsholders.

Elsevier previously convinced the courts that Sci-Hub is a force of evil. Many scientists, however, see it as an extremely useful tool. This was illustrated once again by a ‘letter to the editor’ Dr. Prasanna R Deshpande sent to the Journal of Health & Allied Sciences recently.

While Deshpande works at the Department of Clinical Pharmacy at Poona College of Pharmacy, his latest writing is entirely dedicated to copyright and Sci-Hub. In his published letter (no paywall), the researcher explains why a site such as Sci-Hub is important for the scientific community as a whole.

The Indian researcher points out that Sci-Hub’s main advantage is that it’s free of charge. This is particularly important for academics in developing countries, who otherwise don’t have the means to access crucial articles. Sci-Hub actually allows these people to carry out better research.

“A researcher generally has to pay some money ($30 or more per article on an average) for accessing the scholarly articles. However, the amount may not be ‘small’ for a researcher/research scholar, especially from a developing country,” Deshpande notes.

Aside from the cost issue, Sci-Hub is often seen as more convenient. Many professors use the site, and a recent survey found that 62.5% of medical students across six countries in Latin America use it to conduct research.

According to Deshpande, these and other arguments lead to the conclusion that Sci-Hub should be supported, at least until there is a good alternative.

“Reading updated knowledge is one of the essential parts of lifelong learning. Currently, Sci‑Hub is the only answer for this. Therefore, Sci‑Hub has various advantages because of which it should be supported,” Deshpande concludes.

This is of course just the opinion of one researcher, but the web is riddled with similar examples. A simple Twitter search shows that many academics are sharing Sci-Hub links among each other, and some have even created dedicated websites to show some of the latest working Sci-Hub mirrors.

The major publishers are obviously not happy with this. Aside from lawsuits against Sci-Hub, they regularly send takedown notices to sites that link to infringing articles, including Google.

Recently Elsevier took it a step further by going after Citationsy, a tool that allows academics and researchers to manage citations and reference lists. The service previously published a blog post summing up some options for people to download free research articles.

This blog post also linked to Sci-Hub. Elsevier clearly didn’t like this, and sent its lawyer after Citationsy, requesting it to remove the link.

Citationsy founder Cenk Özbakır initially wasn’t sure how to respond. Linking to a website isn’t necessarily copyright infringement. However, challenging a multi-billion dollar company on an issue like this is a battle that’s hard to win.

Eventually, Özbakır decided to remove it, pointing to a Google search instead. However, not without being rather critical of the move by Elsevier and its law firm Bird & Bird.

“I have of course taken down any links to Sci-Hub on @ElsevierLabs obviously thinks making money is more important than furthering science. Congratulations, @twobirds! We all now that the only thing this will achieve is less people reading papers,” Özbakır wrote on Twitter.

The ‘linking’ issue was later picked up by BoingBoing which also pointed out that many of Elsevier’s own publications include links to Sci-Hub, as we also highlighted in the past.

While researchers are not unanimously backing Sci-Hub, it appears that this type of enforcement may not be the best way forward.

Pressuring people with cease and desist notices, filing lawsuits, and sending takedown notices certainly isn’t sustainable in the long term, especially if they target people in the academic community.

Perhaps Elsevier and other publishers should use the massive popularity of Sci-Hub as a signal that something is clearly wrong with what they are offering. Instead of trying to hide piracy by sweeping it under the rug, Elsevier could learn from it and adapt.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.


Willowweep Update [Skin Horse]

Shaenon: Chris Baldwin and I have finished the rough draft of Willowweep Manor, our upcoming graphic novel. Here’s a sample (still rough) page.

Channing: I’ve long been a fan of Mr. Baldwin’s work, and so seeing his teamup with my favorite webcomicker should be a treat.


Bicycling from Urbana IL to Turkey Run IN [Nina Paley]

I’ve wanted to bike to Turkey Run State Park since I moved to Urbana 7 years ago, and this week I finally did. It’s about 72 miles, or 73 if you make a few wrong turns, as I did.

UPDATE: by popular demand, here is the route on RideWithGPS.

Previously I have biked to Indiana via the Northern Route, past Kickapoo; I found parts of the way terrifying, as I wrote here. For Turkey Run, I wanted a more direct and hopefully safer route. Local bike club riders warned me about gravel in Vermillion County, but had no further advice, so I relied on Google Maps. Between that and my own experience, I can say the route below, corrected for mistakes I made coming and going, minimizes gravel to a mere 4 miles!

On my way home, I was chased by a very determined dog on 900 North Road near Catlin-Indianola road in Vermillion County IL. That’s right where most of the gravel was, so it was particularly exciting trying to maintain speed and balance with a barking, howling, hulking dog (Rottweiler? Pitbull/Doberman mix?) on my tail for 3 miles west.

I stopped at Angie’s Country Kitchen in Cayuga, IN, both ways. It’s roadside diner food, made as competently as such fare can be. I recommend the chocolate malt and fries, and the breakfast I had on my return trip was also solid.

Most of the “major” roads I traveled (Indiana 41, Indiana 234/Mill Road) were lightly trafficked and non-scary. Illinois route 150 was a bit scary, but there’s only 1 mile of it (you can bypass this if you’re willing to endure an extra mile of gravel on 900 N/850 N Road, as I did on my way out). The scariest traffic was on Indiana route 47, the very last leg leading to the park. For some reason people drove like crazy a**holes on it. But it’s only a few miles, and then you’re there!

Crossing the Indiana State Line…
Followed by this nice photogenic “Welcome To Indiana” sign about a mile later. Look at that big scary truck!
That truck picture was just for drama. Most of the time the road looked like this. Really route 234 was very civil, unlike route 136 up north.
Angie’s Country Kitchen in Cayuga. Solid.
Frazzled by the insane drivers on IN Route 47, and desiccated and sweaty from the mid-day sun, I parked at the trash can-adjacent bike rack at the Turkey Run Inn. Connie Bikeson, my separable steel Tour Easy recumbent, was the only bike I saw there the whole time.
The next day I hiked the fabled Trail 3 twice: once in the morning, and again in the late afternoon.
The bouncy Suspension Bridge.
Rare mosses.
Tenacious ferns.
Turkey Run State Park is gorgeous and amazing.
Special Man Friend™ got in the way of my nature photos, so I had to pixellate his face. No online surveillance for Special Man Friend™!
Facial recognition software can’t do much with the back of Special Man Friend™‘s head. I just wanted a photo of these cool stepped rocks of the stream bed/trail.
There were about 20 Amish in this group, and all of them together were quieter than the “English” cell-phone-totin’ mother-daughter pair we passed earlier, who decided to have a loud phone chat in the middle of the woods. Also the usual sounds of families-with-shrieking-children echoed through the canyons. On our return trip in the afternoon, WE were the noisy ones.
Steep ladders on Trail 3.
Returning to the park with more friends in the late afternoon.
Here’s my pal Lois on the ladder.
Early evening bridge reflection.
Special Man Friend™ protects his identity so I don’t have to apply a pixel-blur filter here like I did on the other photos. Thanks, Special Man Friend™!
Early the next morning, I headed home while my friends slept at the Inn. In Tangier, Indiana, this very polite and quiet dog walked unthreateningly up to me while I snarfed down an energy bar next to the Tangier Friends Church. The Indiana dogs I encountered along my route were remarkably polite, unlike Illinois country dogs, who are mostly savage maniacs.
There was no big “Welcome to Illinois” sign after this. Even Indiana is outshining Illinois these days.
Georgetown, IL, which I accidentally passed through after missing my planned turnoff. I had to go one mile on busy route 150, but I bypassed the mile+ of gravel I suffered on my way out.
This train was moving when I approached, but stopped completely at the crossing southeast of Sidney IL. Thrilling video footage here.

So there you have it – Turkey Run by bicycle, from Urbana. I recommend it!


Saturday, 17 August


Michael Stapelberg: Linux distributions: Can we do without hooks and triggers? [Planet Debian]

Hooks are an extension feature provided by all package managers that are used in larger Linux distributions. For example, Debian uses apt, which has various maintainer scripts. Fedora uses rpm, which has scriptlets. Different package managers use different names for the concept, but all of them offer package maintainers the ability to run arbitrary code during package installation and upgrades. Example hook use cases include adding daemon user accounts to your system (e.g. postgres), or generating/updating cache files.
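As a concrete sketch of the "adding a daemon user" use case (the package name "exampled" and the echoed message are invented for illustration), a Debian postinst hook looks roughly like this; dpkg invokes it with an argument such as "configure":

```shell
#!/bin/sh
# Minimal sketch of a dpkg maintainer-script hook (postinst).
# "exampled" is a hypothetical package; dpkg would call this script
# as: postinst configure <most-recently-configured-version>
postinst() {
    case "$1" in
        configure)
            # A real hook body would run something like:
            #   adduser --system --group --no-create-home exampled
            echo "configuring exampled"
            ;;
        abort-upgrade|abort-remove|abort-deconfigure)
            # Nothing to undo in this sketch.
            ;;
        *)
            echo "postinst called with unknown argument: $1" >&2
            return 1
            ;;
    esac
}

postinst configure 1.0
```

The point of the case statement is that dpkg reuses one script for several lifecycle events, which is exactly why package managers cannot predict what any given hook will do.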

Triggers are a kind of hook which run when other packages are installed. For example, on Debian, the man(1) package comes with a trigger which regenerates the search database index whenever any package installs a manpage. When, for example, the nginx(8) package is installed, a trigger provided by the man(1) package runs.
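Triggers like the man(1) example are declared in a control file shipped by the interested package; a sketch of what such a debian/triggers entry looks like (illustrative, the real man-db file may differ):

```
# debian/triggers (shipped by the man-db package, illustrative):
# dpkg runs man-db's postinst with the "triggered" argument whenever
# any other package installs files under /usr/share/man.
interest-noawait /usr/share/man
```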

Over the past few decades, Open Source software has become more and more uniform: instead of each piece of software defining its own rules, a small number of build systems are now widely adopted.

Hence, I think it makes sense to revisit whether offering extension via hooks and triggers is a net win or net loss.

Hooks preclude concurrent package installation

Package managers can generally make very few assumptions about what hooks do, what preconditions they require, and which conflicts might be caused by running multiple packages’ hooks concurrently.

Hence, package managers cannot concurrently install packages. At least the hook/trigger part of the installation needs to happen in sequence.

While it seems technically feasible to retrofit package manager hooks with concurrency primitives such as locks for mutual exclusion between different hook processes, the required overhaul of all hooks① seems like such a daunting task that it might be better to just get rid of the hooks instead. Only deleting code frees you from the burden of maintenance, automated testing and debugging.

① In Debian, there are 8620 non-generated maintainer scripts, as reported by find shard*/src/*/debian -regex ".*\(pre\|post\)\(inst\|rm\)$" on a Debian Code Search instance.
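For a sense of what such a retrofit would mean, here is a hypothetical sketch using flock(1): each hook body runs under an exclusive lock, so two package installations could proceed concurrently while their hook bodies still execute one at a time (all names invented):

```shell
#!/bin/sh
# Hypothetical sketch: serializing hook bodies with flock(1).
# Two "installations" run concurrently, but each hook body holds an
# exclusive lock, so the start/end lines of a hook never interleave.
set -e
lock=$(mktemp)
log=$(mktemp)

run_hook() {
    (
        flock 9                        # block until we hold the lock
        echo "$1 start" >> "$log"      # hook body would go here,
        echo "$1 end" >> "$log"        # e.g. rebuilding an index
    ) 9>"$lock"
}

run_hook pkg-a &
run_hook pkg-b &
wait

cat "$log"
```

Even this toy version shows the problem: every existing hook would need to agree on which lock protects which shared resource, which is the overhaul the text calls daunting.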

Triggers slow down installing/updating other packages

Personally, I never use the apropos(1) command, so I don’t appreciate the man(1) package’s trigger which updates the database used by apropos(1). The process takes a long time and, because hooks and triggers must be executed serially (see previous section), blocks my installation or update.

When I tell people this, they are often surprised to learn about the existence of the apropos(1) command. I suggest adopting an opt-in model.

Unnecessary work if programs are not used between updates

Hooks run when packages are installed. If a package’s contents are not used between two updates, running the hook in the first update could have been skipped. Running the hook lazily when the package contents are used reduces unnecessary work.

As a welcome side-effect, lazy hook evaluation automatically makes the hook work in operating system images, such as live USB thumb drives or SD card images for the Raspberry Pi. Such images must not ship the same crypto keys (e.g. OpenSSH host keys) to all machines, but instead generate a different key on each machine.

Why do users keep packages installed that they don’t use? It’s extra work to remember and clean up those packages after use. Plus, users might not realize, or might not value, that having fewer packages installed brings benefits such as faster updates.

I can also imagine that there are people for whom the cost of re-installing packages incentivizes them to just keep packages installed—you never know when you might need the program again…

Implemented in an interpreted language

While working on hermetic packages (more on that in another blog post), where the contained programs are started with modified environment variables (e.g. PATH) via a wrapper bash script, I noticed that the overhead of those wrapper bash scripts quickly becomes significant. For example, when using the excellent magit interface for Git in Emacs, I encountered second-long delays² when using hermetic packages compared to standard packages. Re-implementing wrappers in a compiled language provided a significant speed-up.
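For illustration, such a compiled wrapper can be very small. This Go sketch uses a hypothetical package path; the real wrapper would end with syscall.Exec rather than a print:

```go
package main

import "fmt"

// wrapperPath computes the PATH a hermetic wrapper would export before
// handing off to the real binary. The package path /ro/git-amd64-2.22.0 is
// hypothetical. A real wrapper would follow up with
//   syscall.Exec(prefix+"/bin/git", os.Args, os.Environ())
// replacing the process without the fork+parse cost of a shell script.
func wrapperPath(prefix, oldPath string) string {
	return prefix + "/bin:" + oldPath
}

func main() {
	// With a hypothetical original PATH of /usr/bin:/bin:
	fmt.Println(wrapperPath("/ro/git-amd64-2.22.0", "/usr/bin:/bin"))
}
```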

Similarly, getting rid of an extension point which mandates using shell scripts allows us to build an efficient and fast implementation of a predefined set of primitives, where you can reason about their effects and interactions.

② magit needs to run git a few times for displaying the full status, so small overhead quickly adds up.

Incentivizing more upstream standardization

Hooks are an escape hatch for distribution maintainers to express anything which their packaging system cannot express.

Distributions should only rely on well-established interfaces such as autoconf’s classic ./configure && make && make install (including commonly used flags) to build a distribution package. Integrating upstream software into a distribution should not require custom hooks. For example, instead of requiring a hook which updates a cache of schema files, the library used to interact with those files should transparently (re-)generate the cache or fall back to a slower code path.

Distribution maintainers are hard to come by, so we should value their time. In particular, there is a 1:n relationship of packages to distribution package maintainers (software is typically available in multiple Linux distributions), so it makes sense to spend the work in the 1 and have the n benefit.

Can we do without them?

If we want to get rid of hooks, we need another mechanism to achieve what we currently achieve with hooks.

If the hook is not specific to the package, it can be moved to the package manager. The desired system state should either be derived from the package contents (e.g. required system users can be discovered from systemd service files) or declaratively specified in the package build instructions—more on that in another blog post. This turns hooks (arbitrary code) into configuration, which allows the package manager to collapse and sequence the required state changes. E.g., when 5 packages are installed which each need a new system user, the package manager could update /etc/passwd just once.
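This collapsing can be sketched in Go (package and user names are hypothetical): each package declares its required users as data, and the package manager deduplicates them and applies them in a single pass.

```go
package main

import "fmt"

// Each package declares the system users it needs as data rather than as a
// hook. The package manager can then deduplicate all declarations and touch
// /etc/passwd once. Package and user names here are hypothetical.
type pkg struct {
	name  string
	users []string // declared in the package build instructions
}

// collectUsers returns the distinct users required across all packages,
// in first-seen order.
func collectUsers(pkgs []pkg) []string {
	seen := make(map[string]bool)
	var out []string
	for _, p := range pkgs {
		for _, u := range p.users {
			if !seen[u] {
				seen[u] = true
				out = append(out, u)
			}
		}
	}
	return out
}

func main() {
	pkgs := []pkg{
		{"nginx", []string{"www-data"}},
		{"postgresql", []string{"postgres"}},
		{"prometheus", []string{"prometheus"}},
	}
	// One pass over /etc/passwd instead of three separate hook executions:
	fmt.Println("single passwd update adds:", collectUsers(pkgs))
}
```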

If the hook is specific to the package, it should be moved into the package contents. This typically means moving the functionality into the program start (or the systemd service file if we are talking about a daemon). If (while?) upstream is not convinced, you can either wrap the program or patch it. Note that this case is relatively rare: I have worked with hundreds of packages and the only package-specific functionality I came across was automatically generating host keys before starting OpenSSH’s sshd(8)³.

There is one exception where moving the hook doesn’t work: packages which modify state outside of the system, such as bootloaders or kernel images.

③ Even that can be moved out of a package-specific hook, as Fedora demonstrates.


Global state modifications performed as part of package installation today use hooks, an overly expressive extension mechanism.

Instead, all modifications should be driven by configuration. This is feasible because there are only a few different kinds of desired state modifications. This makes it possible for package managers to optimize package installation.

Michael Stapelberg: Linux package managers are slow [Planet Debian]

I measured how long the package managers of the most popular Linux distributions take to install a small and a large package (the ack(1p) source code search Perl script and qemu, respectively).

Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.

All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance.

See Appendix B for details on the measurement method and command outputs.


Keep in mind that these are one-time measurements. They should be indicative of actual performance, but your experience may vary.

ack (small Perl program)

distribution  package manager  data    wall-clock time  rate
Fedora        dnf              107 MB  29s              3.7 MB/s
NixOS         Nix              15 MB   14s              1.1 MB/s
Debian        apt              15 MB   4s               3.7 MB/s
Arch Linux    pacman           6.5 MB  3s               2.1 MB/s
Alpine        apk              10 MB   1s               10.0 MB/s

qemu (large C program)

distribution  package manager  data    wall-clock time  rate
Fedora        dnf              266 MB  1m8s             3.9 MB/s
Arch Linux    pacman           124 MB  1m2s             2.0 MB/s
Debian        apt              159 MB  51s              3.1 MB/s
NixOS         Nix              262 MB  38s              6.8 MB/s
Alpine        apk              26 MB   2.4s             10.8 MB/s

The difference between the slowest and fastest package managers is 30x!

How can Alpine’s apk and Arch Linux’s pacman be an order of magnitude faster than the rest? They are doing a lot less than the others, and more efficiently, too.

Pain point: too much metadata

For example, Fedora transfers a lot more data than others because its main package list is 60 MB (compressed!) alone. Compare that with Alpine’s 734 KB APKINDEX.tar.gz.

Of course the extra metadata which Fedora provides serves some use case; otherwise it would presumably have been removed altogether. Still, the amount of metadata seems excessive for the use case of installing a single package, which I consider the main use case of an interactive package manager.

I expect any modern Linux distribution to only transfer absolutely required data to complete my task.

Pain point: no concurrency

Because they need to sequence the execution of arbitrary package-maintainer-provided code (hooks and triggers), all tested package managers install packages sequentially (one after the other) instead of concurrently (all at the same time).

In my blog post “Can we do without hooks and triggers?”, I outline that hooks and triggers are not strictly necessary to build a working Linux distribution.

Thought experiment: further speed-ups

Strictly speaking, the only required feature of a package manager is to make available the package contents so that the package can be used: a program can be started, a kernel module can be loaded, etc.

By only implementing what’s needed for this feature, and nothing more, a package manager could likely beat apk’s performance. It could, for example:

  • skip archive extraction by mounting file system images (like AppImage or snappy)
  • use compression which is light on CPU, as networks are fast (like apk)
  • skip fsync when it is safe to do so, i.e.:
    • package installations don’t modify system state
    • atomic package installation (e.g. an append-only package store)
    • automatically clean up the package store after crashes

Current landscape

Here’s a table outlining how the various package managers listed on Wikipedia’s list of software package management systems fare:

name        scope   package file format                hooks/triggers
AppImage    apps    image: ISO9660, SquashFS           no
snappy      apps    image: SquashFS                    yes: hooks
FlatPak     apps    archive: OSTree                    no
0install    apps    archive: tar.bz2                   no
nix, guix   distro  archive: nar.{bz2,xz}              activation script
dpkg        distro  archive: tar.{gz,xz,bz2} in ar(1)  yes
rpm         distro  archive: cpio.{bz2,lz,xz}          scriptlets
pacman      distro  archive: tar.xz                    install
slackware   distro  archive: tar.{gz,xz}               yes:
apk         distro  archive: tar.gz                    yes: .post-install
Entropy     distro  archive: tar.bz2                   yes
ipkg, opkg  distro  archive: tar{,.gz}                 yes


As the table shows, there is currently no distribution-scoped package manager which uses images and leaves out hooks and triggers, not even among smaller Linux distributions.

I think that space is really interesting, as it uses a minimal design to achieve significant real-world speed-ups.

I have explored this idea in much more detail, and am happy to talk more about it in my post “Introducing the distri research Linux distribution”.

There are a couple of recent developments going in the same direction:

Appendix B: measurement details



Fedora’s dnf takes almost 30 seconds to fetch and unpack 107 MB.
% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y ack
Fedora Modular 30 - x86_64            4.4 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  3.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           17 MB/s |  19 MB     00:01
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
Install  44 Packages

Total download size: 13 M
Installed size: 42 M
real    0m29.498s
user    0m22.954s
sys     0m1.085s
NixOS’s Nix takes 14s to fetch and unpack 15 MB.
% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i perl5.28.2-ack-2.28'
unpacking channels...
created 2 symlinks in user environment
installing 'perl5.28.2-ack-2.28'
these paths will be fetched (14.91 MiB download, 80.83 MiB unpacked):
copying path '/nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man' from ''...
copying path '/nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27' from ''...
copying path '/nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16' from ''...
copying path '/nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48' from ''...
copying path '/nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53' from ''...
copying path '/nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31' from ''...
copying path '/nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2' from ''...
copying path '/nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28' from ''...
building '/nix/store/q3243sjg91x1m8ipl0sj5gjzpnbgxrqw-user-environment.drv'...
created 56 symlinks in user environment
real    0m 14.02s
user    0m 8.83s
sys     0m 2.69s
Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.
% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y ack-grep)
Get:1 sid InRelease [233 kB]
Get:2 sid/main amd64 Packages [8270 kB]
Fetched 8502 kB in 2s (4764 kB/s)
The following NEW packages will be installed:
  ack ack-grep libfile-next-perl libgdbm-compat4 libgdbm5 libperl5.26 netbase perl perl-modules-5.26
The following packages will be upgraded:
1 upgraded, 9 newly installed, 0 to remove and 60 not upgraded.
Need to get 8238 kB of archives.
After this operation, 42.3 MB of additional disk space will be used.
real    0m9.096s
user    0m2.616s
sys     0m0.441s
Arch Linux’s pacman takes a little over 3s to fetch and unpack 6.5 MB.
% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            132.2 KiB  1033K/s 00:00
 extra          1629.6 KiB  2.95M/s 00:01
 community         4.9 MiB  5.75M/s 00:01
Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
real    0m3.354s
user    0m0.224s
sys     0m0.049s
Alpine’s apk takes only about 1 second to fetch and unpack 10 MB.
% docker run -t -i alpine
/ # time apk add ack
(1/4) Installing perl-file-next (1.16-r0)
(2/4) Installing libbz2 (1.0.6-r7)
(3/4) Installing perl (5.28.2-r1)
(4/4) Installing ack (3.0.0-r0)
Executing busybox-1.30.1-r2.trigger
OK: 44 MiB in 18 packages
real    0m 0.96s
user    0m 0.25s
sys     0m 0.07s



Fedora’s dnf takes over a minute to fetch and unpack 266 MB.
% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y qemu
Fedora Modular 30 - x86_64            3.1 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  2.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           20 MB/s |  19 MB     00:00
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
Install  262 Packages
Upgrade    4 Packages

Total download size: 172 M
real    1m7.877s
user    0m44.237s
sys     0m3.258s
NixOS’s Nix takes 38s to fetch and unpack 262 MB.
% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i qemu-4.0.0'
unpacking channels...
created 2 symlinks in user environment
installing 'qemu-4.0.0'
these paths will be fetched (262.18 MiB download, 1364.54 MiB unpacked):
real    0m 38.49s
user    0m 26.52s
sys     0m 4.43s
Debian’s apt takes 51 seconds to fetch and unpack 159 MB.
% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 sid InRelease [149 kB]
Get:2 sid/main amd64 Packages [8426 kB]
Fetched 8574 kB in 1s (6716 kB/s)
Fetched 151 MB in 2s (64.6 MB/s)
real    0m51.583s
user    0m15.671s
sys     0m3.732s
Arch Linux’s pacman takes 1m2s to fetch and unpack 124 MB.
% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core       132.2 KiB   751K/s 00:00
 extra     1629.6 KiB  3.04M/s 00:01
 community    4.9 MiB  6.16M/s 00:01
Total Download Size:   123.20 MiB
Total Installed Size:  587.84 MiB
real    1m2.475s
user    0m9.272s
sys     0m2.458s
Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.
% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
OK: 78 MiB in 95 packages
real    0m 2.43s
user    0m 0.46s
sys     0m 0.09s

Michael Stapelberg: distri: a Linux distribution to research fast package management [Planet Debian]

Over the last year or so I have worked on a research Linux distribution in my spare time. It’s not a distribution for researchers (like Scientific Linux), but my personal playground project for researching Linux distribution development, i.e. trying out fresh ideas.

This article focuses on the package format and its advantages, but there is more to distri, which I will cover in upcoming blog posts.


I was a Debian Developer for seven years, from 2012 to 2019, but using the distribution often left me frustrated, ultimately resulting in me winding down my Debian work.

I frequently noticed a large gap between the actual speed of an operation (e.g. doing an update) and the speed that seems possible based on back-of-the-envelope calculations. I wrote more about this in my blog post “Package managers are slow”.

To me, this observation means that either there is potential to optimize the package manager itself (e.g. apt), or what the system does is just too complex. While I remember seeing some low-hanging fruit¹, through my work on distri, I wanted to explore whether all the complexity we currently have in Linux distributions such as Debian or Fedora is inherent to the problem space.

I have completed enough of the experiment to conclude that the complexity is not inherent: I can build a Linux distribution for general-enough purposes which is much less complex than existing ones.

① Those were low-hanging fruit from a user perspective. I’m not saying that fixing them is easy in the technical sense; I know too little about apt’s code base to make such a statement.

Key idea: packages are images, not archives

One key idea is to switch from using archives to using images for package contents. Common package managers such as dpkg(1) use tar(1) archives with various compression algorithms.

distri uses SquashFS images, a comparatively simple file system image format that I happen to be familiar with from my work on the gokrazy Raspberry Pi 3 Go platform.

This idea is not novel: AppImage and snappy also use images, but only for individual, self-contained applications. distri however uses images for distribution packages with dependencies. In particular, there is no duplication of shared libraries in distri.

A nice side effect of using read-only image files is that applications are immutable and can hence not be broken by accidental (or malicious!) modification.

Key idea: separate hierarchies

Package contents are made available under a fully-qualified path. E.g., all files provided by package zsh-amd64-5.6.2-3 are available under /ro/zsh-amd64-5.6.2-3. The mountpoint /ro stands for read-only, which is short yet descriptive.

Perhaps surprisingly, building software with custom prefix values of e.g. /ro/zsh-amd64-5.6.2-3 is widely supported, thanks to:

  1. Linux distributions, which build software with prefix set to /usr, whereas FreeBSD (and the autotools default) builds with prefix set to /usr/local.

  2. Enthusiast users in corporate or research environments, who install software into their home directories.

Because using a custom prefix is a common scenario, upstream awareness for prefix-correctness is generally high, and the rarely required patch will be quickly accepted.

Key idea: exchange directories

Software packages often exchange data by placing or locating files in well-known directories. Here are just a few examples:

  • gcc(1) locates the libusb(3) headers via /usr/include
  • man(1) locates the nginx(1) manpage via /usr/share/man.
  • zsh(1) locates executable programs via PATH components such as /bin

In distri, these locations are called exchange directories and are provided via FUSE in /ro.

Exchange directories come in two different flavors:

  1. global. The exchange directory, e.g. /ro/share, provides the union of the share subdirectory of all packages in the package store.
    Global exchange directories are largely used for compatibility, see below.

  2. per-package. Useful for tight coupling: e.g. irssi(1) does not provide any ABI guarantees, so plugins such as irssi-robustirc can declare that they want e.g. /ro/irssi-amd64-1.1.1-1/out/lib/irssi/modules to be a per-package exchange directory and contain files from their lib/irssi/modules.

Search paths sometimes need to be fixed

Programs which use exchange directories sometimes use search paths to access multiple exchange directories. In fact, the examples above were taken from gcc(1)’s INCLUDEPATH, man(1)’s MANPATH and zsh(1)’s PATH. These are prominent ones, but more examples are easy to find: zsh(1) loads completion functions from its FPATH.

Some search path values are derived from --datadir=/ro/share and require no further attention, but others might derive from e.g. --prefix=/ro/zsh-amd64-5.6.2-3/out and need to be pointed to an exchange directory via a specific command line flag.

FHS compatibility

Global exchange directories are used to make distri provide enough of the Filesystem Hierarchy Standard (FHS) that third-party software largely just works. This includes a C development environment.

I successfully ran a few programs from their binary packages such as Google Chrome, Spotify, or Microsoft’s Visual Studio Code.

Fast package manager

I previously wrote about how Linux distribution package managers are too slow.

distri’s package manager is extremely fast. Its main bottleneck is typically the network link, even on high-speed links (I tested with a 100 Gbps link).

Its speed comes largely from an architecture which allows the package manager to do less work. Specifically:

  1. Package images can be added atomically to the package store, so we can safely skip fsync(2). Corruption will be cleaned up automatically, and durability is not important: if an interactive installation is interrupted, the user can just repeat it, as it will be fresh in their mind.

  2. Because all packages are co-installable thanks to separate hierarchies, there are no conflicts at the package store level, and no dependency resolution (an optimization problem requiring SAT solving) is required at all.
    In exchange directories, we resolve conflicts by selecting the package with the highest monotonically increasing distri revision number.

  3. distri proves that we can build a useful Linux distribution entirely without hooks and triggers. Not having to serialize hook execution allows us to download packages into the package store with maximum concurrency.

  4. Because we are using images instead of archives, we do not need to unpack anything. This means installing a package is really just writing its package image and metadata to the package store. Sequential writes are typically the fastest kind of storage usage pattern.
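Point 2’s conflict resolution can be sketched as follows (assuming package names follow the zsh-amd64-5.6.2-3 pattern from above, with the trailing number being the distri revision; this is my illustration, not distri’s actual code):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// revision extracts the trailing distri revision number from a package
// name like zsh-amd64-5.6.2-3 (naming scheme taken from the post).
func revision(pkg string) int {
	n, _ := strconv.Atoi(pkg[strings.LastIndex(pkg, "-")+1:])
	return n
}

// pickHighest resolves an exchange-directory conflict by choosing the
// package with the highest (monotonically increasing) revision.
func pickHighest(pkgs []string) string {
	sort.Slice(pkgs, func(i, j int) bool {
		return revision(pkgs[i]) > revision(pkgs[j])
	})
	return pkgs[0]
}

func main() {
	fmt.Println(pickHighest([]string{
		"zsh-amd64-5.6.2-3",
		"zsh-amd64-5.6.2-7",
		"zsh-amd64-5.6.2-5",
	}))
}
```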

Fast installation also makes other use cases more bearable, such as creating disk images, be it for testing them in qemu(1), booting them on real hardware from a USB drive, or for cloud providers such as Google Cloud.

Fast package builder

Contrary to how distribution package builders are usually implemented, the distri package builder does not actually install any packages into the build environment.

Instead, distri makes available a filtered view of the package store (only declared dependencies are available) at /ro in the build environment.

This means that even for large dependency trees, setting up a build environment happens in a fraction of a second! Such a low latency really makes a difference in how comfortable it is to iterate on distribution packages.

Package stores

In distri, package images are installed from a remote package store into the local system package store /roimg, which backs the /ro mount.

A package store is implemented as a directory of package images and their associated metadata files.

You can easily make available a package store by using distri export.

To provide a mirror for your local network, you can periodically distri update from the package store you want to mirror, and then distri export your local copy. Special tooling (e.g. debmirror in Debian) is not required because distri install is atomic (and update uses install).

Producing derivatives is easy: just add your own packages to a copy of the package store.

The package store is intentionally kept simple to manage and distribute. Its files could be exchanged via peer-to-peer file systems, or synchronized from an offline medium.

distri’s first release

distri works well enough to demonstrate the ideas explained above. I have branched this state into branch jackherer, distri’s first release code name. This way, I can keep experimenting in the distri repository without breaking your installation.

From the branch contents, our autobuilder creates:

  1. disk images, which…

  2. a package repository. Installations can pick up new packages with distri update.

  3. documentation for the release.

The project website can be found at The website is just the README for now, but we can improve that later.

The repository can be found at

Project outlook

Right now, distri is mainly a vehicle for my spare-time Linux distribution research. I don’t recommend anyone use distri for anything but research, and there are no medium-term plans of that changing. At the very least, please contact me before basing anything serious on distri so that we can talk about limitations and expectations.

I expect the distri project to live for as long as I have blog posts to publish, and we’ll see what happens afterwards. Note that this is a hobby for me: I will continue to explore, at my own pace, parts that I find interesting.

My hope is that established distributions might get a useful idea or two from distri.

There’s more to come: subscribe to the distri feed

I don’t want to make this post too long, but there is much more!

Please subscribe to the following URL in your feed reader to get all posts about distri:

Next in my queue are articles about hermetic packages and good package maintainer experience (including declarative packaging).

Feedback or questions?

I’d love to discuss these ideas in case you’re interested!

Please send feedback to the distri mailing list so that everyone can participate!


Man Tried to Burn Down Telecoms Watchdog to Avenge Pirate Site-Blocking [TorrentFreak]

While copyright holders and many governments see site-blocking as a reasoned and measured response to copyright infringement, some people view it as overkill.

People should be able to access whatever content they want without rich corporations deciding what should and should not appear on computer screens, the argument goes.

For former student Pavel Kopylov, blocking of pirate sites in Russia has gone too far. So, to make his displeasure obvious to Roscomnadzor, the government entity responsible for carrying it out, last year he attempted to burn one of its offices down – three times.

On April 2, 2018, reportedly dissatisfied that his favorite torrent tracker had been blocked, Kopylov went to the local offices of Roscomnadzor, smashed a window, and threw a bottle of flammable liquid inside together with a burning match. The attempt was a failure – the fire didn’t ignite and a guard was alerted by the noise.

Almost two weeks later, Kopylov returned for a second try. This time a fire did ensue but it was put out, without causing catastrophic damage. A third attempt, on May 9, 2018, ended in complete failure, with a guard catching the would-be arsonist before he could carry out his plan.

Nevertheless, the prosecutor’s office saw the attacks as an attempt to destroy Roscomnadzor’s property by arson, an offense carrying a penalty of up to five years in prison. The prosecution sought two years but, in the end, had to settle for considerably less.

Interfax reports that a court in the Ulyanovsk region has now sentenced the man for repeatedly trying to burn down Roscomnadzor’s regional office. He received 18 months probation but the prosecution intends to appeal, describing the sentence as excessively lenient.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.


Link [Scripting News]

I don't know if anyone else finds NPM to be unreliable, as I do, over many years. Here's a scenario. I have to make a minor change to a package. So I increment the version in package.json, and npm publish it. Then in the app that's using it, I do an npm update. The update happens. But the app doesn't get the latest version. Something is cached somewhere, because if I look in the node_modules folder, the new version is there. I've resorted to using a lib folder and keeping a copy of the package updated there. But of course I'm reinventing npm in scripts by doing this. Sometimes NPM works as I understand it should, but every so often it goes crazy like this.

Link [Scripting News]

The NYT needs a real public editor, a member of the public who is not a journalist, and has unfettered access to the op-ed page, and can provide perspective for other readers and for the writers and editors.

Towards a better practice of online news-corrections [Cory Doctorow – Boing Boing]

Dan Gillmor and the ASU News Co/Lab: "An honest admission of an error is transparency. It’s not just the right thing to do. It can enhance trust when done right. It can lead to more engagement — by which we mean deeper conversations — among journalists and people in communities."

* Gather and analyze the available research on journalistic corrections. We need to be clear what we know, what we don’t know, and what we need to know. We’re encouraged by a recent meta-analysis of research on correcting misinformation that found promising evidence that corrective messages that provide context alongside a retraction are effective. Look for our research roundup in the relatively near future.

* Build a tool that helps streamline the process of sending corrections (and essential updates) down the media pathways the original stories traveled. The tool will include research-oriented features that encourage experimentation, such as A/B testing to see what language gets the best results. We’ll be open-sourcing this work along the way.

* Consult with researchers and journalism partners. (If your news organization is interested in being part of this, let us know. We’re looking for collaborators that span various modes and styles of journalism. The key requirement is a belief in corrections, and willingness to experiment.)

* Convene a meeting with key researchers, journalists, and technologists who are working in this arena. One goal here is to develop an agenda that, we hope, will help the journalism craft as a whole modernize its attitudes about corrections and updates.

* Publish frequent blog posts to keep interested parties up to date.

McMansion Hell: the Campbell County, Wyoming edition [Cory Doctorow – Boing Boing]

McMansion Hell (previously) continues to tear through America's most affluent ZIP codes with trenchant commentary on realtors' listings for terrible monster homes; in the current edition, critic Kate Wagner visits Campbell County, Wyoming, home to some of the most ill-considered monstrosities in America. As always, she is laugh-aloud funny as she tackles the "Divorce Lawyer house," a 6,000 square foot house from 2002, listed for a mere $700k.

Tech conference changes policy, rescinds requirement for chipped, unremovable bracelets for attendees [Cory Doctorow – Boing Boing]

Update: Justin Reese from Abstractions writes, "policy changes were implemented last night and additional changes were made this morning."

He adds, "The article was also inaccurate from the start by calling the wristbands surveillance devices in the title. They are only used to control access and don't track where users are or have been except in the case where the attendee has given explicit permission in their profiles to share with sponsors and completed a double opt-in by scanning their ID at the sponsor table (the read range is about 2"). Unless we receive a double opt-in, the ids on the wristband are never associated with a user. It is no more a surveillance device than any other conference badge. I'd appreciate a retraction of this inaccuracy and an update regarding our policies."

Reese is correct that the manufacturers design RFID chips to be read from inches; however, that doesn't mean that they can't be read from longer distances (for example, distant, directional antennas can read them at longer distances while they are being energized by a nearby reader). Likewise, the idea that users can't be identified from persistent, anonymous identifiers is incorrect.

It's a pretty good example of how a thin understanding of privacy issues in wireless technologies and statistical analysis can result in selecting authentication systems that expose users to privacy risks.

Sumana Harihareswara (previously) writes, "The Abstractions tech conference (Aug 21-23, in Pittsburgh) doesn't tell attendees this before they buy a ticket, but attendance requires you wear their wristband with an embedded tracking chip -- and that you don't take it off at night or in the shower till the conference ends. Organizers haven't addressed privacy, health, physical safety, and inclusivity concerns that registered attendees raised privately earlier this month, so Jason Owen is blogging about the issue in hopes of getting them to modify their policy."


Today in GPF History for Saturday, August 17, 2019 [General Protection Fault: The Comic Strip]

Captured by bloodthirsty she-ninjas dressed as French maids, James Baud attempts smooth talk


Link [Scripting News]

I groan when passwords are required to have at least one uppercase letter, one lowercase, a number and/or a special character. I know as a matter of math that these requirements don't make passwords better. Does it help people who type asdf as their password come up with more random strings? Also I hate sites that make me create a new password every so often. I can manage that myself. I suppose maybe they're saying hey we were hacked recently and are requiring everyone to change their passwords instead of making a public statement.
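The math behind that complaint is easy to check: length beats per-character complexity. A minimal sketch in Python (illustrative numbers, not tied to any particular site's policy) comparing a short "complex" password against a longer all-lowercase one, assuming each character is chosen uniformly at random:

```python
import math

def entropy_bits(charset_size: int, length: int) -> float:
    """Bits of entropy for a password of uniformly random characters."""
    return length * math.log2(charset_size)

# 8 characters drawn from all 94 printable ASCII characters
# (the "one upper, one lower, one digit, one symbol" style policy):
complex_short = entropy_bits(94, 8)    # ~52 bits

# 16 characters of plain lowercase letters, no policy at all:
simple_long = entropy_bits(26, 16)     # ~75 bits

print(f"complex 8-char:    {complex_short:.1f} bits")
print(f"lowercase 16-char: {simple_long:.1f} bits")
```

The longer, "weaker-looking" password wins comfortably, and composition rules do nothing for the user who was going to type asdf anyway.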

Call for boycott after the actor who played Mulan in the reboot supports Hong Kong's brutal police crackdown on pro-democracy protestors [Cory Doctorow – Boing Boing]

Crystal Liu is the actor who plays Mulan in Disney's live-action reboot; on Weibo, she has been publishing messages in support of the Hong Kong police, who have been brutally attacking pro-democracy protesters and tacitly collaborating with organized crime gangs.

Her Weibo messages included "I support Hong Kong's police, you can beat me up now" and "What a shame for Hong Kong." She's been using the hashtag "#IAlsoSupportTheHongKongPolice." In China, state media outlets have made memes out of Liu's statements and are circulating them widely.

In response, supporters of the Hong Kong protests have called for a boycott of the Mulan reboot, and #BoycottMulan is trending on multiple platforms.

The boycott was initiated by users of Lihkg, a Reddit-style online discussion forum in Hong Kong that has somewhat served as information central for the leader-less protest movement, wielding notable mobilization capability and members/readers across all ages and walks of life — including local police monitoring the posts to gather intelligence. Members of the Lihkg community have organized local protests and demonstrations and launched GoFundMe operations for overseas promotions of the movement that have raised millions.

While the Hong Kong box office is tiny compared with the mammoth Chinese one, the world’s second largest, the boycott's organizers appear to be hoping for international support for their campaign, calling for worldwide filmgoers “who support freedom and democracy” to join in.

The boycotters’ complaints were not only directed at Liu, but also at Disney. Some users of Lihkg expressed their dismay with the entertainment conglomerate for hiring someone who they believed "condones violence" and "suppresses people who are fighting for democracy and freedom but curries favor from powerful authorities," writing that “the image of Disney will be tarnished” and “Disney, you can do better."

Hong Kong Protestors Call for Disney Boycott After 'Mulan' Star Voices Support for Police Crackdown [Karen Chu and Patrick Brzeski/Hollywood Reporter]

The case for allowing children to vote [Cory Doctorow – Boing Boing]

Writing on Crooked Timber, John Quiggin (previously) responds to the epidemic of elderly reactionaries piling vitriol and violent rhetoric on the child activist Greta Thunberg and asks, why not let kids vote?

Quiggin points out that all the arguments against letting kids vote are also arguments for preventing older adults from voting: "Over 60 voters are, on average, poorly educated (the school leaving age in Australia was 15 when they went through and I assume similar in most places), and more likely to hold a wide range of false beliefs (notably in relation to climate change)."

Older voters delivered Brexit, Trump, Boris Johnson, Pauline Hanson, and "respond to unrealistic appeals to nostalgia, wanting to Make America Great Again, and restore the glories of the British Empire, while dismissing concerns about the future."

He says all of this isn't an argument for banning older voters, but for including younger people in elections. He points out that one of the main arguments against this -- that enfranchising teens will merely give an extra vote to their parents -- is the same argument that was deployed against giving women votes (that it would end up being an extra vote for their husbands and fathers).

Of course, we can’t do that kind of thing in a democracy. That’s why we should act consistently with the core democratic principle that those affected by a decision should have a say in making it, unless they are absolutely disqualified in some way. In my view, that makes an open-and-shut case for lowering the voting age to 16.

But where should we stop? If we set the bar at the level of emotional maturity and intelligence shown by say, the crowd at a Trump rally, most 12 year olds would clear it with ease.

Give children the vote [John Quiggin/Crooked Timber]

Washington state rep Matt Shea secretly backed and organized terror-training camps to create child soldiers to fight in a race war [Cory Doctorow – Boing Boing]

Leaked emails reveal that Washington state rep Matt Shea has "close ties" with Team Rugged, a white nationalist/Christian fundamentalist terror organization that trains children, teens and young men to fight in an apocalyptic race-war against Muslims and communists.

In a leaked email, Team Rugged leader Patrick Caughran describes its mission as "[providing] patriotic and biblical training on war for young men...There will be scenarios where every participant will have to fight against one of the most barbaric enemies that are invading our country, Muslims terrorists." The training included knife, pistol and rifle combat.

Team Rugged's ideology comes from the white nationalist preacher John Weaver, whose writings glorify the Confederacy and slavery and condemn "interracial" marriage.

Shea had already openly supported Team Rugged and appeared with child soldiers the group had trained in promotional videos, praising their training, saying, "I love the fact that you guys looked like almost an acrobatic special-forces team out there." Shea had also acknowledged his authorship of "Biblical Basis for War," a manifesto calling for a "holy war" (he denied that it was a manifesto and claimed instead that it was a "sermon" about "war in the Old Testament").

However, leaked emails and messages from Team Rugged's Facebook group have revealed that Shea's ties with the group run deeper than suspected. The state House of Reps has hired private investigators to produce a report on Shea's promotion of terrorism.

Spokane County Sheriff Ozzie Knezovich, who has urged fellow Republicans to denounce Shea as an extremist, compared Team Rugged to the Hitler Youth of Nazi Germany.

“Any radicalization of youth in such a manner would be very comparable,” Knezovich said in a text message Wednesday.

Shea, who rarely speaks to the media, had not publicly responded to the reporting on Team Rugged as of Wednesday evening.

On Facebook Wednesday, he shared a blog post about Washington’s new Democratic House speaker, writing: “THE CURRENT DIRECTION OF WASHINGTON IS DECIDEDLY ANTI-CHRISTIAN.”

Leaked emails show Washington state Rep. Matt Shea endorsed training children to fight in holy war [Chad Sokol/Seattle Times]


Responsibility and the power of ‘could have’ [Seth's Blog]

The us/them mindset of most corporate customer service is simple:

  1. When you can, get it over with.
  2. If at all possible, evade responsibility.

Which means that when things go wrong, you’ll likely encounter a legalistic mentality that begins and ends with, “it’s out of our control.”

There’s an alternative.

It begins with understanding the economics of loyalty. Saving a customer is ten times more efficient than finding a new one. If it costs an airline $1,000 of marketing and route development to acquire a first class business traveler, it’s worth at least $10,000 in customer service to keep one. And that means that an extra ten minutes on the phone clocks in at a high value indeed.

And it continues with a simple tactic: Instead of defining the minimal legal requirement, outline the maximum possible action you could have taken.

“You’re right ma’am, that was a terrible situation. And we could have alerted you in advance that the plane was late, and we could have trained the flight attendants to be more aware of situations like this and we could have been significantly more responsive when we saw that the whole thing was going sideways. That’s incredibly frustrating–you’re right.”

Because it’s all true. You could have done all of these things. And it’s true, it was frustrating. If it wasn’t, she wouldn’t have called.

And then, after learning all the things you could have done, send the ideas upstream. It’s free advice, but it’s good advice.

Because the race doesn’t go to organizations that do the minimal legal requirement. The race goes to those that figure out what they could do. And do it.


[1075] Fitting Form [Twokinds]

Comic for August 17, 2019


Cyril Brulebois: Sending HTML messages with Net::XMPP (Perl) [Planet Debian]

Executive summary

It’s perfectly possible! Jump to the HTML demo!

Longer version

This started with a very simple need: wanting to improve the notifications I’m receiving from various sources. Those include:

  • changes or failures reported during Puppet runs on my own infrastructure, and at a customer’s;
  • build failures for the Debian Installer;
  • changes in banking amounts;
  • and lately: build status for jobs in a customer’s Jenkins instance.

I’ve been using plaintext notifications for a number of years but I decided to try and pimp them a little by adding some colors.

While the XMPP-sending details are usually hidden in a local module, here’s a small self-contained example: connecting to a server, sending credentials, and then sending a message to someone else. Of course, one might want to tweak the Configuration section before trying to run this script…

use strict;
use warnings;

use Net::XMPP;

# Configuration:
my $hostname = '';
my $username = 'bot';
my $password = 'call-me-alan';
my $resource = 'demo';
my $recipient = '';

# Open connection:
my $con = Net::XMPP::Client->new();
my $status = $con->Connect(
    hostname       => $hostname,
    connectiontype => 'tcpip',
    tls            => 1,
    ssl_ca_path    => '/etc/ssl/certs',
);
die 'XMPP connection failed'
    if ! defined($status);

# Log in:
my @result = $con->AuthSend(
    hostname => $hostname,
    username => $username,
    password => $password,
    resource => $resource,
);
die 'XMPP authentication failed'
    if $result[0] ne 'ok';

# Send plaintext message:
my $msg = 'Hello, World!';
my $res = $con->MessageSend(
    to   => $recipient,
    body => $msg,
    type => 'chat',
);
die('ERROR: XMPP message failed')
    if $res != 0;

For reference, here’s what the XML message looks like in Gajim’s XML console (on the receiving end):

<message type='chat' to='' from=''>
  <body>Hello, World!</body>
</message>

Issues start when one tries to send some HTML message, e.g. with the last part changed to:

# Send HTML message (naive attempt):
my $msg = 'This is a <b>failing</b> test';
my $res = $con->MessageSend(
    to   => $recipient,
    body => $msg,
    type => 'chat',
);

as that leads to the following message:

<message type='chat' to='' from=''>
  <body>This is a &lt;b&gt;failing&lt;/b&gt; test</body>
</message>

So tags are getting encoded and one gets to see the uninterpreted “HTML code”.

Trying various things to embed that inside <body> and <html> tags, with or without namespaces, led nowhere.

Looking at a message sent from Gajim to Gajim (so that I could craft an HTML message myself and inspect it), I’ve noticed it goes this way (edited to concentrate on important parts):

<message xmlns="jabber:client" to="" type="chat">
  <body>Hello, World!</body>
  <html xmlns="http://jabber.org/protocol/xhtml-im">
    <body xmlns="http://www.w3.org/1999/xhtml">
      <p>Hello, <strong>World</strong>!</p>
    </body>
  </html>
</message>

Two takeaways here:

  • The message is sent both in plaintext and in HTML. It seems Gajim archives the plaintext version, as opening the history/logs only shows the textual version.

  • The fact that the HTML message is under a different path (/message/html as opposed to /message/body) means that one cannot use the MessageSend method to send HTML messages…

This was verified by checking the documentation and code of the Net::XMPP::Message module. It comes with various getters and setters for attributes. Those are then automatically collected when the message is serialized into XML (through the GetXML() method). Trying to add handling for a new HTML attribute would mean being extra careful, as it would need to be treated with $type = 'raw'.

Oh, wait a minute! While using git grep in the sources, looking for that raw type thing, I’ve discovered what sounded promising: an InsertRawXML() method, that doesn’t appear anywhere in either the code or the documentation of the Net::XMPP::Message module.

It’s available, though! Because Net::XMPP::Message is derived from Net::XMPP::Stanza:

use Net::XMPP::Stanza;
use base qw( Net::XMPP::Stanza );

which then in turn comes with this function:

# InsertRawXML - puts the specified string onto the list for raw XML to be
#                included in the packet.

Let’s put that aside for a moment and get back to the MessageSend() method. It wants parameters that can be passed to the Net::XMPP::Message SetMessage() method, and here is its entire code:

# MessageSend - Takes the same hash that Net::XMPP::Message->SetMessage
#               takes and sends the message to the server.
sub MessageSend
{
    my $self = shift;

    my $mess = $self->_message(@_);

    $self->Send($mess);
}

The first assignment is basically equivalent to my $mess = Net::XMPP::Message->new();, so what this function does is: creating a Net::XMPP::Message for us, passing all parameters there, and handing the resulting object over to the Send() method. All in all, that’s merely a proxy.

HTML demo

The question becomes: what if we were to create that object ourselves, then tweaking it a little, and then passing it directly to Send(), instead of using the slightly limited MessageSend()? Let’s see what the rewritten sending part would look like:

# Send HTML message:
my $text = 'This is a working test';
my $html = 'This is a <b>working</b> test';

my $message = Net::XMPP::Message->new();
$message->SetMessage(
    to   => $recipient,
    body => $text,
    type => 'chat',
);
$message->InsertRawXML("<html><body>$html</body></html>");
my $res = $con->Send($message);

And tada!

<message type='chat' to='' from=''>
  <body>This is a working test</body>
  <html>
    <body>This is a <b>working</b> test</body>
  </html>
</message>

I’m absolutely no expert when it comes to XMPP standards, and one might need/want to set some more metadata like xmlns but I’m happy enough with this solution that I thought I’d share it as is. ;)
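For the curious, the xmlns metadata the author alludes to is defined by XEP-0071 (XHTML-IM). A sketch of what the same InsertRawXML() call might look like with the standard namespaces spelled out explicitly (untested, but it mirrors what Gajim itself sends):

# Same raw-XML insertion as above, with the XHTML-IM namespaces
# (per XEP-0071) declared on the html and body elements:
$message->InsertRawXML(
      '<html xmlns="http://jabber.org/protocol/xhtml-im">'
    . '<body xmlns="http://www.w3.org/1999/xhtml">'
    . $html
    . '</body></html>'
);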

