Saturday, 17 March


Vodafone Appeals Decision Forcing it to Block Pirate Streaming Site Kinox [TorrentFreak]

Streaming site Kinox has proven hugely problematic for German authorities and international rightsholders for many years.

Last year, following a three-year manhunt, one of the site’s alleged operators was detained in Kosovo. Despite this and other actions, the site remains online.

Given the profile of the platform and its popularity in Germany, it came as no surprise when Kinox became the guinea pig for site-blocking in the country. Last month following a complaint from local film production and distribution company Constantin Film, a district court in Munich handed down a provisional injunction against Internet provider Vodafone.

In common with many similar cases across the EU, the Court cited a 2017 ruling from the European Court of Justice which found that local authorities can indeed order blockades of copyright-infringing sites. The Court ordered Vodafone to prevent its subscribers from accessing the site, and shortly afterwards the provider complied, though not willingly, it seems.

According to local news outlet Golem, last week Vodafone filed an appeal arguing that there is no legal basis in Germany for ordering the blockade.

“As an access provider, Vodafone provides only neutral access to the Internet, and we believe that under current law, Vodafone cannot be required to curb copyright infringement on the Internet,” a Vodafone spokesperson told the publication.

The ISP says that not only does the blocking injunction impact its business operations and network infrastructure, it also violates the rights of its customers. Vodafone believes that blocking measures can only be put in place with an explicit legal basis and argues that no such basis exists under German law.

Noting that blockades are easily bypassed by determined users, the ISP says that such measures can also block lots of legal content, making the whole process ineffective.

“[I]nternet blocking generally runs the risk of blocking non-infringing content, so we do not see it as an effective way to make accessing illegal offers more difficult,” Vodafone’s spokesperson said.

Indeed, it appears that the Kinox blockade is a simple DNS-only effort, which means that people can bypass it by simply changing to an alternative DNS provider such as Google DNS or OpenDNS.
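
To illustrate the point (a minimal sketch, assuming a Unix-like machine with dig installed; blocked-site.example is a placeholder rather than the site's real domain name), querying a public resolver directly instead of the ISP's own DNS servers sidesteps a DNS-only block entirely:

$ dig @8.8.8.8 blocked-site.example +short   # ask Google Public DNS instead of the ISP's resolver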

Given all of the above, Vodafone is demanding clarification of the earlier decision from a higher court. Whether or not the final decision will go in the ISP’s favor isn’t clear but there is plenty of case law at the European level that suggests the balance of probabilities lies with Constantin Film.

When asked to balance consumer rights versus copyrights, courts have tended to side with the latter in recent years.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.


"What does this remind you of?" [Seth Godin's Blog on marketing, tribes and respect]

That's a much more useful way to get feedback than asking if we like it.

We make first impressions and long-term judgments based on the smallest of clues. We scan before we dive in, we see the surface before we experience the substance.

And while the emotions that are created by your work aren't exactly like something else, they rhyme.

It could be your business model, your haircut or the vibrato on your guitar.

"What does this remind you of" opens the door for useful conversations that you can actually do something about. Yes, be original, but no, it's not helpful to be so original that we have no idea what you're doing.



Elana Hashman: Stop streaming music from YouTube with this one weird trick [Planet Debian]

Because I grew up on the internet long before the average connection speed made music streaming services viable, streaming has always struck me as wasteful. And I know that doesn't make much sense—it's not like there's a limited amount of bandwidth to go around! But if I'm going to listen to the same audio file five times, why not just download it once and listen to it forever? Particularly if I want to listen to it while airborne and avoid the horrors of plane wifi. Or if I want to remove NSFW graphics that seem to frequently accompany mixes I enjoy.

youtube-dl to the rescue

Luckily, at least as far as YouTube audio is concerned, there is plenty of free software available to help with this! I like to fetch and process music from YouTube using youtube-dl and ffmpeg. Both are packaged and available in Debian if you like to apt install things:
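
$ sudo apt install youtube-dl ffmpeg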

Saving audio files from YouTube

Well, let's suppose you want to download some eurobeat. youtube-dl can help. The -x flag tells youtube-dl to skip downloading video and to only fetch audio.

$ youtube-dl -x "https://www.youtube.com/watch?v=8B4guKLlbVU"
[youtube] 8B4guKLlbVU: Downloading webpage
[youtube] 8B4guKLlbVU: Downloading video info webpage
[youtube] 8B4guKLlbVU: Extracting video information
[youtube] 8B4guKLlbVU: Downloading js player vflGUPF-i
[download] Destination: SUPER EUROBEAT MIX-8B4guKLlbVU.webm
[download] 100% of 60.68MiB in 20:57
Deleting original file SUPER EUROBEAT MIX-8B4guKLlbVU.webm (pass -k to keep)

YouTube sometimes throttles connections to approximate real-time buffer rates, so the download could take a while. If you need to interrupt the download for some reason, you can use SIGINT (Ctrl+C) to stop it. If you run youtube-dl again, it's smart enough to resume the download where you left off.

Once the download is complete, there's not much more to do. It will be saved with the appropriate file extension so you can determine what audio codec it uses. You might want to rename the file:

$ mv SUPER\ EUROBEAT\ MIX-8B4guKLlbVU.ogg super_eurobeat_mix.ogg

Now we can enjoy many plays of our super_eurobeat_mix.ogg file! 🚗🎶

Re-encoding audio to another format

Suppose that I have a really old MP3 player I'd like to put this file on, and it doesn't support the OGG Vorbis format. That's not a problem; we can use ffmpeg to re-encode the audio.

The -i flag to ffmpeg specifies an input file. -acodec mp3 says that we want to use the mp3 codec to re-encode our audio. The last positional argument, super_eurobeat_mix.mp3, is the name of the file we want to output.

$ ffmpeg -i super_eurobeat_mix.ogg -acodec mp3 super_eurobeat_mix.mp3
ffmpeg version 3.4.2-1+b1 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 7 (Debian 7.3.0-4)
  configuration: --prefix=/usr --extra-version=1+b1 --toolchain=hardened
 --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu
 --enable-gpl --disable-stripping --enable-avresample --enable-avisynth
 --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray
 --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite
 --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme
 --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg
 --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband
 --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr
 --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame
 --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp
 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq
 --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2
 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint
 --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Input #0, ogg, from 'super_eurobeat_mix.ogg':
  Duration: 01:06:35.57, start: 0.000000, bitrate: 124 kb/s
    Stream #0:0(eng): Audio: vorbis, 44100 Hz, stereo, fltp, 128 kb/s
      LANGUAGE        : eng
      ENCODER         : Lavf57.83.100
Stream mapping:
  Stream #0:0 -> #0:0 (vorbis (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
Output #0, mp3, to 'super_eurobeat_mix.mp3':
    TSSE            : Lavf57.83.100
    Stream #0:0(eng): Audio: mp3 (libmp3lame), 44100 Hz, stereo, fltp
      LANGUAGE        : eng
      encoder         : Lavc57.107.100 libmp3lame
size=   62432kB time=01:06:35.58 bitrate= 128.0kbits/s speed=22.4x    
video:0kB audio:62431kB subtitle:0kB other streams:0kB global headers:0kB
muxing overhead: 0.000396%

Voila! We now have a super_eurobeat_mix.mp3 file we can copy to our janky old MP3 player.
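
If the default MP3 bitrate isn't to your taste, ffmpeg also accepts an explicit audio bitrate via -b:a; the 192k below is just an illustrative value, not something the original recipe requires:

$ ffmpeg -i super_eurobeat_mix.ogg -acodec mp3 -b:a 192k super_eurobeat_mix.mp3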

Extracting audio from an existing video file

Sometimes I accidentally forget to pass the -x flag to youtube-dl, and get a video file instead of an audio track. Oops.

But that's okay! Extracting the audio track from the video file with ffmpeg is just a couple of commands away.

First, we should determine the encoding of the audio track. The combined video file is a .webm file, but we can peek inside using ffmpeg.

$ ffmpeg -i SUPER\ EUROBEAT\ MIX-8B4guKLlbVU.webm 
ffmpeg version 3.4.2-1+b1 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 7 (Debian 7.3.0-4)
  configuration: --prefix=/usr --extra-version=1+b1 --toolchain=hardened
 --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu
 --enable-gpl --disable-stripping --enable-avresample --enable-avisynth
 --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray
 --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite
 --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme
 --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg
 --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband
 --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr
 --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame
 --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp
 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq
 --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2
 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint
 --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Input #0, matroska,webm, from 'SUPER EUROBEAT MIX-8B4guKLlbVU.webm':
    ENCODER         : Lavf57.83.100
  Duration: 01:06:35.57, start: 0.000000, bitrate: 490 kb/s
    Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv,
bt709/unknown/unknown), 1920x1080, SAR 1:1 DAR 16:9, 29.97 fps, 29.97 tbr, 1k
tbn, 1k tbc (default)
      DURATION        : 01:06:35.558000000
    Stream #0:1(eng): Audio: vorbis, 44100 Hz, stereo, fltp (default)
      DURATION        : 01:06:35.572000000
At least one output file must be specified

The important line is here:

Stream #0:1(eng): Audio: vorbis, 44100 Hz, stereo, fltp (default)

The audio uses the "vorbis" codec. Hence, we should probably use the .ogg extension for our output file, to ensure we specify a compatible audio container format. If it were mp3-encoded, we'd use .mp3, and so on.
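
If you'd rather not scan the whole banner by eye, ffprobe (installed alongside ffmpeg) can report just the audio codec name; this is an optional aside rather than part of the original recipe, and here it would simply print vorbis:

$ ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of default=noprint_wrappers=1:nokey=1 SUPER\ EUROBEAT\ MIX-8B4guKLlbVU.webm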

Let's extract the audio track from our video file. We need a couple new flags for ffmpeg. The first is -vn, which tells ffmpeg to not include a video track. The second is -acodec copy, which says we want to copy the existing audio track, rather than re-encode it.

$ ffmpeg -i SUPER\ EUROBEAT\ MIX-8B4guKLlbVU.webm -vn -acodec copy super_eurobeat_mix.ogg
ffmpeg version 3.4.2-1+b1 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 7 (Debian 7.3.0-4)
  configuration: --prefix=/usr --extra-version=1+b1 --toolchain=hardened
 --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu
 --enable-gpl --disable-stripping --enable-avresample --enable-avisynth
 --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray
 --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite
 --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme
 --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg
 --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband
 --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr
 --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame
 --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp
 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq
 --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2
 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint
 --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 78.100 / 55. 78.100
  libavcodec     57.107.100 / 57.107.100
  libavformat    57. 83.100 / 57. 83.100
  libavdevice    57. 10.100 / 57. 10.100
  libavfilter     6.107.100 /  6.107.100
  libavresample   3.  7.  0 /  3.  7.  0
  libswscale      4.  8.100 /  4.  8.100
  libswresample   2.  9.100 /  2.  9.100
  libpostproc    54.  7.100 / 54.  7.100
Input #0, matroska,webm, from 'SUPER EUROBEAT MIX-8B4guKLlbVU.webm':
    ENCODER         : Lavf57.83.100
  Duration: 01:06:35.57, start: 0.000000, bitrate: 490 kb/s
    Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv,
bt709/unknown/unknown), 1920x1080, SAR 1:1 DAR 16:9, 29.97 fps, 29.97 tbr, 1k
tbn, 1k tbc (default)
      DURATION        : 01:06:35.558000000
    Stream #0:1(eng): Audio: vorbis, 44100 Hz, stereo, fltp (default)
      DURATION        : 01:06:35.572000000
Output #0, ogg, to 'super_eurobeat_mix.ogg':
    encoder         : Lavf57.83.100
    Stream #0:0(eng): Audio: vorbis, 44100 Hz, stereo, fltp (default)
      DURATION        : 01:06:35.572000000
      encoder         : Lavf57.83.100
Stream mapping:
  Stream #0:1 -> #0:0 (copy)
Press [q] to stop, [?] for help
size=   60863kB time=01:06:35.54 bitrate= 124.8kbits/s speed=1.03e+03x    
video:0kB audio:60330kB subtitle:0kB other streams:0kB global headers:4kB
muxing overhead: 0.884396%

We've successfully extracted super_eurobeat_mix.ogg from our video file! Go us!

Happy listening, and drive safely while you eurobeat. 🎵


Microsoft to force Mail links to open in Edge [OSNews]

For Windows Insiders in the Skip Ahead ring, we will begin testing a change where links clicked on within the Windows Mail app will open in Microsoft Edge, which provides the best, most secure and consistent experience on Windows 10 and across your devices. With built-in features for reading, note-taking, Cortana integration, and easy access to services such as SharePoint and OneDrive, Microsoft Edge enables you to be more productive, organized and creative without sacrificing your battery life or security. I'm one of those weird people who actually really like the default Windows 10 Mail application, but if this absolutely desperate, user-hostile move - which ignores any default browser setting - makes it into any definitive Windows 10 release, I won't be able to use it anymore. As always, we look forward to feedback from our WIP community. Oh you'll get something to look forward to alright.

iOS 11 bugs are so common they now appear in Apple ads [OSNews]

If you blink during Apple’s latest iPhone ad, you might miss a weird little animation bug. It’s right at the end of a slickly produced commercial, where the text from an iMessage escapes the animated bubble it’s supposed to stay inside. It’s a minor issue and easy to brush off, but the fact it’s captured in such a high profile ad just further highlights Apple’s many bugs in iOS 11. The fact Apple's marketing department signed off on this ad with such a bug in it is baffling.

Google renames Android Wear to Wear OS [OSNews]

As our technology and partnerships have evolved, so have our users. In 2017, one out of three new Android Wear watch owners also used an iPhone. So as the watch industry gears up for another Baselworld next week, we’re announcing a new name that better reflects our technology, vision, and most important of all - the people who wear our watches. We’re now Wear OS by Google, a wearables operating system for everyone. If a company changes the name of one of its operating systems, but nobody cares - has the name really been changed?


Louis-Philippe Véronneau: Minimal SQL privileges [Planet Debian]

Lately, I have been working pretty hard on a paper I have to hand out at the end of my university semester for the machine learning class I'm taking. I will probably do a long blog post about this paper in May if it turns out to be good, but for the time being I have some time to kill while my latest boosting model runs.

So let's talk about something I've started doing lately: creating issues on FOSS webapp project trackers when their documentation tells people to grant all privileges to the database user.

You know, something like:

GRANT ALL PRIVILEGES ON database.* TO 'username'@'localhost' IDENTIFIED BY 'password';

I'd like to say I've never done this and always took time to specify a restricted subset of privileges on my servers, but I'd be lying. To be honest, I woke up last Christmas when someone told me it was an insecure practice.

When you take a few seconds to think about it, there are quite a few database level SQL privileges and I don't see why I should grant them all to a webapp if it only needs a few of them.

So I started asking projects to do something about this and update their documentation with a minimal set of SQL privileges needed to run correctly. The Drupal project does this quite well and tells you to:
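
    -- roughly what Drupal's MySQL installation notes recommend (exact wording may differ):
    GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES
      ON databasename.* TO 'username'@'localhost' IDENTIFIED BY 'password';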


When I first reached out to the upstream devs of these projects, I was sure I'd be seen as some zealous nuisance. To my surprise, everyone thought it was a good idea and fixed it.

Shout out to Nextcloud, Mattermost and KanBoard for taking this seriously!

If you are using a webapp and the documentation states you should grant all privileges to the database user, here is a template you can use to create an issue and ask them to change it:


The installation documentation says that you should grant all SQL privileges to
the database user:

    GRANT ALL PRIVILEGES ON database.* TO 'username'@'localhost' IDENTIFIED BY 'password';

I was wondering what the true minimal SQL privileges are that WEBAPP needs to run properly.

I don't normally like to grant all privileges for security reasons and would
really appreciate it if you could publish a minimal set of required SQL database
privileges in the installation documentation.

I guess I'm expecting something like [Drupal][drupal] does.


At the database level, [MySQL/MariaDB][mariadb] supports:

* `SELECT`
* `INSERT`
* `UPDATE`
* `DELETE`
* `CREATE`
* `DROP`
* `INDEX`
* `ALTER`
* `CREATE TEMPORARY TABLES`
* `LOCK TABLES`
* `EXECUTE`
* `CREATE VIEW`
* `SHOW VIEW`
* `CREATE ROUTINE`
* `ALTER ROUTINE`
* `EVENT`
* `TRIGGER`

Does WEBAPP really need database level privileges like EVENT or CREATE ROUTINE?
If not, why should I grant them?

Thanks for your work on WEBAPP!


Friday, 16 March


Page 23 [Flipside]

Page 23 is done.


Dirk Eddelbuettel: RcppClassicExamples 0.1.2 [Planet Debian]

Per a CRAN email sent to 300+ maintainers, this package (just like many others) was asked to please register its S3 method. So we did, and also overhauled a few other packaging standards which changed since the previous uploads in December of 2012 (!!).

No new code or features. Full details below. And as a reminder, don't use the old RcppClassic -- use Rcpp instead.

Changes in version 0.1.2 (2018-03-15)

  • Registered S3 print method [per CRAN request]

  • Added src/init.c with registration and updated all .Call usages taking advantage of it

  • Updated http references to https

  • Updated DESCRIPTION conventions

Thanks to CRANberries, you can also look at a diff to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Dirk Eddelbuettel: RDieHarder 0.1.4 [Planet Debian]

Per a CRAN email sent to 300+ maintainers, this package (just like many others) was asked to please register its S3 method. So we did, and also overhauled a few other packaging standards which changed since the last upload in 2014.

No NEWS.Rd file to take a summary from, but the top of the ChangeLog has details.

Thanks to CRANberries, you can also look at a diff to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


MPAA Brands 123Movies as the World’s Most Popular Illegal Site [TorrentFreak]

With millions of visitors per day, pirate streaming site 123movies, also known as GoMovies, is a force to be reckoned with.

The Motion Picture Association of America (MPAA) is fully aware of this and previously alerted the US Trade Representative about this “notorious market.”

However, since the site is not operating from the US, Hollywood’s industry group is also reaching out to 123movies’ alleged home turf, Vietnam. Following in the footsteps of the US ambassador, the MPAA seeks assistance from local authorities.

The MPAA is currently in Vietnam where it’s working with the Office of the Police Investigation Agency to combat pirate sites. According to the MPAA’s Executive Vice President & Chief of Global Content Protection, Jan van Voorn, 123movies is one of the prime targets.

“Right now, the most popular illegal site in the world, (at this point), is operated from Vietnam, and has 98 million visitors a month,” Van Voorn said, quoted by VNExpress.

“There are more services like this – sites that are not helpful for local legitimate businesses,” he adds.

The MPAA hopes that the Vietnamese authorities will step in to take these pirate sites offline, so that legal alternatives can grow. In addition, it stresses that the public should be properly educated, to change their views on movie piracy.

While it’s clear that 123movies is a threat to Hollywood, there are bigger fish out there.

The 98 million number MPAA mentions appears to come from SimilarWeb’s January estimate. While this is a lot of traffic indeed, it’s not the largest pirate site. The Pirate Bay, for example, had an estimated 282 million visitors during the same period.

TorrentFreak asked the MPAA to confirm the claim but at the time of writing, we have yet to hear back. Perhaps Van Voorn was referring to streaming sites specifically, which would make more sense.

In any case, it’s clear that Hollywood is concerned about 123movies and similar sites and will do everything in its power to get them offline.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.


News Post: The Hierarchy [Penny Arcade]

Tycho: YouTube is Broadcast Television and Twitch is Cable.  Does that sound about right?  Am I close?  I feel like that happened really fast.  YouTube’s true audience is Unilever or whoever the fuck.  Multinational Hydrae bound in circles of salt by public markets.  Twitch is the warping House of Chaos that dances in fire.  You have no idea what kind of angel or devil is going to splup out of some inky hole and currently - currently - that is a market advantage. Having chiddlers is like having an ambulatory weathervane in your house that walks around and…


Washington State lawmaker thinks the courts will uphold state Net Neutrality law because the FCC abdicated its duty [Boing Boing]

Washington State was the first to pass a true Net Neutrality law that restored all the public protections the FCC withdrew when it killed Net Neutrality late last year; the move is symbolically awesome but legally fraught, seeking to redefine the line where the FCC's authority stops and the states' authorities start. (more…)


Big Telco hates "regulation," but they love their billions in government handouts [Boing Boing]

When it comes to killing Net Neutrality, Big Telco's major talking point is that "government regulation" has no place in telecoms; but the reality is that the nation's telecommunications providers are the recipients of regulatory gifts that run to $5B/year, and are expected to do very little in return for this corporate welfare. (more…)

Austin bombings: literal American carnage meets with Trumpian indifference [Boing Boing]

Remember when Donald Trump campaigned on a promise to make Americans safe, and promised an end to "American carnage" at his inauguration? Yeah, neither does he. (more…)


Podcast: The Man Who Sold the Moon, Part 06 [Cory Doctorow's]

Here’s part six of my reading (MP3) (part five, part four, part three, part two, part one) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.


Friday Squid Blogging: New Squid Species Discovered in Australia [Schneier on Security]

A new species of pygmy squid was discovered in Western Australia. It's pretty cute.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.


Link [Scripting News]

My brain is somewhat adjusting to the new vision situation. I have five old pairs of glasses to try out and think I have settled on one that's best for computer work. I did a podcast (below) because that doesn't use my eyes for much. My brain really wants to do something creative. Let's see if I can figure it out. ;-)

Link [Scripting News]

A podcast about Linux Journal, Doc Searls and why he became its new editor-in-chief. Doc, one of the Cluetrain authors, has a vision for advertising that works. If it works, it will not only revolutionize advertising, it will also help get software development back on track. His project really has potential.

Today in GPF History for Friday, March 16, 2018 [General Protection Fault: The Comic Strip]

Trudy relates to Ki how she took revenge for Trent's double-cross back in college...


Security updates for Friday []

Security updates have been issued by CentOS (firefox), Debian (clamav and firefox-esr), openSUSE (Chromium and kernel-firmware), Oracle (firefox), Red Hat (ceph), Scientific Linux (firefox), Slackware (curl), and SUSE (java-1_7_1-ibm and mariadb).

Bruce Sterling's 2018 SXSW keynote: Disrupting Dystopia, or what the tech arts scene could and should be [Boing Boing]

Since the first days of SXSW Interactive, Bruce Sterling has closed the festivities with a haranguing, funny, provocative keynote and nearly every year (2017, 2016, 2014, 2013, 2012 etc) we link to it. (more…)

Clearchannel took over America's local radio, Bain Capital took over Clearchannel, Clearchannel went bankrupt [Boing Boing]

As I've written, the demise of newsmedia can't be blamed on tech -- rather, it was the combination of technology and deregulated, neoliberal capitalism, which saw media companies merged and acquired, vertically and horizontally integrated, their quality lowered, their staff outsourced, and their assets stripped. That left them vulnerable to technological shocks: their in-house experts had been turned into contractors who drifted away, their physical plant had been sold and leased back, and their war-chests had been drained by vulture capitalists who loaded them up with debt that acted like a millstone around their necks as they strove to maneuver their way out of their economic conundrum. (more…)


‘Dutch Pirate Bay Blocking Case Should Get a Do-Over’ [TorrentFreak]

The Pirate Bay is arguably the most widely blocked website on the Internet. ISPs from all over the world have been ordered by courts to prevent users from accessing the torrent site.

In most countries courts have decided relatively quickly, but not in the Netherlands, where there’s still no final decision after eight years.

A Dutch court first issued an order to block The Pirate Bay in 2012, but this decision was overturned two years later. Anti-piracy group BREIN then took the matter to the Supreme Court, which subsequently referred the case to the EU Court of Justice, seeking further clarification.

After a careful review of the case, the EU Court of Justice decided last year that The Pirate Bay can indeed be blocked.

The top EU court ruled that although The Pirate Bay’s operators don’t share anything themselves, they knowingly provide users with a platform to share copyright-infringing links. This can be seen as “an act of communication” under the EU Copyright Directive.

This put the case back with the Dutch Supreme Court, which now has to decide on the matter.

Today, Advocate General Van Peursem advised the court to throw out the previous court order, and do the case over in a new court.

In his recommendation, Van Peursem cites similar blocking orders from other European countries. He stresses that the rights of copyright holders should be carefully weighed against those of the ISPs and the public in general.

In blocking cases, this usually comes down to copyright protection versus Internet providers’ freedom to carry on business and the right to freedom of information. The Advocate General specifically highlights a recent Premier League case in the UK, where the court ruled that copyright prevails over the other rights.

The ultimate decision, however, depends on the context of the case, Van Peursem notes.

“At most, one can say that if a copyright is infringed, it normally won’t be possible to justify the infringement by invoking the freedom to conduct business or the freedom of information. After all, these freedoms find their limit in what is legally permissible.

“This does not mean that a blockade aimed at protecting the right to property always ‘wins’ over the freedoms of entrepreneurship and information,” he adds.

Previously, the Supreme Court already ruled that it was incorrect of the lower court to rule that the Pirate Bay blockade was ineffective. Together, this means that it will be tough for the ISPs to win this case.

If the Supreme Court throws out the previous court order the case will start over from scratch, but with this new context and the EU court orders as further clarification.

The Pirate Bay, meanwhile, remains blocked in The Netherlands as the result of an interim injunction BREIN obtained last year.

The Advocate General’s advice is not binding, so it’s not yet certain whether there will be a do-over. However, in most cases, the recommendations are followed by the Supreme Court.

The Supreme Court is expected to release its final verdict later this year.

Update: The article was updated to clarify that the existing blocking injunctions remain in place.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Stop cherry-picking, start merging, Part 5: Exploiting the three-way merge [The Old New Thing]

Last time, we answered some questions based on what we know about the recursive merge algorithm. Today, we'll answer questions based on what we know about the three-way merge algorithm.

After choosing a merge base (possibly by manufacturing one via the recursive merge algorithm), the three-way merge algorithm takes the three versions identified by the merge base, the source head commit, and the destination head commit. It then identifies the changes in the two head commits relative to the merge base and tries to reconcile them.

The important detail here is what doesn't participate in the merge: Everything else.

In particular, any commits leading up to the head commits have no effect. And you can take advantage of this when answering the next few questions.

What if I already made the fix in my feature branch by committing directly to it, rather than creating a patch branch? Can I create a patch branch retroactively?

Yes, you can create a patch branch retroactively. Suppose you are in this situation:

    apple
    M1   master
apple ↙︎
A
  ↖︎
    F1 ← F1a   feature
    apple   berry

Starting from a common commit A, you fork off a feature branch and commit a change F1. Meanwhile, the master branch commits a change M1. You then discover a terrible problem in the feature branch and apply an emergency fix F1a to the feature branch. Further investigation reveals that this terrible problem also exists in the master branch. How do you get the fix into the master branch without running the risk of a cherry-pick disaster?

Go ahead and create your patch branch like before, and merge it into both the master and feature branches.

    apple       berry
    M1 ← ← ← M2   master
apple ↙︎     berry ↙︎
A ← ← ← P       patch
  ↖︎       ↖︎
    F1 ← F1a ← F2   feature
    apple   berry   berry

We created a new branch called patch based on the common ancestor commit A, and cherry-picked our fix F1a to the patch branch as commit P. We then merged commit P into the master branch, and also into the feature branch, producing commits M2 and F2, respectively. The merge into the master branch as M2 propagates the fix to the master branch, and the merge into the feature branch as F2 has no code effect because the fix is already in the feature branch. However, the merge into the feature branch is a crucial step, because it establishes commit P as the new common ancestor.
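
In raw git terms, the sequence is roughly this (treating A and F1a as stand-ins for the real commit hashes; they are labels from the diagram, not resolvable revisions):

git checkout -b patch A         # start the patch branch at the common ancestor
git cherry-pick F1a             # P: re-create the emergency fix on the patch branch
git checkout master
git merge patch                 # M2: the fix reaches master
git checkout feature
git merge patch                 # F2: no code change, but P becomes a shared ancestor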

Observe that as far as the three commits involved in the merge are concerned, everything looks the same as if you had made the fix in the patch branch originally. The fix is in the patch branch and in the heads of the master and feature branches. The feature branch can continue making changes, possibly to the same file, and that will be correctly detected as a change in the feature branch.

From a merge-theoretical point of view, you can use your thumb and cover up commit F1a, because that commit doesn't participate in the three-way merge:

    apple       berry
    M1 ← ← ← M2   master
apple ↙︎     berry ↙︎
A ← ← ← P       patch
  ↖︎       ↖︎
    F1 ← ← ← F2   feature
    apple       berry

And then you see that this diagram is the same as the diagram we had when the change originated in the patch branch.

How can I verify that a merge carried no code change?

If you have committed the merge locally, then you can run local git commands to get your answer. If you just want a yes/no answer as to whether the most recent commit carried no code change, you can see whether the trees are the same.

git diff-tree HEAD

If there is no output, then the trees are the same, which means that there was no code change.

If you don't trust git diff-tree, you can compare the trees manually:

git rev-parse HEAD^{tree} HEAD~^{tree}

(If you are using cmd.exe, then you'll have to double the ^ characters because ^ is the command prompt's escape character.)
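
That is, under cmd.exe the same check would look like this:

git rev-parse HEAD^^{tree} HEAD~^^{tree}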

If you want to see the differences, you can use git diff HEAD~ HEAD to view them.

If you use an online service to manage pull requests, then you'll have to consult your online service's documentation to see if there's a way to preview the merge commit and diff it against the parent. (We'll pick up this topic in a future installment.)

What if I already made the fix in my feature branch by committing directly to it, and then I cherry-picked the change into the master branch? Can I create a patch branch retroactively?

Yes, you can still create a patch branch retroactively. This is just an extension of the case where you want to retroactively pull the commit back from the feature branch, except this time you're retroactively pulling the commit back from both branches:

    apple   berry   berry
    M1 ← M1a ← M2   master
apple ↙︎     berry ↙︎
A ← ← ← P       patch
  ↖︎       ↖︎
    F1 ← F1a ← F2   feature
    apple   berry   berry

The analysis is the same: The only commits that participate in the three-way merge are the common merge base P and the heads of the master and feature branches.

What if I already made the fix in my feature branch by committing directly to it, and then I cherry-picked the change into the master branch, and I already made further changes in both branches, including a conflicting change in my feature branch? Can I create a patch branch retroactively?

Yes, you can still create the patch branch retroactively, but you have to be a bit careful because you want the merge into the feature branch to contain no code changes; the merge is for bookkeeping purposes only.

    apple   berry   berry   berry
    M1 ← M1a ← M2 ← M3   master
apple ↙︎         berry ↙︎
A ← ← ← ← ← P       patch
  ↖︎           ↖︎
    F1 ← F1a ← F2 ← F3   feature
    apple   berry   cherry   cherry

From the initial common commit A, the feature branch makes an unrelated commit F1, then makes the fix F1a, and then makes a second commit F2 that alters the fix from berry to cherry. Meanwhile, the main branch makes an unrelated commit M1, then cherry-picks the fix M1a, and then makes another unrelated commit M2.

How do you connect the fix in the feature branch with its cherry-picked doppelgänger?

As before, create a patch branch from the common commit A and cherry-pick F1a into it. This is the fix that you want to be considered as existing in both the master and feature branches. Merge this branch into the master and feature branches, as usual. The merge into the master branch will go cleanly because the master branch hasn't made any changes that conflict with the fix. However, the merge into the feature branch will encounter a merge conflict because the feature branch continued and made a subsequent conflicting change F2.

When you get that merge conflict, specify that you want to keep the changes in the feature branch and ignore the changes in the patch branch. In other words, you want this to be a no-code-change merge. You can use the -s ours option to git merge to indicate that you want no code changes from the merge; you are doing this only for bookkeeping purposes.
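
As a rough sketch (again using the branch names from the diagram):

git checkout feature
git merge -s ours patch         # F3: record the merge for bookkeeping, keep the feature branch's content untouched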

I use an online service to manage pull requests. How can I force the online service to use the -s ours merge algorithm?

This is really a question for your online service. But let's suppose that your online service doesn't let you customize the merge algorithm. How can you force it anyway?

You can do it by pre-merging the result in your pull request. Note that this means that you will need two patch branches, one for each of the merge destinations.

    apple   berry   berry   berry
    M1 ← M1a ← M2 ← M3       master
apple ↙︎         berry ↙︎
A ← ← ← ← P           patch-master
  ↖︎           ↖︎ apple
  ↖︎             ~P       patch-feature
  ↖︎               ↖︎
    F1 ← F1a ← F2 ← ← ← F3   feature
    apple   berry   cherry       cherry

As is customary, we start with a common ancestor commit A. The feature branch makes an unrelated commit F1, and then applies an important bug fix as commit F1a. The master branch makes an unrelated change M1, and then cherry-picks the fix as commit M1a. Both branches make additional changes: In the master branch, an unrelated commit M2, and in the feature branch, a conflicting commit F2.

Now you want to retroactively connect the commit F1a with its cherry-pick commit M1a so that when the master and feature branches merge, you don't get a conflict (or worse, a silent revert).

We start as before and create a patch branch from the common ancestor commit A, and create a commit P that describes the commit that got cherry-picked. This branch merges cleanly into the master branch with the cherry-picked version M1a. However, this branch doesn't merge cleanly into the feature branch, because the feature branch made a conflicting commit F2, and your online service rejects the pull request due to the conflict.

To fix this, you need to make sure that the branch submitted to your online service has all the conflicts pre-resolved. Create a new patch-feature branch from the patch branch you used for the master branch, and in that patch-feature branch, revert commit P, producing commit ~P, so that the patch-feature branch shows no net code change relative to the common ancestor commit A.¹

Now that the patch-feature branch has no net change, it should merge cleanly into the feature branch. There was no code change in the payload, but the reason for the merge wasn't to pick up a code change; it was to connect the master and feature branches via the shared commit P, which becomes the new common ancestor for the future merge of the master and feature branches.
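
A rough command-line sketch of that setup, with P and the branch names taken from the diagram rather than real revisions:

git checkout -b patch-feature patch-master   # branch off the patch branch used for master
git revert --no-edit P                       # ~P: back out the fix, leaving no net change relative to A
# then submit a pull request from patch-feature into feature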


Okay, we saw the sorts of problems that cherry-picks can create, from merge conflicts (sometimes in unrelated branches) to silent reverts. In practice, people cherry-pick only because they don't have a better choice available. They would rather perform a partial merge but git doesn't support partial merges, so people feel that they have to cherry-pick. But I showed that partial merges are possible after all! You just have to think about the graph the right way: Instead of merging directly between branches, you create a helper branch that contains the partial content and merge the helper branch into the desired destinations.

As we saw when we explored the recursive merge algorithm, if you expect that your change will need to be cherry-picked to many other branches, you can stage a helper branch that is based on a commit far back enough in time that everybody who would be interested in cherry-picking the change will also have the commit your branch is based on. (In practice, this means going back to the commit that introduced the change that you are trying to patch.) If everybody merges from that helper branch rather than cherry-picking, then when all the branches merge together, the helper branch will contribute to the merge base, and that avoids the conflicts and other bad things.

My team applied the techniques in this series, and following the guidance herein reduced the number of conflicts in a single merge from over 1500 files to only 20. This changed an unmanageable merge to one that could be handled by contacting the person responsible for each conflict and asking them to resolve it.

(Note: This series isn't even half-over, even though I wrote a Conclusion. So don't worry: There's plenty of agony still to come.)


¹ Another way to do this is to create a new branch named patch-feature from commit F2, and then perform a git merge -s ours patch-master to create a no-code-change merge from the patch-master branch. This results in a line from P2 to F2, which is harmless:

    apple   berry   berry   berry
    M1 ← M1a ← M2 ← M3       master
apple ↙︎         berry ↙︎
A ← ← ← ← P           patch-master
  ↖︎           ↖︎ cherry
  ↖︎             P2       patch-feature
  ↖︎           ↙︎   ↖︎
    F1 ← F1a ← F2 ← ← ← F3   feature
    apple   berry   cherry       cherry

If you want to get rid of the superfluous line, you could use the --squash option, but I would leave it because it makes it clearer what happened. (Otherwise, it will look like the patch branch made a huge commit.)

Personally, I would use git commit-tree to construct commit P2. I'll talk about the magical powers of git commit-tree at some unspecified future point.
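
Without spoiling that future discussion, a plausible sketch of the idea (with F2 and patch-master standing in for the real revisions) looks like this:

P2=$(git commit-tree "F2^{tree}" -p F2 -p patch-master -m "Merge patch-master with no code change")
git branch patch-feature "$P2"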

However you created the patch-feature branch, you can then create a pull request from the patch-feature branch to the feature branch.

Who Is Afraid of More Spams and Scams? [Krebs on Security]

Security researchers who rely on data included in Web site domain name records to combat spammers and scammers will likely lose access to that information for at least six months starting at the end of May 2018, under a new proposal that seeks to bring the system in line with new European privacy laws. The result, some experts warn, will likely mean more spams and scams landing in your inbox.

On May 25, the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires companies to get affirmative consent for any personal information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — has proposed redacting key bits of personal data from WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free.
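
For reference, such a lookup is just a query against the registry or registrar's WHOIS service; from a shell it looks something like this (example.com is a placeholder domain), and it returns the registrant, admin, and tech contact fields unless a privacy service masks them:

$ whois example.com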

But in a bid to help registrars comply with the GDPR, ICANN is moving forward on a plan to remove critical data elements from all public WHOIS records. Under the new system, registrars would collect all the same data points about their customers, yet limit how much of that information is made available via public WHOIS lookups.

The data to be redacted includes the name of the person who registered the domain, as well as their phone number, physical address and email address. The new rules would apply to all domain name registrars globally.

ICANN has proposed creating an “accreditation system” that would vet access to personal data in WHOIS records for several groups, including journalists, security researchers, and law enforcement officials, as well as intellectual property rights holders who routinely use WHOIS records to combat piracy and trademark abuse.

But at an ICANN meeting in San Juan, Puerto Rico on Thursday, ICANN representatives conceded that a proposal for how such a vetting system might work probably would not be ready until December 2018. Assuming ICANN meets that deadline, it could be many months after that before the hundreds of domain registrars around the world take steps to adopt the new measures.

Gregory Mounier, head of outreach at EUROPOL‘s European Cybercrime Center and member of ICANN’s Public Safety Working Group, said the new WHOIS plan could leave security researchers in the lurch — at least in the short run.

“If you don’t have an accreditation system by 25 May then there’s no means for cybersecurity folks to get access to this information,” Mounier told KrebsOnSecurity. “Let’s say you’re monitoring a botnet and have 10.000 domains connected to that and you want to find information about them in the WHOIS records, you won’t be able to do that anymore. It probably won’t be implemented before December 2018 or January 2019, and that may mean security gaps for many months.”

Rod Rasmussen, chair of ICANN’s Security and Stability Advisory Committee, said ICANN does not have a history of getting things done before or on set deadlines, meaning it may be well more than six months before researchers and others can get vetted to access personal information in WHOIS data.

Asked for his take on the chances that ICANN and the registrar community might still be designing the vetting system this time next year, Rasmussen said “100 percent.”

“A lot of people who are using this data won’t be able to get access to it, and it’s not going to be pretty,” Rasmussen said. “Once things start going dark it will have a cascading effect. Email deliverability is going to be one issue, and the amount of spam that shows up in peoples’ inboxes will be climbing rapidly because a lot of anti-spam technologies rely on WHOIS for their algorithms.”

As I noted in last month’s story on this topic, WHOIS is probably the single most useful tool we have right now for tracking down cybercrooks and/or for disrupting their operations. On any given day I probably perform 20-30 different WHOIS queries; on days I’ve set aside for deep-dive research, I may run hundreds of WHOIS searches.

WHOIS records are a key way that researchers reach out to Web site owners when their sites are hacked to host phishing pages or to foist malware on visitors. These records also are indispensable for tracking down cybercrime victims, sources and the cybercrooks themselves. I remain extremely concerned about the potential impact of WHOIS records going dark across the board.

There is one last possible “out” that could help registrars temporarily sidestep the new privacy regulations: ICANN board members told attendees at Thursday’s gathering in Puerto Rico that they had asked European regulators for a “forbearance” — basically, permission to be temporarily exempted from the new privacy regulations during the time it takes to draw up and implement a WHOIS accreditation system.

But so far there has been no reply, and several attendees at ICANN’s meeting Thursday observed that European regulators rarely grant such requests.

Some registrars are already moving forward with their own plans on WHOIS privacy. GoDaddy, one of the world’s largest domain registrars, recently began redacting most registrant data from WHOIS records for domains that are queried via third-party tools. And experts say it seems likely that other registrars will follow GoDaddy’s lead before the May 25 GDPR implementation date, if they haven’t already.

A Durham Bull Stares Into Your Soul [Whatever]

I’ve been in Durham, NC the last couple of days, visiting friends and seeing sights, including this here sculpture of a bull, which frankly seems to be judging me. How dare you, sir! It’s been fun but now I’m on my way home again to see Krissy and Athena and the cats. Life is good.



Russell Coker: Racism in the Office [Planet Debian]

Today I was at an office party and the conversation turned to race, specifically the incidence of unarmed Afro-American men and boys who are shot by police. Apparently the idea that white people (even in other countries) might treat non-white people badly offends some people, so we had a man try to explain that Afro-Americans commit more crime and therefore are more likely to get shot. This part of the discussion isn’t even noteworthy, it’s the sort of thing that happens all the time.

I and another man pointed out that crime is correlated with poverty and racism causes non-white people to be disproportionately poor. We also pointed out that US police seem capable of arresting proven violent white criminals without shooting them (he cited arrests of Mafia members; I cited mass murderers like the one who shot up the cinema). This part of the discussion isn’t particularly noteworthy either. Usually when someone tries explaining some racist ideas and gets firm disagreement they back down. But not this time.

The next step was the issue of whether black people are inherently violent. He cited all of Africa as evidence. There’s a meme that you shouldn’t accuse someone of being racist, it’s apparently very offensive. I find racism very offensive and speak the truth about it. So all the following discussion was peppered with him complaining about how offended he was and me not caring (stop saying racist things if you don’t want me to call you racist).

Next was an appeal to “statistics” and “facts”. He said that he was only citing statistics and facts, clearly not understanding that saying “Africans are violent” is not a statistic. I told him to get his phone and Google for some statistics as he hadn’t cited any. I thought that might make him just go away, it was clear that we were long past the possibility of agreeing on these issues. I don’t go to parties seeking out such arguments, in fact I’d rather avoid such people altogether if possible.

So he found an article about recent immigrants from Somalia in Melbourne (not about the US or Africa, the previous topics of discussion). We are having ongoing discussions in Australia about violent crime, mainly due to conservatives who want to break international agreements regarding the treatment of refugees. For the record I support stronger jail sentences for violent crime, but this is an idea that is not well accepted by conservatives presumably because the vast majority of violent criminals are white (due to the vast majority of the Australian population being white).

His next claim was that Africans are genetically violent due to DNA changes from violence in the past. He specifically said that if someone was a witness to violence it would change their DNA to make them and their children more violent. He also specifically said that this was due to thousands of years of violence in Africa (he mentioned two thousand and three thousand years on different occasions). I pointed out that European history has plenty of violence that is well documented and also that DNA just doesn’t work the way he thinks it does.

Of course he tried to shout me down about the issue of DNA, telling me that he studied Psychology at a university in London and knows how DNA works, demanding to know my qualifications, and asserting that any scientist would support him. I don’t have a medical degree, but I have spent quite a lot of time attending lectures on medical research including from researchers who deliberately change DNA to study how this changes the biological processes of the organism in question.

I offered him the opportunity to star in a Youtube video about this, I’d record everything he wants to say about DNA. But he regarded that offer as an attempt to “shame” him because of his “controversial” views. It was a strange and sudden change from “any scientist will support me” to “it’s controversial”. Unfortunately he didn’t give up on his attempts to convince me that he wasn’t racist and that black people are lesser.

The next odd thing was when he asked me “what do you call them” (black people), “do you call them Afro-Americans when they are here”. I explained that if an American of African ancestry visits Australia then you would call them Afro-American, otherwise not. It’s strange that someone goes from being so certain of so many things to not knowing the basics. In retrospect I should have asked whether he was aware that there are black people who aren’t African.

Then I sought opinions from other people at the party regarding DNA modifications. While I didn’t expect to immediately convince him of the error of his ways it should at least demonstrate that I’m not the one who’s in a minority regarding this issue. As expected there was no support for the ideas of DNA modifying. During that discussion I mentioned radiation as a cause of DNA changes. He then came up with the idea that radiation from someone’s mouth when they shout at you could change your DNA. This was the subject of some jokes, one man said something like “my parents shouted at me a lot but didn’t make me a mutant”.

The other people had some sensible things to say, pointing out that psychological trauma changes the way people raise children and can have multi-generational effects. But the idea of events 3000 years ago having such effects was ridiculed.

By this time people were starting to leave. A heated discussion of racism tends to kill the party atmosphere. There might be some people who think I should have just avoided the discussion to keep the party going (really I didn’t want it and tried to end it). But I’m not going to allow a racist to think that I agree with them, and if having a party requires any form of agreement to racism then it’s not a party I care about.

As I was getting ready to leave the man said that he thought he didn’t explain things well because he was tipsy. I disagree, I think he explained some things very well. When someone goes to such extraordinary lengths to criticise all black people after a discussion of white cops killing unarmed black people I think it shows their character. But I did offer some friendly advice, “don’t drink with people you work with or for or any other people you want to impress”, I suggested that maybe quitting alcohol altogether is the right thing to do if this is what it causes. But he still thought it was wrong of me to call him racist, and I still don’t care. Alcohol doesn’t make anyone suddenly think that black people are inherently dangerous (even when unarmed) and therefore deserving of being shot by police (disregarding the fact that police can take members of the Mafia alive). But it does make people less inhibited about sharing such views even when it’s clear that they don’t have an accepting audience.

Some Final Notes

I was not looking for an argument or trying to entrap him in any way. I refrained from asking him about other races who have experienced violence in the past; maybe he would have made similar claims about other non-white races and maybe he wouldn’t. I didn’t try to broaden the scope of the dispute.

I am not going to do anything that might be taken as agreement or support of racism unless faced with the threat of violence. He did not threaten me so I wasn’t going to back down from the debate.

I gave him multiple opportunities to leave the debate. When I insisted that he find statistics to support his cause I hoped and expected that he would depart. Instead he came back with a page about the latest racist dog-whistle in Australian politics which had no correlation with anything we had previously discussed.

I think the fact that this debate happened says something about Australian and British culture. This man apparently hadn’t had people push back on such ideas before.


Unprostrated [George Monbiot]

I have prostate cancer, but I’m happy. Here’s how.

By George Monbiot, published in the Guardian 14th March 2018


It came, as these things often do, like a gunshot on a quiet street: shocking and disorienting. In early December, my urine turned brown. The following day I felt feverish and found it hard to pee. I soon realised I had a urinary tract infection. It was unpleasant, but seemed to be no big deal. Now I know that it might have saved my life.

The doctor told me this infection was unusual in a man of my age, and hinted at an underlying condition. So I had a blood test, which revealed that my prostate specific antigen (PSA) levels were off the scale. An MRI scan and a mortifying biopsy confirmed my suspicions. Prostate cancer: all the smart young men have it this season.

On Monday, I go into surgery. The prostate gland is buried deep in the body, so removing it is a major operation: there are six entry points and it takes four hours. The procedure will hack at the roots of my manhood. Because of the damage that will be caused to the surrounding nerves, there’s a high risk of permanent erectile dysfunction. Because the urethra needs to be cut and reattached to the bladder, I will almost certainly suffer urinary incontinence for a few months, and possibly permanently. Because the removal of part of the urethra retracts the penis, it appears to shrink, at least until it can be stretched back into shape.

I was offered a choice: radical surgery or brachytherapy. This means implanting radioactive seeds in the parts of the prostate affected by cancer. Brachytherapy has fewer side effects, and recovery is much faster. But there’s a catch. If it fails to eliminate the cancer, there’s nothing more that can be done. This treatment sticks the prostate gland to the bowel and bladder, making surgery extremely difficult. Once you’ve had one dose of radiation, they won’t give you another. I was told that the chances of brachytherapy working in my case were between 70 and 80%. The odds were worse, in other words, than playing Russian roulette (which, with one bullet in a six-chambered revolver, gives you 83%). Though I have a tendency to embrace risk, this was not an attractive option.

It would be easy to curse my luck and start to ask “why me?”. I have never smoked and hardly drink; I have a ridiculously healthy diet and follow a severe fitness regime. I’m 20 or 30 years younger than most of the men I see in the waiting rooms. In other words, I would have had a lower risk of prostate cancer only if I had been female. And yet … I am happy. In fact, I’m happier than I was before my diagnosis. How can this be?

The reason is that I’ve sought to apply the three principles which, I believe, sit at the heart of a good life. The first is the most important: imagine how much worse it could be, rather than how much better.

When you are diagnosed with prostate cancer, your condition is ranked on the Gleason Score, which measures its level of aggression. Mine is graded at 7 out of 10. But this doesn’t tell me where I stand in general. I needed another index to assess the severity of my condition, so I invented one: the Shitstorm Scale. How does my situation compare to those of people I know, who contend with other medical problems or family tragedies? How does it compare to what might have been, had the cancer not been caught while it is still – apparently – confined to the prostate gland? How does it compare to innumerable other disasters that could have befallen me?

When I completed the exercise, I realised that this bad luck, far from being a cause of woe, is a reminder of how lucky I am. I have the love of my family and friends. I have the support of those with whom I work. I have the NHS. My Shitstorm Score is a mere 2 out of 10.

The tragedy of our times is that, rather than apply the most useful of English proverbs – “cheer up, it could be worse” – we are constantly induced to imagine how much better things could be. The rich lists and power lists with which the newspapers are filled, our wall-to-wall celebrity culture, the invidious billions spent on marketing and advertising, create an infrastructure of comparison that ensures we see ourselves as deprived of what others possess. It is a formula for misery.

The second principle is this: change what you can change, accept what you can’t. This is not a formula for passivity. I’ve spent my working life trying to alter outcomes that might have seemed immovable to other people. The theme of my latest book is that political failure is, at heart, a failure of imagination. But sometimes we simply have to accept an obstacle as insuperable. Fatalism in these circumstances is protective. I accept that my lap is in the lap of the gods.

So I will not rage against the morbidity this surgery might cause. I won’t find myself following Groucho Marx who, at the age of 81, magnificently lamented, “I’m going to Iowa to collect an award. Then I’m appearing at Carnegie Hall, it’s sold out. Then I’m sailing to France to pick up an honour from the French government. I’d give it all up for one erection.” And today there’s viagra.

The third principle is this: do not let fear rule your life. Fear hems us in, stops us from thinking clearly and prevents us from either challenging oppression or engaging calmly with the impersonal fates. When I was told that this operation has an 80% chance of success, my first thought was “that’s roughly the same as one of my kayaking trips. And about twice as good as the chance of emerging from those investigations in West Papua and the Amazon”.

There are, I believe, three steps to overcoming fear: name it, normalise it, socialise it. For too long, cancer has been locked in the drawer labelled Things We Don’t Talk About. When we call it the Big C, it becomes, as the term suggests, not smaller, but larger in our minds. He Who Must Not Be Named is diminished by being identified, and diminished further when he becomes a topic of daily conversation.

The super-volunteer Jeanne Chattoe, whom I interviewed recently for another column, reminded me that, just 25 years ago, breast cancer was a taboo subject. Thanks to the amazing advocacy of its victims, this is almost impossible to imagine today. Now we need to do the same for other cancers. Let there be no more terrible secrets.

So I have sought to discuss my prostate cancer as I would discuss any other issue. I make no apologies for subjecting you to the grisly details: the more familiar they become, the less horrifying. In doing so, I socialise my condition. Last month, I discussed the remarkable evidence suggesting that a caring community enhances recovery and reduces mortality. In talking about my cancer with family and friends, I feel the love that I know will get me through this. The old strategy of suffering in silence could not have been more misguided.

I had intended to use this column to urge men to get themselves tested. But since my diagnosis, we’ve discovered two things. The first is that prostate cancer has overtaken breast cancer to become the third biggest cancer killer in the UK. The second is that the standard assessment (the PSA blood test) is of limited use. As prostate cancer in its early stages is likely to produce no symptoms, it’s hard to see what men can do to protect themselves. That urinary tract infection was a remarkably lucky break.

Instead, I urge you to support the efforts led by Prostate Cancer UK to develop a better test. Breast cancer has attracted twice as much money and research as prostate cancer, not because (as the Daily Mail suggests) men are the victims of injustice, but because women’s advocacy has been so effective. Campaigns such as Men United and the Movember Foundation have sought to bridge this gap, but there’s a long way to go. Prostate cancer is discriminatory: for reasons unknown, black men are twice as likely to suffer it as white men. Finding better tests and treatments is a matter of both urgency and equity.

I will ride this out. I will own this disease but I won’t be defined by it: I will not be prostrated by my prostate. I will be gone for a few weeks but when I return, I do solemnly swear I will still be the argumentative old git with whom you are familiar.


Interesting Article on Marcus Hutchins [Schneier on Security]

This is a good article on the complicated story of hacker Marcus Hutchins.


Four short links: 16 March 2018 [All - O'Reilly Media]

Longevity, Partner Violence, Leaking Secrets, and Fallacy of Objective Measurement

  1. Longevity FAQ (Laura Deming) -- I run Longevity Fund. I spend a lot of time thinking about what could increase healthy human lifespan. This is my overview of the field for beginners.
  2. Intimate Partner Violence -- What we’ve discovered in our research is that digital abuse of intimate partners is both more mundane and more complicated than we might think. [...] [I]ntimate partner violence upends the way we typically think about how to protect digital privacy and security. You should read this because we all need to get a lot more aware of the ways in which the tools we make might be used to hurt others.
  3. The Secret Sharer -- Machine learning models based on neural networks and deep learning are being rapidly adopted for many purposes. What those models learn, and what they may share, is a significant concern when the training data may contain secrets and the models are public—e.g., when a model helps users compose text messages using models trained on all users’ messages. [...] [W]e show that unintended memorization occurs early, is not due to overfitting, and is a persistent issue across different types of models, hyperparameters, and training strategies.
  4. Gaydar and the Fallacy of Objective Measurement -- By taking gaydar into the lab, these research teams have taken the creative adaptation of an oppressed community of atomized members and turned gaydar into an essentialist story of “gender atypicality,” a topic that is related to, but distinctly different from, sexual orientation.

Continue reading Four short links: 16 March 2018.

Error'd: Drunken Parsing [The Daily WTF]

"Hi, $(lookup(BOOZE_SHOP_OF_LEAST_MISTRUST))$ Have you been drinking while parsing your variables?" Tom G. writes.   "Alright, so, I can access this website at more than an...


Local Governments in Mexico Might ‘Pirate’ Dragon Ball Super [TorrentFreak]

When one thinks of large-scale piracy, sites like The Pirate Bay and perhaps 123Movies spring to mind.

Offering millions of viewers the chance to watch the latest movies and TV shows for free the day they’re released or earlier, they’re very much hated by the entertainment industries.

Tomorrow, however, there’s the very real possibility of a huge copyright infringement controversy hitting large parts of Mexico, all centered around the hugely popular anime series Dragon Ball Super.

This Saturday episode 130, titled “The Greatest Showdown of All Time! The Ultimate Survival Battle!!”, will hit the streets. It’s the penultimate episode of the series and will see the climax of Goku and Jiren’s battle – apparently.

The key point is that fans everywhere are going nuts in anticipation, so much so that various local governments in Mexico have agreed to hold public screenings for free, including in football stadiums and public squares.

“Fans of the series are crazy to see the new episode of Dragon Ball Super and have already organized events around the country as if it were a boxing match,” local media reports.

For example, Remberto Estrada, the municipal president of Benito Juárez, Quintana Roo, confirmed that the episode will be aired at the Cultural Center of the Arts in Cancun. The mayor of Ciudad Juarez says that a viewing will go ahead at the Plaza de la Mexicanidad with giant screens and cosplay contests on the sidelines.

Many local government Twitter accounts sent out official invitations, like the one shown below.

But despite all the preparations, there is a big problem. According to reports, no group or organization has the rights to show Dragon Ball Super in public in Mexico, a fact confirmed by Toei Animation, the company behind the show.

“To the viewers and fans of Dragon Ball. We have become aware of the plans to exhibit episode # 130 of our Dragon Ball Super series in stadiums, plazas, and public places throughout Latin America,” the company said in an official announcement.

“Toei Animation has not authorized these public shows and does not support or sponsor any of these events nor do we or any of our titles endorse any institution exhibiting the unauthorized episode.

“In an effort to support copyright laws, to protect the work of thousands of persons and many labor sectors, we request that you please enjoy our titles at the official platforms and broadcasters and not support illegal screenings that incite piracy.”

Armando Cabada, mayor of Ciudad Juarez, Chihuahua, was one of the first municipal officials to offer support to the episode 130 movement. He believes that since the events are non-profit, they can go ahead but others have indicated their screenings will only go ahead if they can get the necessary permission.

Crunchyroll, the US video-streaming company that holds some Dragon Ball Super rights, is reportedly trying to communicate with the establishments and organizations planning to host the events to ensure that everything remains legal and above board. At this stage, however, there’s no indication that any agreements have been reached or whether they’re simply getting in touch to deliver a warning.

One region that has already confirmed its event won’t go ahead is Mexico City. The head of the local government there told disappointed fans that since they can’t get permission from Toei, the whole thing has been canceled.

What will happen in the other locations Saturday night if licenses haven’t been obtained is anyone’s guess but thousands of disappointed fans in multiple locations raises the potential for the kind of battle the Mexican authorities can well do without, even if Dragon Ball Super thrives on them.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Daniel Pocock: OSCAL'18, call for speakers, radio hams, hackers & sponsors reminder [Planet Debian]

The OSCAL organizers have given a reminder about their call for papers, booths and sponsors (ask questions here). The deadline is imminent but you may not be too late.

OSCAL is the Open Source Conference of Albania. OSCAL attracts visitors from far beyond Albania (OpenStreetmap): as the biggest Free Software conference in the Balkans, it draws people from many neighboring countries, including Kosovo, Montenegro, Macedonia, Greece and Italy. OSCAL has a unique character unlike any other event I've visited in Europe, and many international guests keep returning every year.

A bigger ham radio presence in 2018?

My ham radio / SDR demo worked there in 2017 and was very popular. This year I submitted a fresh proposal for a ham radio / SDR booth and sought out local radio hams in the region with an aim of producing an even more elaborate demo for OSCAL'18.

If you are a ham and would like to participate please get in touch using this forum topic or email me personally.

Why go?

There are many reasons to go to OSCAL:

  • We can all learn from their success with diversity. One of the finalists for Red Hat's Women in Open Source Award, Jona Azizaj, is a key part of their team: if she is announced the winner at Red Hat Summit the week before OSCAL, wouldn't you want to be in Tirana when she arrives back home for the party?
  • Warm weather to help people from northern Europe to thaw out.
  • For many young people in the region, their only opportunity to learn from people in the free software community is when we visit them. Many people from the region can't travel to major events like FOSDEM due to the ongoing outbreak of immigration bureaucracy and the travel costs. Many Balkan countries are not EU members and incomes are comparatively low.
  • Due to the low living costs in the region and the proximity to larger European countries, many companies are finding compelling opportunities to work with local developers there and OSCAL is a great place to make contacts informally.

Sponsors sought

Like many free software communities, Open Labs is a registered non-profit organization.

Anybody interested in helping can contact the team and ask them for whatever details you need. The Open Labs Manifesto expresses a strong commitment to transparency which hopefully makes it easy for other organizations to contribute and understand their impact.

Due to the low costs in Albania, even a small sponsorship or donation makes a big impact there.

If you can't make a direct payment to Open Labs, you could also potentially help them with benefits in kind or by contributing money to one of the larger organizations supporting OSCAL.

Getting there without direct service from Ryanair or Easyjet

These notes about budget airline routes might help you plan your journey. It is particularly easy to get there from major airports in Italy. If you will also have a vacation at another location in the region it may be easier and cheaper to fly to that location and then use a bus to Tirana.

Making it a vacation

For people who like to combine conferences with their vacations, the Balkans (WikiTravel) offer many opportunities, including beaches, mountains, cities and even a pyramid (in Tirana itself).

It is very easy to reach neighboring countries like Montenegro and Kosovo by coach in just 3-4 hours. For example, there is the historic city of Prizren in Kosovo and many beach resorts in Montenegro.

If you go to Kosovo, don't miss the Prishtina hackerspace.

Tirana Pyramid: a future hackerspace?

Daniel Powell: Mentorship within software development teams [Planet Debian]

In response to this email, I wrote a short blog post with some insight on the subject of mentorship.


In my journey to find an internship opportunity through Google Summer of Code, I wanted to give input about the relationship between a mentor and an intern/apprentice. My time as a service manager in the automotive repair industry gave me insight into the design of these relationships.

My recommendation for mentoring programs within a software development team is to have a dual group and private messaging environment, with teams of 3 mentors guiding 2 or 3 interns depending on their comfort and experience in a group setting. My rationale for this is as follows:

Not every personality engages well with every other. While it's important to learn to work with people you disagree with, I have found that when given the opportunity to float between mentors for different issues, apprentices will learn more from those they get along with the best. If the end goal is for the pupil to learn the most during this experience, and hence also to increase their productivity on a project, then having the dual ability to use a group setting or to PM a specific mentor is ideal. This also gives a mentor the opportunity to recommend asking another mentor a question because that mentor's specialty in the topic area is better, which in turn can help assuage a personality conflict simply through the shared introduction. (Just think about when someone you like or respect recommends you work with someone you thought you didn't get along with - it's a more comfortable situation when you are introduced in this circumstance, when it's done in a transparent and positive light.)

Our most successful ratio of mentors to apprentices was 3:2 for technicians who were short on shop experience, but in the scope of this project a 3:3 ratio could be appropriate. I would, however, avoid assigning a mentor as a lead for a student in this format. It makes the barrier for reaching out to the other two mentors too high (especially for those who are relatively new to a team dynamic). You may also change the ratio based on the experience of the students you accept and their familiarity with working in a team. For example, if you have two students who have never worked in a team environment, it may be prudent to move to a 3:2 ratio so as not to overwhelm the mentors. It's nice to have that flexibility, so it may be good to avoid such a rigid structuring of teams.

Michael Stapelberg: dput usability changes [Planet Debian]

dput-ng ≥ 1.16 contains two usability changes which make uploading easier:

  1. When no arguments are specified, dput-ng auto-selects the most recent .changes file (with confirmation).
  2. Instead of erroring out when detecting an unsigned .changes file, debsign(1) is invoked to sign the .changes file before proceeding.

With these changes, after building a package, you just need to type dput (in the correct directory of course) to sign and upload it.
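A minimal sketch of how that workflow might look from the shell; the package name and directories are made up, and it assumes dput-ng ≥ 1.16 with a default upload profile:

# hypothetical session: build an unsigned package, then let dput-ng do the rest
cd ~/src/hello-2.10              # unpacked source directory (name is illustrative)
dpkg-buildpackage -us -uc        # build without signing; the .changes file lands in ../
cd ..
dput                             # dput-ng picks the newest .changes, runs debsign(1), then uploads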


03/14/18 PHD comic: 'Misunderreading' [PHD Comics]

Piled Higher & Deeper by Jorge Cham
Click on the title below to read the comic
title: "Misunderreading" - originally published 3/14/2018

For the latest news in PHD Comics, CLICK HERE!

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, February 2018 [Planet Debian]

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, about 196 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change but a new platinum sponsor is about to join our project.

The security tracker currently lists 60 packages with a known CVE and the dla-needed.txt file 33. The number of open issues increased significantly and we seem to be behind in terms of CVE triaging.

Thanks to our sponsors

New sponsors are in bold.


Louis-Philippe Véronneau: Roundcube fr_FEM locale 1.3.5 [Planet Debian]

Roundcube 1.3.5 was released today and with it, I've released version 1.3.5 of my fr_FEM (French gender-neutral) locale.

This latest version is actually the first one that can be used with a production version of Roundcube: the first versions I released were based on the latest commit in the master branch at the time instead of an actual release. Not sure why I did that.

I've also changed the versioning scheme to follow Roundcube's. Version 1.3.5 of my localisation is thus compatible with Roundcube 1.3.5. Again, I should have done that from the start.

The fine folks at Riseup actually started using fr_FEM as the default French locale on their instance and I'm happy to say the UI integration seems to be working pretty well.

Sandro Knauß (hefee), who is working on the Debian Roundcube package, also told me he'd like to replace the default Roundcube French locale by fr_FEM in Debian. Nice to see people think a gender-neutral locale is a good idea!

Finally, since this was the first time I had to compare two different releases of Roundcube to see if the 20 files I care about had changed, I decided to write a simple script that leverages git to do this automatically. Running ./ -p git_repo -i 1.3.4 -f 1.3.5 -l fr_FR -o roundcube_diff.txt outputs a nice file that tells you if new localisation files have been added and displays what changed in the old ones.
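For what it's worth, the core of such a comparison can be done with plain git. Here is a rough sketch, under the assumption that the reference English locale lives under program/localization/en_US in the Roundcube repository; the repository path and tag names below are just placeholders:

#!/bin/sh
# sketch: show which reference-locale files changed between two Roundcube releases
REPO=~/src/roundcubemail     # local clone of the Roundcube git repository (placeholder path)
OLD=1.3.4                    # previous release tag
NEW=1.3.5                    # new release tag
# --stat lists added, removed and modified files, limited to the en_US locale directory
git -C "$REPO" diff --stat "$OLD" "$NEW" -- program/localization/en_US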

You can find the locale here.

Everyone has an accent [Seth Godin's Blog on marketing, tribes and respect]

The fact that we think the way we speak is normal is the first clue that empathy is quite difficult.

You might also notice how easy it is to notice people who are much worse at driving than you are--but that you almost never recognize someone who's driving better than you are.



Fish In The Sea, p7 [Ctrl+Alt+Del Comic]

Ctrl+Alt+Del is being sponsored this week by our friends over at SUMO! SUMO is celebrating  St Patrick’s Day with 10% off their collection of comfy bean bag chairs. I’ve sat in (and fallen asleep in) a lot of their bean bag chairs over the years, and I highly recommend them to anyone looking for high-end lounge furniture!

For books that ship from the US warehouse, click here.

For delivery to Australia or New Zealand, use the buttons below.

To Australia Book + Shipping = $90.

To New Zealand, Book + Shipping = $110.


Comic: The Hierarchy [Penny Arcade]

New Comic: The Hierarchy


Why Is SQLite Coded In C [Lambda the Ultimate - Programming Languages Weblog]

We are nearing the day someone quips that C is an improvement on most of its successors (smirk). So reading this page from the SQLite website is instructive, as is reading the page on the tooling and coding practices that make this approach work.

I think none of this is news, and these approaches have been on the books for quite a bit. But still, as I said: an improvement on most of its successors. Hat tip: HN discussion.


Norbert Preining: TeX Live 2018 (pretest) hits Debian/experimental [Planet Debian]

TeX Live 2017 has been frozen and we have entered the preparation phase for the release of TeX Live 2018. Time to bring the Debian packages up to date with the current status as well.

The other day I have uploaded the following set of packages to Debian/experimental:

  • texlive-bin 2018.20180313.46939-1
  • texlive-base, texlive-lang, texlive-extra 2018.20180313-1
  • biber 2.11-1

This brings Debian/experimental on par with the current status of TeX Live’s tlpretest. After a bit of testing, and once the sources have stabilized a bit more, I will upload all the stuff to unstable for broader testing.

This year hasn’t seen any big changes; see the above-linked post for details. Testing and feedback would be greatly appreciated.


Girl Genius for Friday, March 16, 2018 [Girl Genius]

The Girl Genius comic for Friday, March 16, 2018 has been posted.


Lipstick on a Prig [Diesel Sweeties webcomic by rstevens]

sleep is dumb

Tonight's comic is about who gets to be a geek.


Loafing Around [QC RSS]



Thursday, 15 March



Malcolm: Usability improvements in GCC 8 []

Over on the Red Hat Developer Program blog, David Malcolm describes a number of usability improvements that he has made for the upcoming GCC 8 release. Malcolm has made a number of the C/C++ compiler error messages much more helpful, including adding hints for integrated development environments (IDEs) and other tools to suggest fixes for syntax and other kinds of errors. "[...] the code is fine, but, as is common with fragments of code seen on random websites, it’s missing #include directives. If you simply copy this into a new file and try to compile it as-is, it fails. This can be frustrating when copying and pasting examples – off the top of your head, which header files are needed by the above? – so for gcc 8 I’ve added hints telling you which header files are missing (for the most common cases)." He has various examples showing what the new error messages and hints look like in the blog post.
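As a rough illustration of the kind of case he describes (the exact wording of the diagnostic is not quoted here, and gcc-8 is assumed to be the name of the GCC 8 binary on your system), a fragment that calls printf without the corresponding header is enough to trigger a hint:

# sketch: compile a pasted fragment that is missing its #include directive
cat > snippet.c <<'EOF'
int main(void)
{
    printf("hello\n");   /* printf is declared in <stdio.h>, which this fragment never includes */
    return 0;
}
EOF
gcc-8 -Wall -c snippet.c   # GCC warns about the implicit declaration and suggests the missing header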


Link [Scripting News]

The other day a friend asked for a reference that showed the role I played in the bootstrap of podcasting. I looked around, and most of the stories written by journalists are wildly inaccurate. Then I realized there's a better way to show the work. I archived the earliest podcasts so they'd be easily found. This is imho the best anchor point. There's no question these are podcasts. They were a collaboration between myself, Chris Lydon and Bob Doyle. There was important work that predated it, with myself and Adam Curry, and podcasting came together as a growing medium in the summer of 2004 after the DNC with the iPodder group, all independent of what we were doing at Harvard. But these podcasts, the BloggerCon sessions, and the Berkman Thursday meetups, all were central to the bootstrap. Others have managed to insert themselves into the story; they don't belong there. It's important, if we want to create more open media types, to tell the story accurately.


Deezer Piles Pressure on Pirates, Deezloader Reborn Throws in the Towel [TorrentFreak]

Spotify might grab most of the headlines in the world of music streaming but French firm Deezer is also growing in popularity.

Focused more on non-English speaking regions, the music service still has a massive selection of tens of millions of tracks. More importantly for pirates, it also has a loophole or two that allows users to permanently download songs from the service, a huge ‘selling’ point for the compulsive archiver.

One of the most popular third-party tools for achieving this was Deezloader but last year Deezer put pressure on its operators to cease-and-desist.

“On April 27, 2017 we received takedowns and threatened legal action from Deezer if we don’t shut down by April 29. So we decided to shut down Deezloader permanently,” the team announced.

Rather than kill the scene, the attack on Deezloader only seemed to spur things on. Many other apps underwent development in the months that followed but last December it became evident that Deezer (and probably the record labels supplying its content) were growing increasingly tired of these kinds of applications.

The company sent a wave of DMCA notices to developer platform GitHub, targeting several tools, claiming that they are “in total violation of our rights and of the rights of our music licensors.”

GitHub responded quickly by removing access to repositories referencing Deezloader, DeezerDownload, Deeze, Deezerio, Deezit, Deedown, and their associated forks. Deezer also reportedly modified its API, in order to stop or hinder apps already in existence.

However, pirates are a determined bunch and behind the scenes many sought to breathe new life into their projects, to maintain the flow of free music from Deezer. One of those that gained traction was the obviously-titled ‘Deezloader Reborn’ which enjoyed a new lease of life on both Github and Reddit after taking over from DeezLoader V2.3.1.

But in January 2018, Deezer turned up the pressure again, hitting Github with a wave (1,2) of takedown notices targeting various projects. On January 23, Deezer hit Deezloader Reborn itself with the notice detailed below.

The following project, identified in the paragraph below, makes available a hacked version of our Deezer application by describing methods to bypass Deezer’s security measures to unlawfully download its music catalogue, in total violation of our rights and of the rights of our music licensors (phonographic producers, performing artists, songwriters and composers):

I therefore ask that you immediately take down the project corresponding to the URL above and all of the related forks by others members who have had access or even contributed to such projects.

Not only did Github comply with Deezer’s request, Reddit did too. According to a thread still listed on the site, Reddit removed a post about Deezloader Reborn following a copyright complaint from Deezer.

Two days later Deezer targeted similar projects on Github but by this time, Deezloader Reborn already had new plans. Speaking with TF, project developer ExtendLord said that he wouldn’t be shutting down but would continue on code repository Gitlab instead. Now, however, those plans have also come to an abrupt end after Gitlab took the page down.

Deezloader Reborn – gone from Gitlab

A copy of the page available on shows Deezloader Reborn at version 3.0.5 with the ability to download music ready-tagged and in FLAC quality. Links to newer versions are being shared on Reddit but it appears there is no longer a central trusted source for the application.

There’s no official confirmation yet but it seems likely that Deezer was behind the Gitlab takedown. TorrentFreak has contacted ExtendLord who linked us to this page which states that “DeezLoader Reborn is no longer maintained due to DMCA. [Version] 3.1.0 is the last update, no more updates will be made.”

So, at least for now, it appears that Deezloader Reborn will go the way of various other Deezer-reliant applications. That won’t be the end of the story though, that’s a certainty.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.



Hugo nominations close tomorrow! [Cory Doctorow's]

If you attended either of the past two World Science Fiction Conventions or are registered for the next one in San Jose, California, you’re eligible to nominate for the Hugo Awards, which you can do here — you’ve only got until midnight tomorrow!

The 2017 Locus Recommended Reading List is a great place to start if you’re looking to refresh your memory about the sf/f you enjoyed last year.

May I humbly remind you that my novel Walkaway is eligible in the Best Novel category?

(via Scalzi)

[$] The strange story of the ARM Meltdown-fix backport []

Alex Shi's posting of a patch series backporting a set of Meltdown fixes for the arm64 architecture to the 4.9 kernel might seem like a normal exercise in making important security fixes available on older kernels. But this case raised a couple of interesting questions about why this backport should be accepted into the long-term-support kernels — and a couple of equally interesting answers, one of which was rather better received than the other.

Stable kernels 4.15.10 and 4.14.27 []

Greg Kroah-Hartman has announced the release of the 4.15.10 and 4.14.27 stable kernels. Each contains a large number of patches throughout the kernel tree; users should upgrade.

Security updates for Thursday []

Security updates have been issued by Arch Linux (samba), CentOS (389-ds-base, kernel, libreoffice, mailman, and qemu-kvm), Debian (curl, libvirt, and mbedtls), Fedora (advancecomp, ceph, firefox, libldb, postgresql, python-django, and samba), Mageia (clamav, memcached, php, python-django, and zsh), openSUSE (adminer, firefox, java-1_7_0-openjdk, java-1_8_0-openjdk, and postgresql94), Oracle (kernel and libreoffice), Red Hat (erlang, firefox, flash-plugin, and java-1.7.1-ibm), Scientific Linux (389-ds-base, kernel, libreoffice, and qemu-kvm), SUSE (xen), and Ubuntu (curl, firefox, linux, linux-raspi2, and linux-hwe).

Only one week left to register for Collaborations Workshop 2018! [Planet GridPP]


By Raniere Silva, Software Sustainability Institute.

The clock is ticking away, and if you haven’t registered for Collaborations Workshop 2018 (CW18) yet, this is your last chance!

Hugo nominations close tomorrow! [Boing Boing]

If you attended either of the past two World Science Fiction Conventions or are registered for the next one in San Jose, California, you're eligible to nominate for the Hugo Awards, which you can do here -- you've only got until midnight tomorrow! (more…)

Crowdfunding an official, licensed Scrabble mechanical keyboard [Boing Boing]

On the crowdfunding site Massdrop, board-game fan Cassidy Williams is taking preorders for a $160 Scrabble-themed mechanical keyboard with Cherry MX Brown switches (if you've got a mechanical keyboard kicking around that you'd like to convert, you can get the $47 keycap set instead). (more…)

Reminder: Get Your Hugo Nominations In! [Whatever]

Hello, this serves as your reminder that the 2018 Hugo nomination window comes to a close tomorrow, March 16 at 11:59pm Pacific Time, so if you are eligible to nominate for the Hugo Awards (ie., you were a member of last year’s, this year’s, or next year’s, Worldcon, as of 12/31/17), don’t forget to go nominate the things you liked in each category. Here’s the link to Worldcon 76’s Hugo page, which includes a link to the nomination page for this year’s Hugos and for the 1943 Retro Hugos.

Remember that a) you shouldn’t worry if you didn’t read “widely enough,” since nominating what you did read and did like is good enough, b) that you don’t need to feel obliged to fill up all five nomination slots in every category. Just nominate what you think is deserving, and if that number is less than five, so be it.

That said, if you do need a quick refresher on some of what’s been critically acclaimed this year, the Locus Recommended Reading List is a good start.

Self-interested note: The Collapsing Empire is eligible for best novel, and Don’t Live For Your Obituary is eligible for Best Related Work. If you liked them, please feel free to nominate them, with my thanks. But if there were works you prefer better in these categories, please nominate those works instead!


UX for security: Why privacy matters [All - O'Reilly Media]

User experience designers have an opportunity to empower people to take control of their privacy.

Continue reading UX for security: Why privacy matters.

Linkrot is a kind of API breakage [Scripting News]

There are APIs everywhere you look on the web.

Yesterday, Jay Rosen reported on Twitter that the NYT, in a redesign, had eliminated its paragraph-level permalinks (like the ones you see on Scripting News). He was peeved, for sure, and imho had every right to be. I asked if this broke existing links, i.e. had he been using the feature (I assume so) and did the links still point to paragraphs on the NYT site (yes, of course they pointed there, but did the links work when clicked). It's hard to phrase this question for non-developers, but the issue is real.

Users don't like steps backwards. This applies equally to word processors and websites. It's all software. Only in this case they're developers too because URLs are an API, and as we know APIs break. 💥

Does the NYT have an obligation to continue to support this feature? Of course not. But one of the unwritten rules of the web, going way back, is that linkrot sucks and we should do everything we can to minimize it. The NYT has been conspicuously excellent at not breaking links, btw, over the 20-plus year history of news on the web.

To get an idea of how bad linkrot is, here's an archive page for this blog for November 1999. Try clicking on some links. So many of them are broken.

PS: As far as I'm concerned credit for the concept and the term linkrot goes to Jakob Nielsen, but it's hard to find any references to his authoritative piece on the subject. How nice that his 20-year-old piece is still there and renders nicely in a 2018 browser. It would be ironic if it had been lost to linkrot. 🚀

Today in GPF History for Thursday, March 15, 2018 [General Protection Fault: The Comic Strip]

"Did I mention I'm allergic to dust, pollen, chocolate, and COBOL programs?"

Caped one-percenters: how superheroes make out like bandits under the Trump tax-plan [Boing Boing]

Tax lawyer Jed Bodger has published an analysis in the journal Tax Notes detailing the expected gains for superheroes under the Trump tax plan. (more…)


Umlauts, day 2 [Scripting News]

Re: yesterday's River5 issue.

Okay we know what the problem is, and the solution as well.

Of course it's an encoding issue.

Explained here.
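For anyone who hasn't chased one of these down before, the classic symptom is UTF-8 bytes being re-decoded with a single-byte charset. A quick sketch of what that does to an umlaut (the sample word is arbitrary, and a UTF-8 terminal is assumed):

# sketch: mis-decode UTF-8 text as Windows-1252 and watch the umlaut turn into mojibake
printf 'Überraschung\n' | iconv -f CP1252 -t UTF-8   # Ü (UTF-8 bytes C3 9C) comes out as "Ãœ"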

My eyes, day 2 [Scripting News]

My eyes don't work very well this week. As a result I can barely see when I'm out walking. To look at the price on a cash register I have to put my face in it. Add to that my hearing isn't so great, it's like a comedy when I'm out and about.

It's kind of fun to walk around Manhattan barely being able to see. No chance of eye contact. Which means I win every faceoff. That's one of the rules of being a pedestrian in NY: if you fail to avert your eyes, you lose.

With the hearing it's age. I've had it tested and functionally it's fine. The brain, as you age, has trouble separating sounds in noisy places. So Chipotle, for example, which plays fairly loud background music, renders my hearing pretty much useless.

However it activates my sense of humor. And wonderment that I can function like this. I couldn't drive a car, however. And I might have trouble navigating a new neighborhood, because I can't read street signs.

Until Monday I'm kind of in limbo.

It's bad timing because I just started working with a bunch of testers on my new product. Please stay tuned, hopefully my eyes will be back in commission next week. Until then it's going to be a lot of Netflix and book reading for Davey.

China's mass surveillance and pervasive social controls are based on a rocket scientist's advocacy for "systems thinking" [Boing Boing]

In 1955, MIT- and Caltech-educated Qian Xuesen was fired from his job teaching at JPL and deported from the USA under suspicion of being a communist sympathizer; on his return to China, he led the country's nuclear weapons program and became a folk hero who is still worshipped today.


SEC charges former Equifax CIO with insider trading [Boing Boing]

Jun Ying was serving as CIO of Equifax when he avoided more than $117,000 in losses by exercising and liquidating all of his stock options before the public was notified of the company's catastrophic breach -- but after he had figured out what was going on. (more…)


Stop cherry-picking, start merging, Part 4: Exploiting the recursive merge algorithm [The Old New Thing]

The last few days have looked at the dangers of cherry-picking, both in terms of latent merge conflicts and (even worse) missing merge conflicts, and last time, I proposed the alternative to cherry-picking: Merging from a common branch.

Before we can explore further, we need to understand the recursive merge algorithm.

Instead of trying to explain it, I will defer to this explanation of the recursive merge algorithm. Go read it, and then we can talk about its consequences.

Hi, thanks for coming back. Our simple example last time did not require the full power of the recursive merge because there is still a single best common ancestor. But knowing how the recursive merge works helps you answer some common follow-up questions.

How do I find the correct merge base?

The git merge-base master feature command will find a merge base. You can use that as the basis for your patch branch. You can also say git merge-base -a master feature to show all merge bases, and you can choose the one that best describes your intent.
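A minimal sketch of putting that to use, with the same master/feature branch names as the running example (everything else is illustrative):

# sketch: start the patch branch from the merge base of master and feature
base=$(git merge-base master feature)   # or pick one of the candidates from `git merge-base -a`
git checkout -b patch "$base"           # create the patch branch at that commit
# ...commit the fix on the patch branch, then merge it into master and into feature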

How do I know which best describes my intent?

That's really up to you to decide. For example, maybe one of the merge bases is a patch branch, and the other is a regularly-scheduled merge between the master and feature branch. If your patch is intended to build on top of the previous patch, then using the previous patch branch describes your intent better. But if your patch is independent of the previous patch, then using the regularly-scheduled merge describes your intent better.

What if I pick the wrong merge base, and instead pick a merge base that is not a common ancestor?

If you choose a merge base that isn't actually a common ancestor, then when you prepare the merges from the patch branch into the master and feature branches, one or the other merge will encompass more than just the single commit you are trying to patch.

Let's go back to the diagram we had at the start of yesterday's discussion:

apple    apple
  A ← ← ← M1   master
   ↖
     F1        feature
   apple

From a common ancestor A, commit F1 happens on the feature branch, and commit M1 happens on the master branch. Now you realize that you need to apply a fix to both branches. But instead of creating your patch branch from commit A (the common ancestor), you mistakenly create it from F1.

    apple       berry
    M1 ← ← ← M2   master
apple ↙︎     berry ↙︎
A       P       patch
  ↖︎   ↙︎   ↖︎
    F1 ← ← ← F2   feature
    apple       berry

From commit F1, you create a patch branch and apply a commit P to it, which contains the desired fix. This branch is then merged into the master branch (creating commit M2) and into the feature branch (creating commit F2).

This diagram is identical to the second diagram from last time, except that the patch commit P is based off commit F1 rather than commit A.

What happens?

What happens is that the merge into the master branch includes commit F1, which is not what you intended.

If you're using a pull request, then the list of encompassed commits in the pull request will be longer than just one commit, and the diff of the pull request will show unwanted changes. That is your signal that something funny is going on. If you're doing straight merges from the command line, you'll find that the history for the merge into the master branch pulled in more than just one change, and the diff of the merge shows that the merge into the master branch contained both the desired changes from commit P as well as some unwanted changes from commit F1.

You can protect against this by doing

git log master..patch
git log feature..patch

and verifying that only one commit (namely, your patch) comes out of each log query.
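The same check is easy to script. This sketch simply counts the commits each merge would bring in and warns when it is more than the single patch commit:

# sketch: confirm the patch branch contributes exactly one commit to each target branch
for target in master feature; do
    n=$(git rev-list --count "$target"..patch)
    [ "$n" -eq 1 ] || echo "warning: merging patch into $target would pull in $n commits"
done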

What if I pick the wrong merge base, and instead pick a merge base that is a common ancestor, but not the most recent one?

Suppose that instead of choosing the most recent common ancestor, you choose an older one.

        apple       berry
        M1 ← ← ← M2   master
    apple ↙︎     berry ↙︎
    A   ↙︎ ← P       patch
  ↙︎ ⤪︎       ↖︎
B ← ↙︎   F1 ← ← ← F2   feature
apple       apple       berry

From a common ancestor A, commit F1 happens on the feature branch, and commit M1 happens on the master branch. We create a patch branch not from commit A, but from an even older commit B that is an ancestor of commit A. We then merge that patch branch into the master and feature branches.

What happens is that the eventual merge of the master and feature branch will have multiple best common ancestors. One is the merge base that would have been selected if you had never created a patch branch (A). The other is the patch branch that you created (P). The recursive merge algorithm will merge these two branches together, the result of which is... surprise! the version of the patch branch you would have gotten if you had created it from the correct merge base in the first place.

In other words, it doesn't matter which common ancestor you pick, as long as you don't pick one so far back that the merge with the most recent common ancestor will encounter a merge conflict. But you're unlikely to do that because that would mean that the merges into the current heads of the master and feature branches would also encounter merge conflicts, and that would tell you that something is wrong.

The above result is an important one, because it means that you could choose as your common ancestor not the most recent common ancestor, but in fact the oldest common ancestor that the patch still applies to. In other words, go back to the commit that introduced the code you want to change. That commit is in both the master and feature branches by virtue of the fact that the problem exists in both branches. Apply your patch to that commit, and then merge the patch into the master and feature branches. From the graph, it looks like you had a side branch that immediately fixed the problem, but you merely took a long time before deciding to merge that fix back into the master and feature branches.

Having a patch branch ready with the fix is handy when we get to the next question.

Note that you might choose an older common ancestor on purpose, if it better describes your intent. For example, in the above diagram, commit A might be a commit from a patch branch, whereas commit B is a regularly-scheduled merge between the master and feature branches. As with the case of multiple merge bases, you can choose the commit that best expresses what you're trying to do with your patch. If your patch builds on top of the work in commit A, then creating your patch branch from commit A describes your intent best. On the other hand, if your patch is independent of the previous patch, then creating your patch branch from commit B makes it clearer that your patch is unrelated to the previous one.

What if I have multiple branches that I need to fix?

As we discovered in the previous question, it doesn't matter which common ancestor you use, as long as it's a common ancestor. So create a patch branch that is based on an old common ancestor, old enough to be in all the branches you want to apply the fix to. Say, the commit that introduced the line of code you want to modify with the patch. Tell anyone who wants to pick up the fix, "Merge the patch branch into your branch."

Instead of telling everybody to cherry-pick the fix, tell them to merge the patch branch. This is a branch specially crafted so that merging it picks up exactly one commit, namely the fix.

And if everybody merges the same patch branch, then they won't encounter conflicts when they merge with each other.
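Spelled out as commands, the instruction amounts to something like this sketch (the release-2.1 branch name is made up; substitute whatever branches need the fix):

# sketch: every branch that needs the fix merges the very same patch branch
for branch in master feature release-2.1; do
    git checkout "$branch"
    git merge patch        # brings in exactly the one patch commit, with no cherry-pick involved
done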

What if I need to share multiple changes between the branches?

Maybe you need multiple changes rather than a single change. For example, you created a patch branch with the fix, then discovered a problem with the fix, so you have another commit that fixes the fix.

No problem. In your patch branch, put all the changes you want to share. Once you have your entire payload sitting in the patch branch, you can merge it into the master and feature branches.

What if I later realize I need to merge another fix?

What if you're having a really bad day, and after you merge the patch branch into the master and feature branches, you discover another problem that forces you to disable a different part of the feature. Is it safe to create a second patch branch and follow the same exercise? Does the second patch branch have to be based on the first patch branch?

Again, through the magic of the recursive merge algorithm, it doesn't matter which way you do it. Whether your second patch branch is based on the first patch branch or whether it's an independent branch turns out to be irrelevant, because the recursive merge algorithm will merge all the patch branches together anyway. The decision to base it on the previous patch branch should be based on what is easier for others to understand.

Okay, those are the follow-up questions that can be answered by applying your understanding of the recursive merge algorithm. Next time, we'll look at follow-up questions that can be answered by applying your understanding of the three-way merge algorithm.

Los Angeles: come see me talk with Jen Wang about her amazing graphic novel the Prince and the Dressmaker TONIGHT! [Boing Boing]

Hey, LA! Molly "Strong Female Protagonist" Ostertag, Tillie Walden and I are going to be talking with Jen Wang about her amazing, genderqueer middle-grades fairy-tale/graphic novel The Prince and the Dressmaker, tonight at 7PM at Chevalier's Books. Be there or be square!


Tamilrockers Arrests: Police Parade Alleged Movie Pirates on TV [TorrentFreak]

Just two years ago around 277 million people used the Internet in India. Today there are estimates as high as 355 million and with a population of more than 1.3 billion, India has plenty of growth yet to come.

Also evident is that in addition to a thirst for hard work, many Internet-enabled Indians have developed a taste for Internet piracy. While the US and Europe were the most likely bases for pirate site operators between 2000 and 2015, India now appears in a growing number of cases, from torrent and streaming platforms to movie release groups.

One site that is clearly Indian-focused is the ever-popular Tamilrockers. The site has laughed in the face of the authorities for a number of years, skipping from domain to domain as efforts to block it descend into a chaotic game of whack-a-mole. Like The Pirate Bay, Tamilrockers has burned through plenty of domains along the way.

Now, however, the authorities are claiming a significant victory against the so-far elusive operators of the site. The anti-piracy cell of the Kerala police announced last evening that they’ve arrested five men said to be behind both Tamilrockers and alleged sister site, DVDRockers.

They’re named as alleged Tamilrockers owner ‘Prabhu’, plus ‘Karthi’ and ‘Suresh’ (all aged 24), plus alleged DVD Rockers owner ‘Johnson’ and ‘Jagan’ (elsewhere reported as ‘Maria John’). The men were said to be generating between US$1,500 and US$3,000 each per month. The average salary in India is around $600 per annum.

While details of how the suspects were caught tend to come later in US and European cases, the Indian authorities are more forthright. According to Anti-Piracy Cell Superintendent B.K. Prasanthan, who headed the team that apprehended the men, it was a trail of advertising revenue crumbs that led them to the suspects.

Prasanthan revealed that it was an email, sent by a Haryana-based ad company to an individual who was arrested in 2016 in a similar case, that helped in tracking the members of Tamilrockers.

“This ad company had sent a mail to [the individual], offering to publish ads on the website he was running. In that email, the company happened to mention that they have ties with Tamilrockers. We got the information about Tamilrockers through this ad company,” Prasanthan said.

That information included the bank account details of the suspects.

Given the technical nature of the sites, it’s perhaps no surprise that the suspects are qualified in the IT field. Prasanthan revealed that all had done well.

“All the gang members were technically qualified. It even included MSc and BSc holders in computer science. They used to record movies in pieces from various parts of the world and join [them together]. We are trying to trace more members of the gang including Karthi’s brothers,” Prasanthan said.

All five men were remanded in custody but not before they were paraded in front of the media, footage which later appeared on TV.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

How to train and deploy deep learning at scale [All - O'Reilly Media]

The O’Reilly Data Show Podcast: Ameet Talwalkar on large-scale machine learning.

In this episode of the Data Show, I spoke with Ameet Talwalkar, assistant professor of machine learning at CMU and co-founder of Determined AI. He was an early and key contributor to Spark MLlib and a member of AMPLab. Most recently, he helped conceive and organize the first edition of SysML, a new academic conference at the intersection of systems and machine learning (ML).

We discussed using and deploying deep learning at scale. This is an empirical era for machine learning, and, as I noted in an earlier article, as successful as deep learning has been, our level of understanding of why it works so well is still lacking. In practice, machine learning engineers need to explore and experiment using different architectures and hyperparameters before they settle on a model that works for their specific use case. Training a single model usually involves big (labeled) data and big models; as such, exploring the space of possible model architectures and parameters can take days, weeks, or even months. Talwalkar has spent the last few years grappling with this problem as an academic researcher and as an entrepreneur. In this episode, he describes some of his related work on hyperparameter tuning, systems, and more.

Continue reading How to train and deploy deep learning at scale.

1174 [LFG Comics]

The post 1174 appeared first on Looking For Group.

214 [LFG Comics]

The post 214 appeared first on Tiny Dick Adventures.

Get Caught Up on LICD [LFG Comics]

Afternoon, folks, I trust you are all doing well, on this, the 13th Day of the Lord or whatever. While I try to keep my cross posting between LICD and LFG at a minimum, every now and then I’m forced […]

The post Get Caught Up on LICD appeared first on Looking For Group.

1173 [LFG Comics]

The post 1173 appeared first on Looking For Group.

1172 [LFG Comics]

The post 1172 appeared first on Looking For Group.

212 [LFG Comics]

The post 212 appeared first on Tiny Dick Adventures.

1171 [LFG Comics]

The post 1171 appeared first on Looking For Group.

Pounded in the Butt by My Own Podcast: Chuck Tingle comes to your earbuds [Boing Boing]

The good folks from Night Vale have launched Pounded in the Butt By My Own Podcast, a new audio treat in which guest-readers read the extremely NSFW and utterly delightful erotic fiction of Chuck Tingle (previously).

Wells Fargo gives its CEO a $4.6m raise on flat earnings and more scandals [Boing Boing]

Wells Fargo CEO Tim Sloan has only been on the job since October, but he's earned a 35% raise worth $4.6m, despite flat earnings and a series of scandals since Sloan took over from the cartoonishly villainous John Stumpf. (more…)


Clint Adams: Don't feed them after midnight [Planet Debian]

“Hello,” said Adrian, but Adrian was lying.

“My name is Adrian,” said Adrian, but Adrian was lying.

“Hold on while I fellate this 魔鬼,” announced Adrian.

Spaniard doing his thing



Artificial Intelligence and the Attack/Defense Balance [Schneier on Security]

Artificial intelligence technologies have the potential to upend the longstanding advantage that attack has over defense on the Internet. This has to do with the relative strengths and weaknesses of people and computers, how those all interplay in Internet security, and where AI technologies might change things.

You can divide Internet security tasks into two sets: what humans do well and what computers do well. Traditionally, computers excel at speed, scale, and scope. They can launch attacks in milliseconds and infect millions of computers. They can scan computer code to look for particular kinds of vulnerabilities, and data packets to identify particular kinds of attacks.

Humans, conversely, excel at thinking and reasoning. They can look at the data and distinguish a real attack from a false alarm, understand the attack as it's happening, and respond to it. They can find new sorts of vulnerabilities in systems. Humans are creative and adaptive, and can understand context.

Computers -- so far, at least -- are bad at what humans do well. They're not creative or adaptive. They don't understand context. They can behave irrationally because of those things.

Humans are slow, and get bored at repetitive tasks. They're terrible at big data analysis. They use cognitive shortcuts, and can only keep a few data points in their head at a time. They can also behave irrationally because of those things.

AI will allow computers to take over Internet security tasks from humans, and then do them faster and at scale. Here are possible AI capabilities:

  • Discovering new vulnerabilities -- and, more importantly, new types of vulnerabilities -- in systems, both by the offense to exploit and by the defense to patch, and then automatically exploiting or patching them.
  • Reacting and adapting to an adversary's actions, again both on the offense and defense sides. This includes reasoning about those actions and what they mean in the context of the attack and the environment.
  • Abstracting lessons from individual incidents, generalizing them across systems and networks, and applying those lessons to increase attack and defense effectiveness elsewhere.
  • Identifying strategic and tactical trends from large datasets and using those trends to adapt attack and defense tactics.

That's an incomplete list. I don't think anyone can predict what AI technologies will be capable of. But it's not unreasonable to look at what humans do today and imagine a future where AIs are doing the same things, only at computer speeds, scale, and scope.

Both attack and defense will benefit from AI technologies, but I believe that AI has the capability to tip the scales more toward defense. There will be better offensive and defensive AI techniques. But here's the thing: defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation.

Roy Amara famously said that we overestimate the short-term effects of new technologies, but underestimate their long-term effects. AI is notoriously hard to predict, so many of the details I speculate about are likely to be wrong -- and AI is likely to introduce new asymmetries that we can't foresee. But AI is the most promising technology I've seen for bringing defense up to par with offense. For Internet security, that will change everything.

This essay previously appeared in the March/April 2018 issue of IEEE Security & Privacy.


Microsoft: Poisoned Torrent Client Triggered Coin Miner Outbreak [TorrentFreak]

First released in 2010, MediaGet has been around for a while. Initially, the torrent client was available in Russian only, but the team later expanded its reach across the world.

While it’s a relatively small player, it has been installed on millions of computers in recent years. It still has a significant reach, which is what Microsoft also found out recently.

This week the Windows Defender Research team reported that a poisoned version of the BitTorrent client was used to start the Dofoil campaign, which attempted to offload hundreds of thousands of malicious cryptocurrency miners.

Although Windows Defender caught and blocked the culprit within milliseconds, the team further researched the issue to find out how this could have happened.

It turns out that the update process for the application was poisoned. This enabled a signed version of MediaGet to download and install a compromised version, as can be seen in the diagram below.

“A signed mediaget.exe downloads an update.exe program and runs it on the machine to install a new mediaget.exe. The new mediaget.exe program has the same functionality as the original but with additional backdoor capability,” Microsoft’s team explains.

The update poisoning

The malicious MediaGet version eventually triggered the mass coin miner outbreak. Windows Defender Research stresses that the poisoned version was signed by a third-party software company, not MediaGet itself.

Once the malware was launched the client built a list of command-and-control servers, using embedded NameCoin DNS servers and domains with the non-ICANN-sanctioned .bit TLD, making it harder to shut down.

More detailed information on the attack and how Dofoil was used to infect computers can be found in Microsoft’s full analysis.

MediaGet informs TorrentFreak that hackers compromised the update server to carry out their attack.

“Hackers got access to our update server, using an exploit in the Zabbix service and deeply integrated into our update mechanics. They modified the original version of Mediaget to add their functionality,” MediaGet reveals.

The company says that roughly five percent of all users were affected by the compromised update servers. All affected users were alerted and urged to update their software.

The issue is believed to be fully resolved at MediaGet’s end and they’re working with Microsoft to take care of any copies that may still be floating around in the wild.

“We patched everything and improved our verification system. To all the poisoned users we sent the message about an urgent update. Also, we are in contact with Microsoft, they will clean up all the poisoned versions,” MediaGet concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Four short links: 15 March 2018 [All - O'Reilly Media]

Traffic Attacks, VR/AR Audio, Travel Bot, and Location Codes

  1. Adversarial Traffic (Paper A Day) -- What if an adversary—perhaps just a single vehicle—tries to game the system? Maybe to try and speed their own passage through traffic light junctions, or perhaps to deliberately cause congestion and bring traffic to a halt. The authors look at data spoofing attacks on the I-SIG system, targeting design or implementation issues in the signal control algorithm (i.e., not relying on implementation bugs). [...] Using just a single attack vehicle, the total traffic delay at a junction can be increased by up to 68.1%, completely reversing the expected benefit of the I-SIG system. Attacks can also completely jam entire approaches—in the example below, vehicles queuing in left-turn lanes spill over and block through lanes, causing massive traffic jams. 22% of vehicles take over seven minutes to get through the junction, when it should take 30 seconds.
  2. Resonance Audio -- Google's VR/AR audio kit is now open source.
  3. Travel Price Drop Bot -- Flight and hotel prices change all the time. DoNotPay finds travel confirmations from past bookings in your inbox. When the price drops, our robot lawyer will find a legal loophole to negotiate a cheaper price or rebook you. From the Do Not Pay whizzes.
  4. Plus codes -- Google documents its system for representing locations (areas, actually) as short strings suitable for use as addresses in regions that lack formal addressing systems.

Continue reading Four short links: 15 March 2018.

Modernizing cybersecurity approaches [All - O'Reilly Media]

Utilizing machine intelligence to facilitate proactive cybersecurity operations.

Cybersecurity incidents are among the greatest concerns of businesses, government agencies, and private citizens today. In the modern world, protecting our data and information assets is nearly as important as maintaining the security of our physical assets. It should not be surprising, then, that data analytics play a key role in cybersecurity. Analytics and machine intelligence, a field concerned with producing machines able to autonomously perform tasks that would normally require human intelligence, can drive an organization from reactive to proactive when coupled with organizational change. This capability enables organizations of all types to move from simply measuring signals (data), to creating sentinels (machine learning algorithms), and then moving ahead to sense-making (actionable machine intelligence).

Extracting meaningful signals from mountains of data

The quantity and complexity of digital network information has skyrocketed in recent years due to the explosion of internet-connected devices, the rise of operational technologies (OT), and the growth of an interconnected global economy. With exponentially multiplying mountains of human- and machine-generated cybersecurity data, the ability to extract meaningful signals about potentially nefarious activities, and ultimately deter those activities, has become increasingly complex.

In other domains, such as marketing and e-commerce, businesses have been able to effectively apply data mining to create customer “journeys” in order to predict and recommend content or products to the end user. However, within cybersecurity operations, the ability to map the journey of an analyst or an adversary is inherently complex due to the dynamic nature of computer networks, the sophistication of adversaries, and the pervasiveness of technical and human factors that expose network vulnerabilities. Despite these challenges, there is hope for making meaningful progress. E-commerce marketing and cyber operations share one significant factor—the primary actor is a human being, whose interests, intents, motivations, and goals often manifest through their actions, behaviors, and other digital breadcrumbs.

Instrumenting advanced analytics for improved defenses

For modern cybersecurity operations to be effective, it’s necessary for organizations to monitor diverse data streams to identify strong activity signals. This includes monitoring network traffic data to find well-known patterns of common adversary activities, such as data exfiltration or beaconing. While these detection techniques are critical to cybersecurity operations, it is imperative to leverage such signals to predict future activities. Further capabilities could even be created to modify the behavior of the actor (or analyst) to the benefit of the organization and mission. This could include systems on networks that are trained to autonomously take action, such as blocking access to resources or redirecting traffic, based on a predicted behavior.

Modern attackers are too agile and creative for organizations to rely on passive descriptive analytics or reactive diagnostic techniques for protection. Rather, building an ability to forecast future outcomes through predictive analytics that utilize prior knowledge of events, particularly the precursor signals evident before an attack, is a proactive measure. Operations centers can use machine learning models to build predictive analytics to guide defensive actions to prevent the event from occurring, or to neutralize its consequences. Examples of predictive approaches include active risk scoring methodologies, or using emerging trends data from cybersecurity incident reporting services to predict new attacks. By modeling patterns in data, it is possible to generate early warning signals in live network data that provide valuable lead time to mitigate vulnerabilities and stop a compromise before it occurs.
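The early-warning idea above is easier to picture with a toy example: fit an unsupervised anomaly detector on historical network-flow features, then score new flows so unusual ones surface before an incident matures. The feature names and data below are hypothetical, and a real operations-center deployment would be far richer than a single scikit-learn model.

    # Toy predictive-analytics sketch: learn "normal" network-flow behaviour
    # from historical data, then flag unusual new flows as early warnings.
    # Feature names and data are hypothetical.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(7)
    history = pd.DataFrame({
        "bytes_out": rng.lognormal(8, 1, 5000),        # typical outbound volume
        "duration_s": rng.exponential(30, 5000),       # typical connection length
        "dest_port_entropy": rng.normal(2.0, 0.3, 5000),
    })

    detector = IsolationForest(n_estimators=200, contamination=0.01, random_state=7)
    detector.fit(history)

    new_flows = pd.DataFrame({
        "bytes_out": [2500.0, 9.5e6],                  # second row: possible exfiltration
        "duration_s": [12.0, 4200.0],
        "dest_port_entropy": [2.1, 0.2],
    })
    scores = detector.decision_function(new_flows)     # lower = more anomalous
    alerts = new_flows[detector.predict(new_flows) == -1]
    print(scores)
    print(alerts)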

The next step up the analytics ladder is prescriptive analytics, which is the ability to map out what could be done to change a previously predicted undesired outcome. Fusing multiple data sources provides insight into what motivations, treatments, or conditions can be set proactively to move the user (i.e., analyst, threat actor) onto a new, more optimal trajectory that minimizes risk of compromise. Since the human actor is the common feature in most use cases, it is unsurprising that the application of human behavioral sciences should be a common approach for prescriptive analytics. Data are being collected everywhere in nearly every human interaction, and insights gleaned from digitally monitored behaviors of threat actors can be used to gain the upper hand. This capability includes steering an actor away from sensitive assets and enabling offensive actions to deter current and future incidents.

Finally, the emerging field of cognitive analytics, which conceptually simulates human thought processes, considers all data streams (e.g., cyber network, social networks, global events, and any other contextual data) to see the big picture and gain understanding as to the right action to take at the right time in the right place. This ability may seem like a nebulous concept but is achievable with the data most organizations have on-hand already or to which they can easily get access. Generating the “360 view” of a situation changes the game by prompting new questions: what would a threat actor do in these circumstances? Where would a threat actor likely attack? Who is likely to be involved? How will they react to our defenses or countermeasures? Given a set of global events, what would a potential cyber actor be motivated to accomplish next? The real dramatic difference from early-stage descriptive analytics and mature-level cognitive analytics is this: in the former, the analytics are applied to answer prescribed known questions; in the latter, the analytics are applied to generate new unknown questions.

While each of these technical capabilities will be game changers for most organizations that invest in them, many groups will be unable to get to the starting line. An organization lacking a modern culture, job roles, leadership functions, and processes will cause most machine intelligence implementations to fall short of expectations. This may seem like a grim outlook, but it’s critical to develop an ecosystem that can nurture advanced analytical capabilities to ensure their success.

Organizational requirements for embracing machine intelligence

The structure and function of an organization are equally as important to the success of machine intelligence as the technology itself. A strong, flexible organization can turbocharge analytical capabilities by giving staff the freedom to innovate and the support required to operationalize analytics.

Foremost, organizations should build a culture of experimentation with democratized access to data. Data science shouldn’t be limited to research teams, but rather should be pervasive in an organization to enable more data-driven decision-making. To build such a culture, organizations should begin at the top by placing senior technical leadership in positions to steer the integration of machine intelligence capabilities into the enterprise, and institutionalize job roles focused on infusing it throughout the organization. Leadership should empower groups to gain insights and take action from analytics; the best analytics are those that build equity from the whole team, including cybersecurity experts, software developers, and data scientists.

Next, the organization needs to create opportunities for technologists, scientists, and engineers to have fun in the pursuit of creating high-end capabilities that make an impact to the cybersecurity posture. This includes sponsoring events like hackathons and competitions, where domain experts can interface directly with data scientists to quickly churn out new concepts. Pairing such events with recognition, rewards, financial incentives (and free pizza) will instill a sense of pride and caring about the impact to the institution. Talented personnel with the right skills to build “the right” machine intelligence capabilities are tough to find, so putting in the time to attract and retain those individuals is key to an organization’s success.

Lastly, organizations must apply concepts and processes that work well in other domains, and they must embrace an agile mindset. A fail-fast (i.e., learn-fast) environment will help to build iterative and incremental models that start small and mature over time. Rather than spending months or years building a monumental algorithm, project leaders should set short-term attainable objectives such as a minimum viable product (MVP) at each stage of development, then build proofs of value (POV). This approach will allow for incremental impacts to be made to cybersecurity quickly, rather than waiting a long time for some monolithic product that is already outdated when it finally gets integrated into the environment.

Many organizations might be hesitant to embrace machine intelligence as a core tenet of their business or mission operations. Organizations can start by automating routine tasks; this will free up time for subject matter experts and technical staff to focus on higher-order thinking and building advanced capabilities. From there, utilizing readily available data sets, they will be able to integrate well-known technologies as a foundation to build machine learning applications. A vast amount of research and experimental data is available for organizations to jumpstart a machine intelligence-driven approach to cybersecurity. Teams can compile use cases based on current cybersecurity challenges, and identify gaps in data collection and technology, creating a roadmap for the organization to direct capability development activities.

Ultimately, by integrating machine intelligence into cybersecurity business and mission operations, organizations will move past the frustrations that come with continually reacting to fait accompli infiltrations. They will advance to the satisfaction of achieving proactive insights and sense-making across a complex landscape, and thereby get ahead of cyber threat actors.

This post is part of a collaboration between O'Reilly and Booz Allen Hamilton. See our statement of editorial independence.

Continue reading Modernizing cybersecurity approaches.

Representative Line: Flushed Down the Pipe [The Daily WTF]

No matter how much I personally like functional programming, I know that it is not a one-size fits all solution for every problem. Vald M knows this too. Which is why they sent us an email that...


[1019] Squirreled Away [Twokinds]

Comic for March 15, 2018


We make our own taste, and call it reality [Seth Godin's Blog on marketing, tribes and respect]

Most of us say, "this is better, therefore I like it."

In fact, the converse is what actually happens. "I like it, therefore I'm assuring you (and me) that this is better."



One more week of the Humble Book Bundle: Mad Scientist by... [Humble Bundle Blog]

One more week of the Humble Book Bundle: Mad Scientist by Make:! 

Tap into your inner mad scientist with our Mad Scientist book bundle from Make:! It has almost $350 worth of ebooks – and we just added The Best of Make: Volume 2 and Make: Lego and Arduino Projects. 

This bundle is mad – MAD, I SAY!


One Player Isn't Ready [Diesel Sweeties webcomic by rstevens]

sleep is dumb

Tonight's comic is a pop culture reference


Link [Scripting News]

As Trump's Saturday Night Massacre comes into focus, students walk out in protest and Pennsylvania voters throw a Repub out of a safe Repub district. Looks like the showdown is coming soon.


Page 22 [Flipside]

Page 22 is done.

Louis-Philippe Véronneau: Playing with water [Planet Debian]

H2o Flow gradient boosting job

I'm currently taking a machine learning class and although it is an insane amount of work, I like it a lot. I initially had planned to use R to play around with the database I have, but the teacher recommended I use H2o, a FOSS machine learning framework.

I was a bit sceptical at first since I'm already pretty good with R, but then I found out you could simply import H2o as an R library. H2o replaces most R functions by its own parallelized ones to cut down on processing time (no more doParallel calls) and uses an "external" server you have to run on the side instead of running R calls directly.

H2o Flow gradient boosting model

I was pretty happy with this situation, that is until I actually started using H2o in R. With the huge database I'm playing with, the library felt clunky and I had a hard time doing anything useful. Most of the time, I just ended up with long Java traceback calls. Much love.

I'm sure in the right hands using H2o as a library could have been incredibly powerful, but sadly it seems I haven't earned my black belt in R-fu yet.

H2o Flow variable importance weights

I was pissed for at least a whole day - not being able to achieve what I wanted to do - until I realised H2o comes with a WebUI called Flow. I'm normally not very fond of using web thingies to do important work like writing code, but Flow is simply incredible.

Automated graphing functions, integrated ETA when running resource intensive models, descriptions for each and every model parameter (the parameters are even divided into sections based on your familiarity with the statistical models in question), Flow seemingly has it all. In no time I was able to run 3 basic machine learning models and get actual interpretable results.

So yeah, if you've been itching to analyse very large databases using state of the art machine learning models, I would recommend using H2o. Try Flow at first instead of the Python or R hooks to see what it's capable of doing.
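For the curious, the Python hook mentioned above looks roughly like the sketch below. It is a minimal, untested outline with a placeholder CSV path and column names, not the author's actual workflow.

    # Rough sketch of the h2o Python API (the "Python hook" mentioned above).
    # File path and column names are placeholders.
    import h2o
    from h2o.estimators.gbm import H2OGradientBoostingEstimator

    h2o.init()                                   # starts/attaches to the local H2O server

    frame = h2o.import_file("my_big_database.csv")
    train, valid = frame.split_frame(ratios=[0.8], seed=42)

    target = "label"
    predictors = [c for c in frame.columns if c != target]

    gbm = H2OGradientBoostingEstimator(ntrees=100, max_depth=5, seed=42)
    gbm.train(x=predictors, y=target, training_frame=train, validation_frame=valid)

    print(gbm.model_performance(valid))
    print(gbm.varimp(use_pandas=True))           # variable importance, as in Flow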

The only downside to all of this is that H2o is written in Java and depends on Java 1.7 to run... That, and be warned: it requires a metric fuckton of processing power and RAM. My poor server struggled quite a bit even with 10 available cores and 10Gb of RAM...

The Amiga Consciousness [OSNews]

There exists a global community, a loosely knit consciousness of individuals that crosses boundaries of language and artistic disciplines. It resides in both the online and physical space; its followers are dedicated, if not fervent. The object and to some extent, philosophy that unites these adherents, is a computer system called the Commodore Amiga. So why does a machine made by a company that went bankrupt in 1994 have a cult-like following? Throughout this essay I will present to you, the reader, a study of qualitative data that has been collected at community events, social gatherings and conversations. The resulting narrative is intended to illuminate the origins of the community, how it is structured and how members participate in it. Game industry professionals, such as the person interviewed during the research for this paper, will attest to the properties, characteristics and creative application of the machine, and how this creativity plays a role in the sphere of their community. I will examine the bonds of the society, to determine if the creative lineage of the computer plays a role in community interactions. The Amiga community is probably one of the most fascinating technology subcommunities out there. Lots of infighting, various competing Amiga operating systems, incredibly expensive but still outdated hardware, dubious ownership situations - it's all there. Yet, they keep going, they keep pushing out new software and new hardware, and they're in no danger of falling apart. Amazing.

A $1.6 billion Spotify lawsuit is based on player pianos [OSNews]

Spotify is finally gearing up to go public, and the company’s February 28th filing with the SEC offers a detailed look at its finances. More than a decade after Spotify’s launch in 2006, the world’s leading music streaming service is still struggling to turn a profit, reporting a net loss of nearly $1.5 billion last year. Meanwhile, the company has some weird lawsuits hanging over its head, the most eye-popping being the $1.6 billion lawsuit filed by Wixen Publishing, a music publishing company that includes the likes of Tom Petty, The Doors, and Rage Against the Machine. So, what happened here? Did Spotify really fail to pay artists to the tune of a billion dollars all the while losing money? Is digital streaming just a black hole that sucks up money and spits it out into the cold vacuum of space? The answer is complicated. The answer involves something called "player pianos". You can't make this stuff up.

Chris Slane's privacy-oriented editorial cartoons are painfully funny [Boing Boing]

Online privacy is pretty much a dumpster-fire, but it's a funny dumpster fire in the world of Kiwi editorial cartoonist Chris Slane, whose one-panel strips are hilarious in an oh-shit-we're-doomed kind of way. (more…)


[$] Weekly Edition for March 15, 2018 []

The Weekly Edition for March 15, 2018 is available.

Wednesday, 14 March


The Humble Book Bundle: DIY Electronics by Wiley: Wiley is back,... [Humble Bundle Blog]

The Humble Book Bundle: DIY Electronics by Wiley: 

Wiley is back, and this time it’s DIY. Teach yourself electronics, learn Python, explore Beaglebone, go on an adventure with Raspberry Pi, and more with this book bundle!


Solarpunk rising, or how to turn boring bureaucratic meetings into creative fodder [Charlie's Diary]

I'm not a Solarpunk, I just play one in real life, it seems. While Charlie's out doing important stuff, I decided I'd drop a brief, meandering essay in here for the regular crowd of commenters to say some variation on, "Why yes, that's (adjective) obvious," and to eventually turn the conversation around to the relative merits of either trains or 20th Century weapons systems, if we can get past comment 100.

As most of you know, I do a lot of environmentalism, so much so in fact that I'm not working on any creative writing right now (except this!), just going to meetings and reading environmental impact reports (if you don't know anything about California's perennial punching bag, CEQA, well, don't bother, it's tedious). This post was inspired by what I saw in the process of the San Diego County Supervisors approving the most developer-friendly version of the County Climate Action Plan (CAP) that they could. The details of about seven hours of meetings really don't matter, but the universality of what happened might, at least a little.

The situation (try to stifle those yawns in back, they're impolite) is that California, like the rest of the civilized world and unlike much of the US, has the goal of actually meeting the Paris Accords, and we've got the sunshine to do it. At least in the southern part of the state. Maybe. Moreover, because California's kinda bureaucratic, every municipality has to have a general plan telling people what they can build and where. By 2020-ish, said general plans are also supposed to include a section on how each municipality will meet the state's goals on decarbonizing. Yay! Or rather, meh, because these documents tend to err on the side of vague aspiration, for reasons that will become obvious below.

So San Diego County just approved CAP 2.0, CAP 1.0 having been litigated out of existence a couple of years ago. As is usual with such decisions, the public got to testify before the supervisors voted. Here's a sampling of the testimony at the approval meeting, highly digested:

• The building industry claimed meeting the CAP is too expensive. Figures were disputed, but since almost everyone on the county planning commission is a (retired) developer and the supervisors are pro-development, "too expensive" got repeated ad nauseam, especially by one elder who ranted about spending more than $10k on solar at a home in Hawai'i. Obviously said personage hasn't checked the prices recently, but that's what you get with appointed commissioners.

• The urban forestry crowd wanted more trees in the streets to sequester carbon. Street trees have about the same lifespan as a plastic lawn chair, and San Diego is known for, well, droughts and such, but more street trees would be good, no? Who can argue against that?

• The enviros were talking about carbon farming, where we get carbon sequestered into farm soils or whatever. It sounded good, mostly because we don't know anything about it.

• The agriculture community, which has been ripping out avocado and orange orchards since there's no more water for them, pointed out that they could plant two orders of magnitude more trees than the urban foresters could, if only someone could give them more water. Oddly enough, that's a huge problem, given water wars, increasing urban demand for water, and all that. They're experimenting with putting carbon into soils here too, just to see if it'll work like we enviros keep saying. Good on them.

• What set us enviros off was the notion of carbon offsets (read "indulgences for polluters") where somebody whose project farts off a lot of greenhouse gases can buy something (a tree farm, or a marsh restoration, or a livestock manure digester that burns the methane into CO2, which is a less potent greenhouse gas) and claim credit for offsetting their emissions. Wanna buy some Congolese swampland or Indonesian peatlands? I'm not joking: these would be great places to buy, if only you could be sure that your investment wasn't just taken by FlyByNight Sequestration Opportunities (accepts only cryptocurrencies), with documentation returned to you in a one-ply roll with perforated sheets to tear off. Anyway, most of the local enviros want the County to make sure that carbon offsets happen only in San Diego county for preference, or in California, where we can theoretically see if anything useful was done with the money spent on the indulgence, excuse me, the carbon offset. Did I mention how easy it is to game the documentation? Sad that I'm so cynical as to think that would happen.

• Then there was stupid ol' me, commenting that it's effing hard to do carbon offsets in San Diego, because of that little drought and climate change thing, with a side helping of tree-killing beetle plagues. Everybody looked at me like I'd sprouted two heads and sold my soul to Koch, until I explained that carbon offsets only really work if you take the carbon out of the air for a century. That means century-old street trees, avocado trees, orange trees, pine trees in the mountains, farm soil holding carbon, restored marsh peat, whatever. The carbon bank has to keep the greenhouse gases out of the air for at least a century. Burn the trees, oxidize the marsh soils, or over-till the farmlands, and all the carbon goes back into the air just as climate change really gets roaring. The other enviros are starting to get it, kinda, while I'm left wishing I'd brought this up years ago, so they'd be more up to speed. Am I the only enviro around here who's had plant physiology and even published a paper modeling plant growth? Why yes, unfortunately, I am. Most of the people involved don't understand how the numbers work. Argh!

Alright, here's where you wake up again. The underlying theme of the above mess? Ignorance. How do all of us think San Diego will meet its climate goals? By dumping the carbon in some place we don't know much about, since all the options we do know about don't seem to be good enough. Surprise! Here's how it would work in real life:

We could put solar panels on every roof, as environmentalists (including me) have been ignorantly harping for years. Except that the median SD income is $64,000, the median home price is $540,000, solar panels are around $15-20,000, and most people can't afford them yet. So, um, yeah, that means more big solar panels out in the backcountry and the desert, probably trashing a lot of wildlands (and Ol' Cheeto Grande and his cronies just stuck their middle manipulative digits in to make that harder). It's easy to argue on a moral basis that each city should power itself. But to do that in the real world, you've got to solve little things like housing crises, so that each building owner can actually afford to buy and maintain the necessary infrastructure.

We could offset our emissions in the County, except that almost certainly means piping in more water from somewhere else, like it was 1950 or something. And yes, desalination is pretty effin' expensive, thanks so much for thinking about that option. Wanna eat a $10 avocado and pay to keep the tree producing it alive for the next century?

We could offset our emissions elsewhere, except that not getting scammed over the next century is kinda hard, especially since the people buying these indulgences might not really care if they're getting scammed or not.

Notice how just one City's flailing around to become sustainable could affect areas around the globe? To me, that's the real Solarpunk. Multiply that by around 3,500,000,000 (something like the number of people living in cities now), and you can see new ways in which urban demands are going to punk the 97 percent of Earth's surface that isn't urbanized.

With Solarpunk, we should be talking about aesthetics and aspirations, like, say, wooden skyscrapers, cities ditching problematic sewers and turning to using sustainably-sourced humanure production to sequester carbon by sending this urban carbon to be buried in the untilled soil on multi-ethnic, diversely gendered, community supported, sustainable farms. With all the pathogens nicely controlled, thank you very much. And definitely there must be shiny solar panels on every surface that points either to the equator or to the west.

Unfortunately, the real-world trends embodied in events like the meetings I regurgitated above mean that the next few decades are far more likely to be old-school Noir Solarpunk, with the wealthy and powerful forcing the rural and less advantaged to deal with the problems activists like me bring up, just as they have for decades now.

Kinda sucks, but that doesn't mean that the Solarpunks should give up their ideals. It just means that, if they want their work to have drama, tension, and (dare I say it?) relevance, then they need to stop dreaming about moving to Wakanda and go to more boring meetings. It's amazing what you can learn while you're trying to sit through those things.

Oh yeah, comments. Have fun chewing on this. It's not that we're doomed (help the thread hit 100 comments and you can bewail Teh Doom to your hearts' content after comment #100). Rather, it's that we don't really understand most of what's going on, and ignorance in action leads to, well, gonzo literature. In a slightly more realistic Solarpunk work, for every pastel-tinted "plyscraper," there's going to be a multinational solar plant where an irrigated farm in a poor rural county used to be, with food prices rising as a result. For every shipping container farmlet installed, there's going to be ten carbon offset scams. For every environmentalist wondering if the end of internal combustion means the end of a lot of the weeds that air pollution fertilized, there's going to be a score of urban planners frustrated by how hard it is to rebuild car-driven cities to accommodate a 100% electric fleet, when batteries turn out to be not as good as gas. And so it goes. This mess is where solar goes punk. What do you think it will look like?


[$] Discussing PEP 572 []

As is often the case, the python-ideas mailing list hosted a discussion about a Python Enhancement Proposal (PEP) recently. In some sense, this particular PEP was created to try to gather together the pros and cons of a feature idea that regularly crops up: statement-local bindings for variable names. But the discussion of the PEP went in enough different directions that it led to calls for an entirely different type of medium in which to have those kinds of discussions.
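For readers who don't follow python-ideas: the recurring itch is wanting to name a subexpression inside a comprehension or condition without adding a separate statement. The sketch below shows the motivating pattern; the := spelling is the assignment-expression syntax that PEP 572 was ultimately accepted with in Python 3.8, not necessarily the exact form under discussion at the time.

    # The kind of duplication PEP 572 aims to remove (requires Python 3.8+ for :=).
    def f(x):
        return x * x - 10

    data = range(10)

    # Without a way to bind names in expressions, f(x) is computed twice...
    doubled_work = [f(x) for x in data if f(x) > 0]

    # ...or the comprehension is abandoned for an explicit loop.
    explicit = []
    for x in data:
        y = f(x)
        if y > 0:
            explicit.append(y)

    # With assignment expressions, the intermediate result gets a name in place:
    walrus = [y for x in data if (y := f(x)) > 0]

    assert doubled_work == explicit == walrus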


Disneyland announces a date for removal of sex-slave trafficking scene from the Pirates of the Caribbean ride [Boing Boing]

The Pirates of the Caribbean was the last ride Walt Disney personally supervised; it has undergone many replications and revisions over the years, but last year Disneyland Paris removed the "Buy a Bride" scene, in which we are treated to a lighthearted human trafficking auction in which captured women are auctioned to pirates as "brides." (more…)

Facebook once boasted of its ability to sway elections, now it has buried those pages [Boing Boing]

Facebook maintains a repository of success stories trumpeting the advertisers who have attained greatness by buying Facebook ads; most of these are businesses, but until recently, Facebook also trumpeted Florida Governor Rick Scott's use of Facebook ads to "boost Hispanic voter turnout in their candidate’s successful bid for a second term, resulting in a 22% increase in Hispanic support and the majority of the Cuban vote." (more…)

European Parliament ambushed by doctored version of pending internet censorship rules that sneaks filtering into all online services [Boing Boing]

For months, the European Parliament has been negotiating over a new copyright rule, with rightsholder organizations demanding that some online services implement censoring filters that prevent anyone from uploading text, sounds or images if they have been claimed by a copyright holder. (more…)


Cloudflare’s Cache Can ‘Substantially Assist’ Copyright Infringers, Court Rules [TorrentFreak]

As one of the leading CDN and DDoS protection services, Cloudflare is used by millions of websites across the globe.

This includes thousands of “pirate” sites, including the likes of The Pirate Bay, which rely on the U.S.-based company to keep server loads down.

Many rightsholders have complained about Cloudflare’s involvement with these sites and in 2016 adult entertainment publisher ALS Scan took it a step further by dragging the company to court.

ALS accused the CDN service of various types of copyright infringement, noting that several customers used Cloudflare’s servers to distribute pirated content. While Cloudflare managed to have several counts dismissed, the accusation of contributory copyright infringement remains.

With the case heading to trial, both sides have submitted motions for partial summary judgment on this contributory infringement claim. This week California District Court Judge George Wu ruled on the matter, denying the CDN provider’s motion in its entirety.

One of Cloudflare’s arguments was that it did not substantially assist copyright infringements because the sites would remain online even if they were terminated from the service. It can’t end the infringements entirely on its own, the company argued.

The Court disagreed with this assessment, noting that Cloudflare’s cache can be seen as a substantial infringement by itself, which is something the company has control over.

“First of all, as to the infringements that are the cache copies, Cloudflare does appear to have the master switch,” Judge Wu writes.

“Second of all, just because the infringing images will remain online, does not mean the assistance is insubstantial. If that were true, then liability based on server space would rely on whether or not an infringing site had, or could acquire a backup server.”

Cloudflare also stressed that there are no simple measures it could take in response to alleged copyright infringements. Removing a cached copy based on a takedown notice is not an option, the company argued, as that leaves sites and their users vulnerable to malicious attacks.

Judge Wu didn’t deny that terminating service to such sites could cause security issues but added that this doesn’t mean that it’s okay for Cloudflare to support illegal activity.

“[I]f Cloudflare’s logic were accepted, there would be no web content too illegal, or dangerous, to justify termination of its services. While Cloudflare may do amazing things for internet security, the Court would have a hard time accepting that Cloudflare’s security features give it license to assist in any online activity,” Judge Wu writes.

From the order

Moving on to ALS’ motion, which was also denied in part, the Court brings more bad news for Cloudflare. While the CDN provider keeps its safe harbor defense at trial, the Court ruled that the existence of cache copies can be sufficient to prove that Cloudflare assisted in the alleged copyright infringements.

“The Court would find that, as a legal matter, Cloudflare’s CDN Network, to the extent it is shown to have created, stored, and delivered cache copies of infringing images, substantially assisted in infringement,” the order reads.

“The reason is straightforward: without Cloudflare’s services those cache copies would not have been created and served to end users,” a footnote clarifies.

The order doesn’t draw any conclusions about actual infringements. However, if ALS can prove to the jury that specific images were in Cloudflare’s cache, without permission, the “substantial assistance” element required for contributory liability is established.

If that happens, the only remaining element at trial is whether Cloudflare was aware of these infringements, which is where the takedown notices would come in.

The case will soon be in the hands of the jury and can still go in either direction. However, the order puts Cloudflare at a disadvantage as it can no longer argue that cached copies of infringing content by themselves are non-infringing. This will obviously be a concern to other CDN providers as well, which makes this a landmark case.

A copy of Judge Wu’s ruling, obtained by TorrentFreak, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Sven Hoexter: aput - simple upload script for a flat artifactory Debian repository [Planet Debian]

At work we're using Jfrog Artifactory to provide a Debian repository (among other kinds of repository). Using the WebUI sucks, uploading by cut&pasting a curl command is annoying too, so I just wrote down a few lines of shell to upload a single Debian binary package.

The expectation is a flat repository and that you edit the variables at the top to provide the repository URL, name and your API key. So no magic involved.
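The script itself isn't reproduced in this feed, but the underlying operation is just an authenticated HTTP PUT of the .deb into the flat repository. Here is roughly the same idea sketched in Python rather than shell; the base URL, repository name, and environment variable are placeholders you would edit, just like the variables at the top of aput.

    # Roughly the same idea as aput, in Python: PUT a single .deb into a flat
    # Artifactory Debian repository. URL, repo name and env var are placeholders.
    import os
    import sys
    import requests

    ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"
    REPO = "debian-flat-local"
    API_KEY = os.environ["ARTIFACTORY_API_KEY"]

    def upload(path: str) -> None:
        name = os.path.basename(path)
        with open(path, "rb") as fh:
            resp = requests.put(
                f"{ARTIFACTORY_URL}/{REPO}/{name}",
                data=fh,
                headers={"X-JFrog-Art-Api": API_KEY},
            )
        resp.raise_for_status()
        print(f"uploaded {name} -> {resp.json().get('downloadUri', 'ok')}")

    if __name__ == "__main__":
        upload(sys.argv[1])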

Scenes from today's national gun control student walkout [Boing Boing]

Today at 10AM local time, students across America walked out of their classes for 17 minutes, in memoriam of the 17 students murdered in the Parkland massacre at Marjory Stoneman Douglas High School in Parkland, Florida, exactly one month ago. (more…)

Stephen Hawking's final words to the internet: robots aren't the problem, capitalism is [Boing Boing]

The last message Stephen Hawking posted to a public internet forum was an answer to a question in a Reddit AMA, querying how humanity will weather an age of technological unemployment. (more…)

Study finds that for-pay scholarly journals contribute virtually nothing to the papers they publish [Boing Boing]

In the open access debate, advocates for traditional, for-profit scholarly journals often claim that these journals add value to the papers they publish in the form of editorial services that improve their readability and clarity. (more…)


News Post: Realiti [Penny Arcade]

Tycho: I tried to explain it to Grub Dumpster a couple days ago.  Imagine the moment where you’re trying to push the final point in, say, Numbani.  The sled has turned the corner and the lip is now over the point.  It’s lipping.  Time is out, and it’s only desperate, frenzied roosting on the objective itself that is keeping this balloon in the air. I feel nothing. At least, not compared with him.  There’s something happening over the baseline but you’d need electrodes and probably a grant of some kind to discern it.  I know people who,…

Badass Army: revenge-porn survivors teach each other digital and legal self-defense [Boing Boing]

Battling Against Demeaning & Abusive Selfie Sharing (AKA the Badass Army) is an activist group founded by revenge porn survivor Katelyn Bowden to offer self-defense training against the tactics of traffickers in "involuntary pornography," particularly the loathsome denizens of Anon-IB. (more…)

Hereditary ultraestablishment Democrat Dan Lipinski funded by railroad industry, whose safety regs he has held at bay [Boing Boing]

Dan Lipinski literally inherited his Illinois seat from his father, and has held it since 2004, despite voting against a $15 minimum wage and against a woman's right to choose to have an abortion. (more…)


United moves on to literally killing puppies [Boing Boing]

United Airlines has repeatedly attained viral fame for its mistreatment of its passengers and their belongings, and has even dabbled in pet murder, but now the airline has crossed another item off its worst-airline bucket-list, murdering a passenger's puppy by insisting that a dog-carrier be stored in an overhead locker during a 3.5 hour flight, despite having received a $125 cabin pet fee and despite the carrier fitting comfortably under the seat. (more…)

Google launches "plus codes": open geocodes for locations that don't have street addresses [Boing Boing]

In much of the world, addresses are difficult to convey because they refer to locations on unnamed streets, in unnumbered buildings, in unincorporated townships, sometimes in disputed national boundaries (I have often corresponded with people in rural Costa Rica whose addresses were "So-and-so, Road Without Name, 300m west of the bus stop, village, nearest town, region"). (more…)


AMDFLAWS: a series of potentially devastating (but controversial) attacks on AMD processors [Boing Boing]

Israeli security research firm CTS-Labs has published a white paper detailing nine flaws in AMD processors that they claim leave users open to devastating attacks with no mitigation strategies; these attacks include a range of manufacturer-installed backdoors. (more…)


ACME v2 and Wildcard Certificate Support is Live []

Let's Encrypt has announced that ACMEv2 (Automated Certificate Management Environment) and wildcard certificate support is live. ACMEv2 is an updated version of the ACME protocol that has gone through the IETF standards process. Wildcard certificates allow you to secure all subdomains of a domain with a single certificate. (Thanks to Alphonse Ogulla)

GNOME 3.28 released []

GNOME 3.28 has been released. "This release brings a more beautiful font, an improved on-screen keyboard and a new 'Usage' application. Improvements to core GNOME applications include support for favorites in Files and the file chooser, a better month view in the Calendar, support for importing pictures from devices in Photos, and many more." See the release notes for details.

Today in GPF History for Wednesday, March 14, 2018 [General Protection Fault: The Comic Strip]

A call home reveals Ki's mother has concerns about her brother, Yoshi...


Security updates for Wednesday []

Security updates have been issued by Arch Linux (calibre, dovecot, and postgresql), CentOS (dhcp and mailman), Fedora (freetype, kernel, leptonica, mariadb, mingw-leptonica, net-snmp, nx-libs, util-linux, wavpack, x2goserver, and zsh), Gentoo (chromium), Oracle (389-ds-base, mailman, and qemu-kvm), Red Hat (389-ds-base, kernel, kernel-alt, libreoffice, mailman, and qemu-kvm), Scientific Linux (mailman), Slackware (firefox and samba), and Ubuntu (samba).

Umlauts in River5 [Scripting News]

Braintrust query: Christoph Knopf reports re River5 a problem with reading umlauts.

He offers a feed that illustrates the problem.

I wrote a simple Node app that reads the file using the standard request package, and what he reports is observed. The umlauts appear as � characters.

I found this Stack Overflow thread that says the answer is to use iconv-lite. Others seem to confirm this is the way to go.
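For context, the usual root cause of this symptom is that the feed bytes arrive in a non-UTF-8 encoding (ISO-8859-1 or similar) but get decoded as UTF-8, which is exactly the situation iconv-lite handles on the Node side. A quick way to test that theory, sketched here in Python with a placeholder URL rather than as a Node patch:

    # Quick check of the encoding theory: fetch the raw bytes and decode them
    # with the charset the server declares instead of assuming UTF-8.
    # The URL is a placeholder for the feed in Christoph's report.
    import requests

    url = "https://example.org/feed-with-umlauts.xml"
    resp = requests.get(url)

    raw = resp.content                         # undecoded bytes
    declared = resp.encoding                   # charset from the Content-Type header

    wrong = raw.decode("utf-8", errors="replace")     # what a UTF-8-only reader sees
    right = raw.decode(declared or "utf-8", errors="replace")

    print("declared charset:", declared)
    print("decoded as UTF-8:", wrong[:120])           # umlauts show up as �
    print("decoded as declared:", right[:120])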

Before I contemplate making a change to River5, I wanted to get the opinion of the braintrust.

Thanks in advance.


[$] An introduction to RISC-V []

LWN has covered the open RISC-V ("risk five") processor architecture before, most recently in this article. As the ecosystem and tools around RISC-V have started coming together, a more detailed look is in order. In a series of two articles, guest author Richard W.M. Jones will look at what RISC-V is and follow up with an article on how we can now port Linux distributions to run on it.

Doctor my eyes [Scripting News]

I have an eye infection. This is a new thing for me since I started wearing contact lenses last year. This means I have to try to get along wearing glasses for a few days. My eyesight with glasses is terrible. Writing is kind of an adventure! So it may be slow going here for me, for the next few days until (hopefully) the infection clears up and I can resume life as a normally sighted person. I am of course getting help from my doctor.

Stop cherry-picking, start merging, Part 3: Avoiding problems by creating a new merge base [The Old New Thing]

The first two parts of the series discussed the bad things that can happen if you cherry-pick a change that is subsequently modified. If you're lucky, you get a merge conflict. If you're not lucky, your modification is simply ignored. If only there were a way to do a partial merge instead of a cherry-pick, the problems could have been avoided.

It turns out that git does support partial merges. It's just that nobody talks about it that way. You create a partial merge by doing a full merge with a custom merge base.

At the start of our saga, we have a commit tree like this:

apple   apple
A ← ← ← M1   master
  ↖︎
    F1   feature
    apple

From a common ancestor A, commit F1 happens on the feature branch, and commit M1 happens on the master branch. Now you realize that you need to apply a fix to both branches. You don't want to merge the entire feature branch into the master branch, because that would also pick up commit F1.

Here's the trick: Create a third branch and merge it into both the master and feature branches.

    apple       berry
    M1 ← ← ← M2   master
apple ↙︎     berry ↙︎
A ← ← ← P       patch
  ↖︎       ↖︎
    F1 ← ← ← F2   feature
    apple       berry

We created a new branch called patch based on the common ancestor commit A, and committed our fix to the patch branch as commit P. We then merged commit P into the master branch, and also into the feature branch, producing commits M2 and F2, respectively.

As before, work continues on both the master and feature branches, and eventually the root cause of the problem is determined, and the patch is reverted in the feature branch and a proper fix applied.

    apple       berry   berry
    M1 ← ← ← M2 ← M3   master
apple ↙︎     berry ↙︎
A ← ← ← P   patch
  ↖︎       ↖︎
    F1 ← ← ← F2 ← F3   feature
    apple       berry   apple

On the master branch, commit M3 does additional work unrelated to our patch. Meanwhile, in our feature branch, we figure out the proper fix and commit it as F3. Commit F3 changes the line back to apple (undoing our patch) as well as containing the proper fix.

Eventually, it comes time to merge the feature branch to the master branch. The merge chooses commit P as the merge base, since it is the most recent common ancestor. The commits involved in the three-way merge are P (the base), M3 (the head of the master branch) and F3 (the head of the feature branch). Let's erase all the other commits, since they don't participate in the merge.

    berry
    M3   master
berry ↙︎
P
  ↖︎
    F3   feature
    apple

There is no change to the line in question in the master branch relative to the merge base, but in the feature branch, berry changed to apple. Therefore, the merge result will have apple.

    apple       berry   berry   apple
    M1 ← ← ← M2 ← M3 ← ← ← M4   master
apple ↙︎     berry ↙︎       ↙︎  
A ← ← ← P   patch
  ↖︎       ↖︎
    F1 ← ← ← F2 ← F3       feature
    apple       berry   apple

But wait, what about the changes in commits M1 and F1? They were bypassed by commit P, weren't they? Are those changes going to be lost?

Nope, those changes will be merged in just fine because they are also present in M3 and F3. This is the same situation you run into in normal day-to-day operation when you merge from the master to the feature branch periodically while you work on your feature:

X ← X1 ← X2 ← X3 ← X4 ← X5   master
  ↖︎       ↖︎       ↖︎
    T1 ← T2 ← T3 ← T4 ← T5   feature

In the above diagram (a brand new diagram unrelated to the previous diagrams), you created a feature branch from the master branch at some commit X. Work continues in the master branch as commits X1, X2, and so on. Simultaneously, work continues in the feature branch as commits T1, T2, and so on. But every so often, the feature branch takes a merge from the master branch, so that the two don't drift too far out of sync.

Suppose you are now ready to merge the feature branch back to the master branch. The last time the feature branch merged from the master branch was when it merged commit X4, resulting in commit T5 on the feature branch. This makes commit X4 the merge base. Are you worried that this upcoming merge will throw away the changes in commits T1 through T4, since the merge base commit X4 post-dates them? No, you aren't, because you know that the changes in T1 through T4 are also present in T5, and they will go into the master branch as part of the merge.

Okay, back to our original story. Creating the patch branch and merging it into both the master and feature branches preserves the connection between the two commits in the respective branches, and in particular identifies them as being two manifestations of the same underlying change (namely, commit P). The resulting merge of the two branches recognizes this relationship and doesn't double-apply the change.

Basically, the patch branch converts what was originally a cherry-pick into a merge. It was the cherry-pick that was the source of all the problems, and the fix is to get rid of the cherry-pick and use merges instead. The temporary patch branch gives us our partial merge.

That's the basic idea. There are still a lot of questions to answer, such as "How do I find the correct merge base?", "What if I pick the wrong merge base?", "What if I need to perform two cherry-picks?", or "What if I already did the cherry-pick; can I somehow repair the damage and prevent the future merge conflict or ABA problem?" We'll start delving into them next time.

DoNotPay bot launches a cheap airline ticket that automates the nearly impossible business of getting refunds when prices fall [Boing Boing]

The DoNotPay bot (previously) is a versatile consumer advocacy chatbot created by UK-born Stanford computer science undergrad Joshua Browder, with its origins in a bot to beat malformed and improper traffic tickets, helping its users step through the process of finding ways to invalidate the tickets and saving its users millions in the process. (more…)

Happy 20th, Kottke! [Boing Boing]

Jason Kottke's blog turns 20 today (our online incarnation is a mere 18.3 years old, though we go back in print by another decade-plus); he celebrates with a lovely essay that recalls some of his thoughts in 2008, when he celebrated his tenth by speculating on whether he'd still be going in 2018, 2028 or 2038: "I had a personal realization recently: kottke.org isn’t so much a thing I’m making but a process I’m going through. A journey. A journey towards knowledge, discovery, empathy, connection, and a better way of seeing the world. Along the way, I’ve found myself and all of you. I feel so so so lucky to have had this opportunity."


Abhijith PA: Going to FOSSASIA 2018 [Planet Debian]

I will be attending the FOSSASIA Summit 2018 in Singapore. Thanks to Daniel Pocock, we have a Debian booth there. If you are attending, please add your name to this wiki page or contact me personally. We can hang out at the booth.


Stephen Hawking and me [I, Cringely]

I only met Stephen Hawking twice, both times in the same day. Hawking, who died a few hours ago, was one of the great physicists of any era. He wrote books, was the subject of a major movie about his early life, and of course survived longer than any other amyotrophic lateral sclerosis (ALS) sufferer, passing away at 76 while Lou Gehrig didn’t even make it to 40. We’re about to be awash in Hawking tributes, so I want to share with you my short experience of the man and maybe give more depth to his character than we might take away from the evening news.

Several years ago I was booked to speak at a (pre-Intel) Wind River Systems event at the Claremont Hotel in Oakland. The Claremont, like the Hotel Del Coronado in San Diego, is a huge old hotel built entirely of wood. Creaky old elevators and creaky old staircases connect all the floors, but stairs are faster and I was in a hurry to give my speech because Jerry Fiddler was waiting. So I took the stairs two at a time, then burst through a set of double doors and straight into…

Stephen Hawking.

You know how in moments of great danger time seems to slow down? That’s what happened to me. I pushed both doors open at once and there, perhaps a foot in front of me, was Hawking. My feet weren’t even touching the ground, and there was this startled-looking little guy in an electric wheelchair surrounded by four beautiful women.

It may have been a matter of pride, then, that Hawking outlived Hugh Hefner, given that they had such similar tastes.

The women moved to protect their charge but I was already bounding over the chair, somehow managing not to kill the most famous physicist on Earth. While I say that Hawking was startled, our eyes locked and I didn’t see any fear. He was well defended after all.

It may surprise you to know that I once almost killed Dick Feynman, too, but that’s a different story for another time.

There was no time to waste so I went straight on to my speech where I told this same story before the paint even had a chance to dry.

Hawking was at the Claremont for a physics event and that evening I ran into him again (metaphorically, not literally) holding court in the bar. Still surrounded by his four comely assistants, Hawking was parallel parked at the bar drinking a double martini through a straw that was at least two feet long.

I apologized for almost killing him and bought Hawking another double martini for his trouble.

He drank it.



Feeds | Oxford Reproducibility Lectures [Planet GridPP]

By Ana Todorović, Oxford University. Posted 14 March 2018.

The 600+ Companies PayPal Shares Your Data With [Schneier on Security]

One of the effects of GDPR -- the new EU General Data Protection Regulation -- is that we're all going to be learning a lot more about who collects our data and what they do with it. Consider PayPal, which just released a list of over 600 companies it shares customer data with. Here's a good visualization of that data.

Is 600 companies unusual? Is it more than average? Less? We'll soon know.


Four short links: 14 March 2018 [All - O'Reilly Media]

Flow Diagrams, Multiplayer Editing, Inductive Programming, and Manipulating Data

  1. SankeyMATIC -- A Sankey diagram depicts flows of any kind, where the width of each flow pictured is based on its quantity. [...] SankeyMATIC builds on the open source tool D3.js and its Sankey library, which are very powerful but require a fair amount of work and expertise to use. SankeyMATIC unlocks the capabilities of the D3 Sankey tool for anyone to use.
  2. Type in Tandem -- Decentralized, cross-editor, collaborative text-editing. A protocol, back end, and plugins for various editors.
  3. Approaches and Applications of Inductive Programming -- Inductive programming (IP) addresses the automated or semi-automated generation of computer programs from incomplete information such as input-output examples, constraints, computation traces, demonstrations, or problem-solving experience. I'm watching because I believe that a lot of software will eat itself.
  4. Your Data is Being Manipulated (danah boyd) -- text of a talk given to Strata last year. No amount of excluding certain subreddits, removing of categories of tweets, or ignoring content with problematic words will prepare you for those who are hell-bent on messing with you.

Continue reading Four short links: 14 March 2018.

CodeSOD: Lightweight Date Handling [The Daily WTF]

Darlene has a co-worker who discovered a problem: they didn’t know or understand any of the C++ libraries for manipulating dates and times. Checking the documentation or googling it is way too much...


Dolby Labs Sues Adobe For Copyright Infringement [TorrentFreak]

Adobe has some of the most recognized software products on the market today, including Photoshop which has become a household name.

While the company has been subjected to more than its fair share of piracy over the years, a new lawsuit accuses the software giant itself of infringement.

Dolby Laboratories is best known as a company specializing in noise reduction and audio encoding and compression technologies. Its reversed double ‘D’ logo is widely recognized after appearing on millions of home hi-fi systems and film end credits.

In a complaint filed this week at a federal court in California, Dolby Labs alleges that after supplying its products to Adobe for 15 years, the latter has failed to live up to its licensing obligations and is guilty of copyright infringement and breach of contract.

“Between 2002 and 2017, Adobe designed and sold its audio-video content creation and editing software with Dolby’s industry-leading audio processing technologies,” Dolby’s complaint reads.

“The basic terms of Adobe’s licenses for products containing Dolby technologies are clear; when Adobe granted its customer a license to any Adobe product that contained Dolby technology, Adobe was contractually obligated to report the sale to Dolby and pay the agreed-upon royalty.”

Dolby says that Adobe promised it wouldn’t sell any of its products (such as Audition, After Effects, Encore, Lightroom, and Premiere Pro) outside the scope of its licenses with Dolby. Those licenses included clauses which grant Dolby the right to inspect Adobe’s records through a third-party audit, in order to verify the accuracy of Adobe’s sales reporting and associated payment of royalties.

Over the past several years, however, things didn’t go to plan. The lawsuit claims that when Dolby tried to audit Adobe’s books, Adobe refused to “engage in even basic auditing and information sharing practices,” a rather ironic situation given the demands that Adobe places on its own licensees.

Dolby’s assessment is that Adobe spent years withholding this information in an effort to hide the full scale of its non-compliance.

“The limited information that Dolby has reviewed to-date demonstrates that Adobe included Dolby technologies in numerous Adobe software products and collections of products, but refused to report each sale or pay the agreed-upon royalties owed to Dolby,” the lawsuit claims.

Due to the lack of information in Dolby’s possession, the company says it cannot determine the full scope of Adobe’s infringement. However, Dolby accuses Adobe of multiple breaches including bundling licensed products together but only reporting one sale, selling multiple products to one customer but only paying a single license, failing to pay licenses on product upgrades, and even selling products containing Dolby technology without paying a license at all.

Dolby entered into licensing agreements with Adobe in 2003, 2012 and 2013, with each agreement detailing payment of royalties by Adobe to Dolby for each product licensed to Adobe’s customers containing Dolby technology. In the early days when the relationship between the companies first began, Adobe sold either a physical product in “shrink-wrap” form or downloads from its website, a position which made reporting very easy.

In late 2011, however, Adobe began its transition to offering its Creative Cloud (SaaS model) under which customers purchase a subscription to access Adobe software, some of which contains Dolby technology. Depending on how much the customer pays, users can select up to thirty Adobe products. At this point, things appear to have become much more complex.

On January 15, 2015, Dolby tried to inspect Adobe’s books for the period 2012-2014 via a third-party auditing firm. But, according to Dolby, over the next three years “Adobe employed various tactics to frustrate Dolby’s right to audit Adobe’s inclusion of Dolby Technologies in Adobe’s products.”

Dolby points out that under Adobe’s own licensing conditions, businesses must allow Adobe’s auditors to inspect their records on seven days’ notice to confirm they are not in breach of Adobe licensing terms. Any discovered shortfalls in licensing must then be paid for, at a rate higher than the original license. This, Dolby says, shows that Adobe is clearly aware of why and how auditing takes place.

“After more than three years of attempting to audit Adobe’s Sales of products containing Dolby Technologies, Dolby still has not received the information required to complete an audit for the full time period,” Dolby explains.

But during this period, Adobe didn’t stand still. According to Dolby, Adobe tried to obtain new licensing from Dolby at a lower price. Dolby stood its ground and insisted on an audit first but despite an official demand, Adobe didn’t provide the complete set of books and records requested.

Eventually, Dolby concluded that Adobe had “no intention to fully comply with its audit obligations” so called in its lawyers to deal with the matter.

“Adobe’s direct and induced infringements of Dolby Licensing’s copyrights in the Asserted Dolby Works are and have been knowing, deliberate, and willful. By its unauthorized copying, use, and distribution of the Asserted Dolby Works and the Adobe Infringing Products, Adobe has violated Dolby Licensing’s exclusive rights…,” the lawsuit reads.

Noting that Adobe has profited and gained a commercial advantage as a result of its alleged infringement, Dolby demands injunctive relief restraining the company from any further breaches in violation of US copyright law.

“Dolby now brings this action to protect its intellectual property, maintain fairness across its licensing partnerships, and to fund the next generations of technology that empower the creative community which Dolby serves,” the company concludes.

Dolby’s full complaint can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Hitsville [Seth Godin's Blog on marketing, tribes and respect]

The latest episode of my Akimbo podcast is about hits. (Click, then scroll down to see all the episodes, or, even better, subscribe for free...)

The hit that sweeps an industry, like a thresher through a wheat field. The one that everyone is talking about. The lines down the street, the box office record, the career maker.

Isn't that what creators dream of?

There are three ways a hit happens:

  1. Many people who love a particular medium (music, the theatre, books) coalesce around a single new title.
  2. Many people who rarely spend time in that medium make this title the one thing they're going to engage in.
  3. A few people consume that title over and over again.

It's rare to invent something that works on all three levels. Black Panther is not like The Da Vinci Code, which is not like the Grateful Dead.

You can build something that the cool kids love. You can build something that the bystanders love. Or you can build a cult favorite. Best to do it on purpose.

[PS we're going to record an episode of Akimbo today. The episode is going to be about 'live' and of course, we're going to record it live at 10 am ET. Feel free to tune in and join us.]

[Also, Bernadette's new book is now available. It's worth your time.]



Fish In The Sea, p6 [Ctrl+Alt+Del Comic]

Ctrl+Alt+Del is being sponsored this week by our friends over at SUMO! SUMO is celebrating their 15th anniversary with 15% off their collection of comfy bean bag chairs. I’ve sat in (and fallen asleep in) a lot of their bean bag chairs over the years, and I highly recommend them to anyone looking for high-end lounge furniture!

For books that ship from the US warehouse, click here.

For delivery to Australia or New Zealand, use the buttons below.

To Australia Book + Shipping = $90.

To New Zealand, Book + Shipping = $110.


Comic: Realiti [Penny Arcade]

New Comic: Realiti

Laura Arjona Reina: WordPress for Android and short blog posts [Planet Debian]

I use for my social network interactions and from time to time I post short thoughts there.

I usually reserve my blog for longer posts including links etc.

That means that it’s harder for me to publish in my blog.

OTOH my daily commute time may be enough to craft short posts. I bring my laptop with me, but it’s common that I open kate, begin to write, and arrive at my destination with my post almost finished but unpublished. Or, in the second variant, I cannot sit, so I cannot type in the metro, and I pass the time reading or thinking.

I’ve just installed WordPress for Android, and hopefully that will help me write short posts during my commute and publish more quickly. Let’s try and see what happens 🙂


Comment about this post in this thread.


Norbert Preining: Replacing a lost Yubikey [Planet Debian]

Some weeks ago I lost my purse with everything in it: residency card, driving license, credit cards, cash cards, all kinds of ID cards, and last but not least my Yubikey NEO. This being Japan, I expected the purse to show up in a few days, most probably with the money gone but all the cards intact. Unfortunately, not this time. So after finally having most of the cards reissued, I also took the necessary steps concerning the Yubikey, which contained my GnuPG subkeys and was used as a second factor for several services (see here and here).

Although the GnuPG keys on the Yubikey are considered safe from extraction, I still decided to revoke them and create new subkeys – one of the big advantages of subkeys: one does not start from zero, but simply creates new subkeys instead of running around trying to collect signatures again.

Another thing that has to be done is removing the old Yubikey from all the services where it has been used as a second factor. In my case that was quite a lot (Google, GitHub, Dropbox, NextCloud, WordPress, …). BTW, you do have a set of backup keys saved somewhere for all the services you are using, right? It helps a lot with getting back into the system.

GnuPG keys renewal

To remind myself of what is necessary, here are the steps:

  • Get your master key from the backup USB stick
  • revoke the three subkeys that are on the Yubikey
  • create new subkeys
  • install the new subkeys onto a new Yubikey, update keyservers

All of that is quite straightforward: use gpg --expert --edit-key YOUR_KEY_ID, then select a subkey with key N, followed by revkey. You can select all three subkeys and revoke them at the same time: just type key N for each of the subkeys (where N is the index of the key, starting from 0).

Next create new subkeys, here you can follow the steps laid out in the original blog. In the same way you can move them to a new Yubikey Neo (good that I bought three of them back then!).

Last but not least you have to update the key-servers with your new public key, which is normally done with gpg --send-keys (again see the original blog).
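
Put together, the whole session looks roughly like the following sketch (the key ID and subkey indices are placeholders, and the exact prompts vary between GnuPG versions):

gpg --expert --edit-key YOUR_KEY_ID
gpg> key N          # repeat for every subkey you want to revoke
gpg> revkey         # revoke the selected subkeys
gpg> addkey         # create a replacement subkey; repeat for sign/encrypt/auth
gpg> keytocard      # move a newly created subkey onto the new Yubikey
gpg> save
gpg --send-keys YOUR_KEY_ID   # push the updated public key to the keyservers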

The trickiest part was setting up and distributing the keys on my various computers: the master key remains, as usual, on offline media only. On my main desktop at home I have the subkeys available, while on my laptop I only have stubs pointing at the Yubikey. This needs a bit of shuffling around, but should be clear from the previous blogs.

Full disk encryption

I also had my Yubikey registered as an unlock device for the LUKS-based full disk encryption. The status before the update was as follows:

$ cryptsetup luksDump /dev/sdaN

Version:        1
Cipher name:    aes

Key Slot 0: ENABLED
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: ENABLED

I was fairly sure that the slot for the old Yubikey was slot 7, but not completely certain. So I first registered the new Yubikey in slot 6 with

yubikey-luks-enroll -s 6 -d /dev/sdaN

and checked that I can unlock during boot using the new Yubikey. Then I cleared the slot information in slot 7 with

cryptsetup luksKillSlot /dev/sdaN 7

and again made sure that I can boot using my passphrase (in slot 0) and the new Yubikey (in slot 6).
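
To double-check the result, the LUKS header can be dumped again; a rough sketch, using the same placeholder device as above:

$ cryptsetup luksDump /dev/sdaN
# slots 0 and 6 should now be ENABLED, slot 7 DISABLED
$ cryptsetup open --test-passphrase /dev/sdaN
# checks the passphrase in slot 0 without mapping the device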

TOTP/U2F second factor authentication

The last step was re-registering the new Yubikey with all the favorite services as a second factor, removing the old key along the way. In my case the list comprises several WordPress sites, GitHub, Google, NextCloud, Dropbox, and whatever else I have forgotten.

Although this is nearly the worst-case scenario (ok, the main key was not compromised!), everything went very smoothly and easily, to my big surprise. Even my Debian upload ability was not noticeably interrupted. All in all it shows that having subkeys on a Yubikey is a very useful and effective solution.


One more week of the Humble Software Bundle: MAGIX Sounds of... [Humble Bundle Blog]

One more week of the Humble Software Bundle: MAGIX Sounds of Music! 

The hills are alive with the… well, you know. Our software bundle has over $1,110 worth of software and goodies. You’ll get ACID Pro 7, SOUND FORGE Audio Studio 10, SOUND FORGE Pro 11, and more resources to fill your heart with the sound of music 🎶


Girl Genius for Wednesday, March 14, 2018 [Girl Genius]

The Girl Genius comic for Wednesday, March 14, 2018 has been posted.



Dirk Eddelbuettel: Rcpp 0.12.16: A small update [Planet Debian]

The sixteenth update in the 0.12.* series of Rcpp landed on CRAN earlier this evening after a few days of gestation in incoming/ at CRAN.

Once again, this release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, the 0.12.14 release in November 2017, and the 0.12.15 release in January 2018, making it the twentieth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1316 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

Compared to other releases, this release contains a relatively small change set, but between Kirill, Kevin and myself a few things got cleaned up and solidified. Full details are below.

Changes in Rcpp version 0.12.16 (2018-03-08)

  • Changes in Rcpp API:

    • Rcpp now sets and puts the RNG state upon each entry to an Rcpp function, ensuring that nested invocations of Rcpp functions manage the RNG state as expected (Kevin in #825 addressing #823).

    • The R::pythag wrapper has been commented out; the underlying function has been gone from R since 2.14.0, and ::hypot() (part of C99) is now used unconditionally for complex numbers (Dirk in #826).

    • The long long type can now be used on 64-bit Windows (Kevin in #811 and again in #829 addressing #804).

  • Changes in Rcpp Attributes:

    • Code generated with cppFunction() now uses .Call() directly (Kirill Mueller in #813 addressing #795).
  • Changes in Rcpp Documentation:

    • The Rcpp FAQ vignette is now indexed as 'Rcpp-FAQ'; a stale Gmane reference was removed and an entry about getting compilers under Conda was added.

    • The top-level now has a Support section.

    • The Rcpp.bib reference file was refreshed to current versions.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Security researchers publish Ryzen flaws [OSNews]

Through the advent of Meltdown and Spectre, there is a heightened element of nervousness around potential security flaws in modern high-performance processors, especially those that deal with the core and critical components of company business and international infrastructure. Today, CTS-Labs, a security company based in Israel, has published a whitepaper identifying four classes of potential vulnerabilities of the Ryzen, EPYC, Ryzen Pro, and Ryzen Mobile processor lines. AMD is in the process of responding to the claims, but was only given 24 hours of notice rather than the typical 90 days for standard vulnerability disclosure. No official reason was given for the shortened time. Nothing in technology is safe. As always, my advice is to treat any data on a phone or computer as potentially compromisable.

Trump blocks Broadcom's bid for Qualcomm [OSNews]

President Trump on Monday blocked Broadcom's $117 billion bid for the chip maker Qualcomm, citing national security concerns and sending a clear signal that he was willing to take extraordinary measures to promote his administration’s increasingly protectionist stance. In a presidential order, Mr. Trump said "credible evidence" had led him to believe that if Singapore-based Broadcom were to acquire control of Qualcomm, it "might take action that threatens to impair the national security of the United States." The acquisition, if it had gone through, would have been the largest technology deal in history.

This US administration would eventually stumble onto doing the right thing - infinite monkeys and all that - so here we are. To explain why this is a good move, Ben Thompson's article about this issue is a fantastic, must-read explainer.

There is a certain amount of irony here: the government is intervening in the private market to stop the sale of a company that is being bought because of government-granted monopolies. Sadly, I doubt it will occur to anyone in government to fix the problem at its root, and Qualcomm would be the first to fight against the precise measures - patent overhaul - that would do more than anything to ensure the company remains independent and incentivized to spend even more on innovation, because its future would depend on innovation to a much greater degree than it does now. The reality is that technology has flipped the entire argument for patents - that they spur innovation - completely on its head. The very nature of technology - that costs are fixed and best maximized over huge user-bases, along with the presence of network effects - mean there are greater returns to innovation than ever before. The removal of most technology patents would not reduce the incentive to innovate; indeed, given that a huge number of software patents in particular are violated on accident (unsurprising, given that software is ultimately math), their removal would spur more. And, as Qualcomm demonstrates, one could even argue such a shift would be good for national security.


Q.U.B.E. 2 is available now in the Humble Store! Financed by... [Humble Bundle Blog]

Q.U.B.E. 2 is available now in the Humble Store! 

Financed by Humble Bundle, Q.U.B.E. 2 is the sequel to the hit first-person puzzle game Q.U.B.E. You are Amelia Cross, a stranded archaeologist who has awoken among the ruins of an ancient alien landscape. With the distant help of another survivor you must manipulate the structure of this mysterious world and find a way back home.

Savage Love [The Stranger, Seattle's Only Newspaper: Savage Love]

Maybe he just doesn't know what love feels like. by Dan Savage

I'm a 33-year-old woman from Melbourne, Australia, dating a 24-year-old man. We've been dating for about eight months; it is exclusive and official. He's kind and sweet, caring and giving, and his penis is divine. The thing is, he confessed to me recently that he doesn't really "feel." The way he explained it is, the only emotions he feels are fear and anxiousness that he'll disappoint the people he cares about. He says he's never been in love. He said his dad is the same way. The only time I see him really "feel" is when he's high, which he is semi-frequently. He uses MDMA and he comes alive—he seems the way a "normal" person does when they're in love. But when he's sober, it's like he's trying to mimic the things a person in love would say or do. I recently confessed I am falling in love with him and told him I wasn't saying this with any expectation of him feeling the same; I just wanted him to know. He responded that he cares for me a lot—but that's it. I'm now worried that he'll never love me. I don't want kids, so time isn't critical for me, but I don't want to be with someone who won't ever love me.

Lacking One Vaunted Emotion

You didn't use the P-word (psychopath) or the S-word (sociopath), LOVE, but both came to mind as I was reading your letter. Someone who isn't capable of feeling? Isn't that textbook P-word/S-word stuff?

"The fear with someone who doesn't 'feel' is that they may be a psychopath or a sociopath, terms that are used interchangeably," said Jon Ronson, author of The Psychopath Test: A Journey Through the Madness Industry. "And lots of the items on the psychopath checklist relate to an inability to experience deep emotions—like Shallow Affect, Lack of Empathy and Lack of Remorse. However, I have good news for LOVE! This line: 'The only emotions he feels are fear and anxiousness that he'll disappoint the people he cares about' is the critical one. Psychopaths do not feel anxiety. In fact, my favorite thing a psychologist said to me about this was: 'If you're worried you may be psychopath, that means you aren't one.' Also, psychopaths don't care about disappointing loved ones! All those emotions that relate to an overactive amygdala—fear, remorse, guilt, regret, empathy—psychopaths don't feel them."

So your boyfriend's not a psychopath. Not that you asked. But, you know, just in case you were worried. Anyway...

My hunch is that your boyfriend's problem isn't an inability to feel love, LOVE, but an inability to recognize the feelings he's having as love. (Or potentially love, as it's only been eight months.) What is romantic love but a strong desire to be with someone? The urge to be sweet to them, to take care of them, to do for them? Maybe he's just going through the motions with you—a conscious mimic-it-till-you-make it strategy—or maybe the double whammy of a damaged dad and that toxic masculinity stuff sloshing around out there left him blocked, LOVE, or emotionally constipated. And while MDMA can definitely be abused—moderation in all things, kids, including moderation—the effect it has on him is a hopeful sign. MDMA is not an emotional hallucinogen; the drug has been used in couples counseling and to treat PTSD, not because it makes us feel things that aren't there (in the way a hallucinogen makes us see things that aren't there), but because it allows genuine feelings to surface and, for a few hours, to be felt intensely. So he can feel love—he just has to learn how to tap into those feelings and/or recognize them without an assist from MDMA.

Jon Ronson had one last bit of advice for you, LOVE: "Marry him and his divine penis!"

I agree with Jon, of course, but a long, leisurely engagement is definitely in order. You've only been seeing this guy and his divinity dick for eight months—don't propose to him for at least another year, LOVE, and make marriage conditional upon him seeing a shrink four times as often as he sees his MDMA dealer.

Follow Jon Ronson on Twitter @jonronson, read all of his books (So You've Been Publicly Shamed? is urgently required reading for anyone who spends time online), and check out his amazing podcast, The Butterfly Effect. To access all things Jon Ronson, go to

My boyfriend of 1.5 years shared (several months into dating) that he has a fantasy of having a threesome. I shared that I had also fantasized about this but I never took my fantasies seriously. Right away, he started sending me Craigslist posts from women and couples looking for casual sex partners. I told him I wasn't interested in doing anything for real. A few months later, we went on vacation and I said I wanted to get a massage. He found a place that did "sensual" couples massage. I wanted nothing to do with this. During sex, he talks about the idea of someone else being around. This does turn me on and I like thinking about it when we are messing around. But I don't want to have any other partners. I'm like a mashup of Jessica Day, Leslie Knope, and Liz Lemon if that gives you an idea of how not-for-me this all is. When I say no to one idea, he comes up with another one. I would truly appreciate some advice.

Boyfriend Into Group Sex I'm Not 

Short answer: Sexual compatibility is important. It's particularly important in a sexually-exclusive relationship. You want a sexually-exclusive relationship; your boyfriend doesn't want a sexually-exclusive relationship—so you two aren't sexually compatible, BIGSIN, and you should break up.

Slightly longer answer: Your boyfriend did the right thing by laying his kink cards on the table early in the relationship—he's into threesomes, group sex, and public sex—and you copped to having fantasies about threesomes, BIGSIN, but not a desire to experience one. He took that as an opening: Maybe if he could find the right person/couple/scenario/club, you would change your mind. Further fueling his false hopes: You get turned on when he talks about having "someone else around" when you two have sex. Now, lots of people who very much enjoy threesomes and/or group sex were unsure or hesitant at first, but gave in to please (or shut up) a partner and wound up being glad they did. If you're certain you could never be one of those people—reluctant at first but happy your partner pressed the issue—you need to shut this shit down, Liz Lemon style. Tell him no more dirty talking about this shit during sex, no more entertaining the idea at all. Being with you means giving up this fantasy, BIGSIN, and if he's not willing to give it up—and to shut up about it—then you'll have to break up.

I'm an 18-year-old woman who has been with my current boyfriend for a year, but this has been an issue across all of my sexual relationships. In order to reach climax, I have to fantasize about kinky role-play-type situations. I don't think I want to actually act out the situations/roles because of the degrading/shameful feelings they dredge up, but the idea of other people doing them is so hot. This frustrates me because it takes me out of the moment with my partner. I'm literally thinking about other people during sex when I should be thinking about him! What can I do to be more in the moment?

Distracted Earnest Girlfriend Requires A Different Excitement

Actually, doing the kinky role-play-type things you "have to" fantasize about in order to come would help you feel more connected to your boyfriend—but to do that, DEGRADE, you need to stop kink-shaming yourself. So instead of thinking of those kinky role-play-type things as degrading or shameful, think of them as exciting and playful. Exciting because they excite you (duh), and playful because that's literally what kinky role-play-type things are: play. It's cops and robbers for grown-ups with your pants off, DEGRADE, but this game doesn't end when your mom calls you in for dinner, it ends when you come. So long as you suppress your kinks—so long as you're in flight from the stuff that really arouses you—your boyfriend will never truly know you and you'll never feel truly connected to him.

On the Lovecast, a sexy toy review that will send you packing:



Tuesday, 13 March


The case for a deliberate data strategy in today’s attention-deficit economy [All - O'Reilly Media]

Anoop Dawar shares principles successful companies are using to inspire an insight-driven ethos and build data-competent organizations.

Continue reading The case for a deliberate data strategy in today’s attention-deficit economy.

Amazing Tales: a storytelling game with dice for kids and grownups [Boing Boing]

Tim Harford (previously) turned me on to Martin Lloyd's Amazing Tales, a storytelling RPG designed to be played between a grownup games-master and one or more kids. (more…)


Reproducible builds folks: Reproducible Builds: Weekly report #150 [Planet Debian]

Here's what happened in the Reproducible Builds effort between Sunday March 4 and Saturday March 10 2018:

diffoscope development

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

Mattia Rizzolo backported version 91 to the Debian backports repository.

In addition, Juliana — our Outreachy intern — continued her work on parallel processing.

Bugs filed

In addition, package reviews have been added, 44 have been updated and 26 have been removed in this week, adding to our knowledge about identified issues.

Lastly, two issue classification types have been added: development

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (49)
  • Antonio Terceiro (1)
  • James Cowgill (1)
  • Ole Streicher (1)


This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Read the Prologue of Head On! [Whatever]

That’s right! The prologue to my upcoming novel is up at You can read it now! Here’s the link!

Oh, and, hello. I’ve had a busy day away from the Internets. Hope you’re well.



Every day is a good day when you paint with the Humble Bob Ross... [Humble Bundle Blog]

Every day is a good day when you paint with the Humble Bob Ross Bundle. 

In this happy little bundle, you’ll find the digital debut of Bob Ross: The Happy Painter, The Joy of Painting ebooks, and more from the iconic PBS artist. Create your own masterpiece in Corel Painter Essentials 6 (with exclusive Bob Ross-inspired brushes!) or have a happy time playing Passpartout, Drawful 2, and more.


[$] Designing ELF modules []

The bpfilter proposal posted in February included a new type of kernel module that would run as a user-space program; its purpose is to parse and translate iptables rules under the kernel's control but in a contained, non-kernel setting. These "ELF modules" were reposted for review as a standalone patch set in early March. That review has happened; it is a good example of how community involvement can improve a special-purpose patch and turn it into a more generally useful feature.

Flash, Windows Users: It’s Time to Patch [Krebs on Security]

Adobe and Microsoft each pushed critical security updates to their products today. Adobe’s got a new version of Flash Player available, and Microsoft released 14 updates covering more than 75 vulnerabilities, two of which were publicly disclosed prior to today’s patch release.

The Microsoft updates affect all supported Windows operating systems, as well as all supported versions of Internet Explorer/Edge, Office, Sharepoint and Exchange Server.

All of the critical vulnerabilities from Microsoft are in browsers and browser-related technologies, according to a post from security firm Qualys.

“It is recommended that these be prioritized for workstation-type devices,” wrote Jimmy Graham, director of product management at Qualys. “Any system that accesses the Internet via a browser should be patched.”

The Microsoft vulnerabilities that were publicly disclosed prior to today involve Microsoft Exchange Server 2010 through 2016 editions (CVE-2018-0940) and ASP.NET Core 2.0 (CVE-2018-0808), said Chris Goettl at Ivanti. Microsoft says it has no evidence that attackers have exploited either flaw in active attacks online.

But Goettl says public disclosure means enough information was released publicly for an attacker to get a jump start or potentially to have access to proof-of-concept code making an exploit more likely. “Both of the disclosed vulnerabilities are rated as Important, so not as severe, but the risk of exploit is higher due to the disclosure,” Goettl said.

Microsoft says by default, Windows 10 receives updates automatically, “and for customers running previous versions, we recommend they turn on automatic updates as a best practice.” Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

Adobe’s Flash Player update fixes at least two critical bugs in the program. Adobe said it is not aware of any active exploits in the wild against either flaw, but if you’re not using Flash routinely for many sites, you probably want to disable or remove this awfully buggy program.

Just last month Adobe issued a Flash update to fix two vulnerabilities that were being used in active attacks in which merely tricking a victim into viewing a booby-trapped Web site or file could give attackers complete control over the vulnerable machine. It would be one thing if these zero-day flaws in Flash were rare, but this is hardly an isolated occurrence.

Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability. Chrome also bundles Flash, but blocks it from running on all but a handful of popular sites, and then only after user approval.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis. Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits.

The latest standalone version of Flash that addresses these bugs is  for Windows, Mac, Linux and Chrome OS. But most users probably would be better off manually hobbling or removing Flash altogether, since so few sites actually require it still. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.


An important Samba 4 security release []

Anybody running Samba 4 servers probably wants to take a look at this alert and upgrade their systems. "CVE-2018-1057: On a Samba 4 AD DC the LDAP server in all versions of Samba from 4.0.0 onwards incorrectly validates permissions to modify passwords over LDAP allowing authenticated users to change any other users' passwords, including administrative users."

Numerous vulnerabilities in AMD processors []

A company called CTS has disclosed a long series of vulnerabilities in AMD processors. "The chipset is a central component on Ryzen and Ryzen Pro workstations: it links the processor with hardware devices such as WiFi and network cards, making it an ideal target for malicious actors. The Ryzen chipset is currently being shipped with exploitable backdoors that could let attackers inject malicious code into the chip, providing them with a safe haven to operate from." See the associated white paper for more details.

Update: there are a lot of questions circulating about the actual severity of these vulnerabilities and the motivations of the people reporting them. It may not be time to panic quite yet.


Firefox 59 released []

Mozilla has released Firefox 59, the next iteration of Firefox Quantum. From the release notes: "On Firefox for desktop, we’ve improved page load times, added tools to annotate and crop your Firefox Screenshots, and made it easier to arrange your Top Sites on the Firefox Home page. On Firefox for Android, we’ve added support for sites that stream video using the HLS protocol."


Playboy Wants to Know Who Downloaded Their Playmate Images From Imgur [TorrentFreak]

Late last year Playboy filed a copyright lawsuit against the popular blog Boing Boing.

The site had previously published an article linking to an archive of Playboy centerfold images, which the adult magazine saw as problematic.

Boing Boing’s parent company Happy Mutants was accused of various counts of copyright infringement, with Playboy claiming that it exploited their playmates’ images for commercial purposes.

The California district court was not convinced, however. In an order last month, Judge Fernando Olguin noted that it is not sufficient to argue that Boing Boing merely ‘provided the means’ to carry out copyright-infringing activity. There also has to be a personal action that ‘assists’ the infringing activity.

“For example, the court is skeptical that plaintiff has sufficiently alleged facts to support either its inducement or material contribution theories of copyright infringement,” Judge Olguin wrote.

Playboy was given the option to file a new complaint before the end of February, or else the case would be dismissed. The magazine publisher decided to let the matter go, for now, and didn’t file a new complaint.

That doesn’t mean that they’ll completely pass on the issue though. Instead of only going after Boing Boing, Playboy is now digging up information on the people who posted the infringing content on Imgur and YouTube.

Last week the California court asked why Playboy hadn’t responded after the latest order. The company replied that it thought no response was needed and that the case would be dismissed automatically, but it included another interesting note.

“Plaintiff has elected to pursue third party subpoenas under, inter alia, the Digital Millennium Copyright Act Section 512(h) in order to obtain further facts before determining how to proceed on its claims against Happy Mutants,” Playboy writes.

Looking through the court dockets, we observed that Playboy requested DMCA subpoenas against both Imgur and YouTube. In both cases, the company demands information that can identify the uploaders, including email addresses, phone numbers, and other documents or information.

With Imgur, it goes even further. Here, Playboy also requests information on people “who downloaded any photos” from the Imgur gallery in question. That could be quite a long list as anyone would have to download the images in order to see them. This could include millions of people.

Playboy subpoena against Imgur

A broad request like this goes further than we’ve ever seen. However, soon after the requests came in, the clerk granted both subpoenas.

At this point, it’s unclear whether Playboy also intends to go after the uploaders directly. It informed the California District Court that these “further facts” will help to determine whether it will pursue its claims against Boing Boing, which means that it must file a new complaint.

It’s worth mentioning, however, that the subpoenas were obtained early last month before the case was dismissed.

Alternatively, Playboy can pursue the Imgur and YouTube uploaders directly, which is more likely to succeed than the infringement claims against Boing Boing. That’s only an option if Imgur and YouTube have sufficient information to identify the infringers in question, of course.

The allegedly infringing centerfold video is no longer listed on YouTube. The Imgur gallery, which was viewed more than two million times, is no longer available either.


Playboy’s latest filing mentioning the DMCA subpoenas can be found here (pdf). We also obtained copies of the Youtube (pdf + attachment) and Imgur (pdf + attachment) subpoenas themselves.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Thomas Lange: build service now supports creation of VM disk images [Planet Debian]

A few days ago, I've added a new feature to the build service.

In addition to creating an installation image, the build service can now create bootable disk images. These disk images can be booted in a VM such as KVM, VirtualBox or VMware, or in OpenStack.

You can define a disk image size, select a language, set a user and root password, select a Debian distribution and enable backports with just one click. It's possible to add your public key for access to the root account without a password; this can also be done by just specifying your GitHub account. Several disk formats are supported, like raw (compressed with xz or zstd), qcow2, vdi, vhdx and vmdk. And you can add your own list of packages that you want to have inside the OS. After a few minutes the disk image is created and you will get a download link, including a log of the creation process and a link to the FAI configuration that was used to create your customized image.
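
As a rough sketch of how such an image can then be used locally (the file names are placeholders; pick the qcow2 format if you plan to boot it with KVM/QEMU):

# a raw image compressed with xz needs to be decompressed first
xz -d disk.raw.xz
# boot a qcow2 image in a local KVM/QEMU virtual machine
qemu-system-x86_64 -enable-kvm -m 2G -drive file=disk.qcow2,format=qcow2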

The new service is available at

If you have any comments, feature requests or feedback, do not hesitate to contact me.


Today in GPF History for Tuesday, March 13, 2018 [General Protection Fault: The Comic Strip]

"Bowling is a great way for you to check out my butt without reprimand."


Link [Scripting News]

I'm looking for a few people to help test a new product. You should be an RSS user, have an OPML subscription list or two (or more), and be ready to read docs, report problems, and ask questions. If you'd like to participate please post a note here. Thanks!


DragonFFI: FFI/JIT for the C language using Clang/LLVM [LLVM Project Blog]


A foreign function interface is "a mechanism by which a program written in one programming language can call routines or make use of services written in another".
In the case of DragonFFI, we expose a library that allows calling C functions and using C structures from any language. Basically, we want to be able to do this, let's say in Python:
import pydffi
CU = pydffi.FFI().cdef("int puts(const char* s);");
CU.funcs.puts("hello world!")
or, in a more advanced way, for instance to use libarchive directly from Python:
import pydffi
CU = pydffi.FFI().cdef("#include <archive.h>")
a = CU.funcs.archive_read_new()
assert a
This blog post presents related works and their drawbacks, then how Clang/LLVM is used to circumvent these drawbacks, the inner workings of DragonFFI, and further ideas.
The code of the project is available on GitHub: Python 2/3 wheels are available for Linux/OSX x86/x64. Python 3.6 wheels are available for Windows x64. On all these architectures, just use:
$ pip install pydffi
and play with it :)

See below for more information.

Related work

libffi is the reference library that provides an FFI for the C language. cffi is a Python binding around this library that also uses PyCParser to be able to easily declare interfaces and types. Both these libraries have limitations, among them:
  • libffi does not support the Microsoft x64 ABI under Linux x64. It isn't that trivial to add a new ABI (hand-writing the ABI, getting the ABI right, ...), while a lot of effort has already been put into compilers to get these ABIs right.
  • PyCParser only supports a very limited subset of C (no includes, function attributes, ...).
Moreover, in 2014, Jordan Rose and John McCall from Apple gave a talk at the LLVM developer meeting in San José about how Clang can be used for C interoperability. This talk also shows various ABI issues, and was a source of inspiration for DragonFFI at the beginning.

Somewhat related: Sean Callanan, who worked on lldb, gave a talk in 2017 at the LLVM developer meeting in San José on how we could use parts of Clang/LLVM to implement some kind of eval() for C++. What can be learned from this talk is that debuggers like lldb must also be able to call arbitrary C functions, and use debug information among other things to do so (which we also do, see below :)).

DragonFFI is based on Clang/LLVM, and thanks to that it is able to get around these issues:
  • it uses Clang to parse header files, allowing direct usage of a C library's headers without adaptation;
  • it supports as many calling conventions and function attributes as Clang/LLVM do;
  • as a bonus, Clang and LLVM allow on-the-fly compilation of C functions, without relying on the presence of a compiler on the system (you still need the headers of the system's libc though, or the MSVCRT headers under Windows);
  • and this is a good way to have fun with Clang and LLVM! :)
Let's dive in!

Creating an FFI library for C

Supporting C ABIs

A C function is always compiled for a given C ABI. The C ABI isn't defined per the official C standards, and is system/architecture-dependent. Lots of things are defined by these ABIs, and it can be quite error prone to implement.

To see how ABIs can become complex, let's compile this C code:

#include <stdio.h>

typedef struct {
  short a;
  int b;
} A;

void print_A(A s) {
  printf("%d %d\n", s.a, s.b);
}

Compiled for Linux x64, it gives this LLVM IR:

target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-pc-linux-gnu"

@.str = private unnamed_addr constant [7 x i8] c"%d %d\0A\00", align 1

define void @print_A(i64) local_unnamed_addr {
%2 = trunc i64 %0 to i32
%3 = lshr i64 %0, 32
%4 = trunc i64 %3 to i32
%5 = shl i32 %2, 16
%6 = ashr exact i32 %5, 16
%7 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([7 x i8], [7 x i8]* @.str, i64 0, i64 0), i32 %6, i32 %4)
ret void
}

What happens here is what is called structure coercion. To optimize some function calls, some ABIs pass structure values through registers. For instance, an llvm::ArrayRef object, which is basically a structure with a pointer and a size (see, is passed through registers (though this optimization isn't guaranteed by any standard).

It is important to understand that ABIs are complex things to implement and we don't want to redo this whole work by ourselves, particularly when LLVM/Clang already know how.

Finding the right type abstraction

We want to list every type that is used in a parsed C file. To achieve that goal, various pieces of information are needed, among which:
  • the function types, and their calling convention
  • for structures: field offsets and names
  • for union/enums: field names (and values)
On one hand, we have seen in the previous section that the LLVM IR is too Low Level (as in Low Level Virtual Machine) for this. On the other hand, Clang's AST is too high level. Indeed, let's print the Clang AST of the code above:
|-RecordDecl 0x5561d7f9fc20 <a.c:1:9, line:4:1> line:1:9 struct definition
| |-FieldDecl 0x5561d7ff4750 <line:2:3, col:9> col:9 referenced a 'short'
| `-FieldDecl 0x5561d7ff47b0 <line:3:3, col:7> col:7 referenced b 'int'
We can see that there is no information about the structure layout (padding, ...). There's also no information about the size of standard C types. As all of this depends on the backend used, it is not surprising that this information is not present in the AST.

The right abstraction appears to be the LLVM metadata produced by Clang to emit DWARF or PDB structures. It provides structure field offsets and names, descriptions of the various basic types, and function calling conventions. Exactly what we need! For the example above, this gives (at the LLVM IR level, with some inline comments):

target triple = "x86_64-pc-linux-gnu"
%struct.A = type { i16, i32 }
@.str = private unnamed_addr constant [7 x i8] c"%d %d\0A\00", align 1

define void @print_A(i64) local_unnamed_addr !dbg !7 {
%2 = trunc i64 %0 to i32
%3 = lshr i64 %0, 32
%4 = trunc i64 %3 to i32
tail call void @llvm.dbg.value(metadata i32 %4, i64 0, metadata !18, metadata !19), !dbg !20
tail call void @llvm.dbg.declare(metadata %struct.A* undef, metadata !18, metadata !21), !dbg !20
%5 = shl i32 %2, 16, !dbg !22
%6 = ashr exact i32 %5, 16, !dbg !22
%7 = tail call i32 (i8*, ...) @printf(i8* getelementptr inbounds ([...] @.str, i64 0, i64 0), i32 %6, i32 %4), !dbg !23
ret void, !dbg !24
}

; DISubprogram defines (in our case) a C function, with its full type
!7 = distinct !DISubprogram(name: "print_A", scope: !1, file: !1, line: 6, type: !8, [...], variables: !17)
; This defines the type of our subprogram
!8 = !DISubroutineType(types: !9)
; We have the "original" types used for print_A, with the first one being the
; return type (null => void), and the other ones the arguments (in !10)
!9 = !{null, !10}
!10 = !DIDerivedType(tag: DW_TAG_typedef, name: "A", file: !1, line: 4, baseType: !11)
; This defines our structure, with its various fields
!11 = distinct !DICompositeType(tag: DW_TAG_structure_type, file: !1, line: 1, size: 64, elements: !12)
!12 = !{!13, !15}
; We have here the size and name of the member "a". Offset is 0 (default value)
!13 = !DIDerivedType(tag: DW_TAG_member, name: "a", scope: !11, file: !1, line: 2, baseType: !14, size: 16)
!14 = !DIBasicType(name: "short", size: 16, encoding: DW_ATE_signed)
; We have here the size, offset and name of the member "b"
!15 = !DIDerivedType(tag: DW_TAG_member, name: "b", scope: !11, file: !1, line: 3, baseType: !16, size: 32, offset: 32)
!16 = !DIBasicType(name: "int", size: 32, encoding: DW_ATE_signed)


DragonFFI first parses the debug information included by Clang in the LLVM IR it produces, and creates a custom type system to represent the various function types, structures, enumerations and typedefs of the parsed C file. This custom type system has been created for two reasons:
  • create a type system that gathers only the necessary information from the metadata tree (we don't need all of the debug information)
  • make the public headers of the DragonFFI library free from any LLVM headers (so that the LLVM headers aren't needed to use the library)
Once we've got this type system, the DragonFFI API for calling C functions is this one:

DFFI FFI([...]);
// This will declare puts as a function that returns int and takes a const
// char* as an argument. We could also create this function type by hand.
CompilationUnit CU = FFI.cdef("int puts(const char* s);", [...]);
NativeFunc F = CU.getFunction("puts");
const char* s = "hello world!";
void* Args[] = {&s};
int Ret;
F.call(&Ret, Args);

So, basically, a pointer to the returned data and an array of void* is given to DragonFFI. Each void* value is a pointer to the data that must be passed to the underlying function. So the last missing piece of the puzzle is the code that takes this array of void* (and pointer to the returned data) and calls puts, so a function like this:

void call_puts(void* Ret, void** Args) {
  *((int*)Ret) = puts(*((const char**) Args[0]));
}

We call these "function wrappers" (how original! :)). One advantage of this signature is that it is a generic signature, which can be used in the implementation of DragonFFI. Supposing we manage to compile at run-time this function, we can then call it trivially as in the following:

typedef void(*puts_call_ty)(void*, void**);
puts_call_ty Wrapper = /* pointer to the compiled wrapper function */;
Wrapper(Ret, Args);

Generating and compiling a function like this is something Clang/LLVM is able to do. For the record, this is also what libffi mainly does, by generating the necessary assembly by hand. We optimize the number of these wrappers in DragonFFI, by generating them for each different function type. So, the actual wrapper that would be generated for puts is actually this one:

void __dffi_wrapper_0(int32_t( __attribute__((cdecl)) *__FPtr)(char *), int32_t *__Ret, void** __Args) {
  *__Ret = (__FPtr)(*((char **)__Args[0]));
}
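
To make the link with the generic signature above explicit, here is a minimal sketch (illustrative only, not code from the library) of how such a compiled wrapper would be invoked for the puts example; the cast and the variable names are assumptions made for illustration:

#include <stdint.h>
#include <stdio.h>

// Illustrative sketch: invoking the JIT-compiled __dffi_wrapper_0 for
// puts("hello world!"). The wrapper pointer itself would come from the JIT.
typedef void (*wrapper_ty)(int32_t (*)(char *), int32_t *, void **);

void example_call(wrapper_ty W) {
  const char* s = "hello world!";
  void* Args[] = { (void*)&s };
  int32_t Ret;
  W((int32_t (*)(char *)) &puts, &Ret, Args);  // Ret now holds puts' return value
}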

For now, all the necessary wrappers are generated when the DFFI::cdef or DFFI::compile APIs are used. The only exception, where they are generated on-the-fly (when calling CompilationUnit::getFunction), is for variadic arguments. One possible evolution is to let the user choose whether they want this to happen on-the-fly or not for every declared function.

Issues with Clang

There is one major issue with Clang that we need to hack around in order to have the DFFI::cdef functionality: unused declarations aren't emitted by Clang (even when using -g -femit-all-decls).

Here is an example, produced from the following C code:

typedef struct {
short a;
int b;
} A;

void print_A(A s);

$ clang -S -emit-llvm -g -femit-all-decls -o - a.c |grep print_A |wc -l
0

The produced LLVM IR does not contain a function named print_A! The hack we temporarily use parses the clang AST and generates temporary functions that look like this:

void __dffi_force_decl_print_A(A s) { }

This forces LLVM to generate an empty function named __dffi_force_decl_print_A with the right arguments (and the associated debug information).

This is why DragonFFI proposes another API, DFFI::compile. This API does not force declared-only functions to be present in the LLVM IR, and will only expose functions that end up naturally in the LLVM IR after optimizations.

If someone has a better idea to handle this, please let us know!

Python bindings

Python bindings were the first ones to have been written, simply because Python is the "high level" language I know best.  Python provides its own set of challenges, but we will save that for another blog post.  These Python bindings are built using pybind11, and provide their own set of C types. Lots of examples of what can be achieved can be found here and here.

Project status

DragonFFI currently supports Linux, OSX and Windows OSes, running on Intel 32- and 64-bit CPUs. Travis is used for continuous integration, and every change is validated on all these platforms before being integrated.

The project will go from alpha to beta quality when the 0.3 version is out (which will bring Travis and Appveyor CI integration and support for variadic functions). The project will be considered stable once these things happen:
  • user and developer documentations exist!
  • another foreign language is supported (JS? Ruby?)
  • the DragonFFI main library API is considered stable
  • a non-negligible number of tests has been added
  • all the things in the TODO file have been done :)

Various ideas for the future

Here are various interesting ideas we have for the future. We don't know yet when they will be implemented, but we think some of them could be quite nice to have.

Parse embedded DWARF information

As the entry point of DragonFFI is DWARF debug information, we could imagine parsing this information directly from shared libraries that embed it (or provide it in a separate file). The main advantage is that all the information necessary for doing the FFI right is in one file, and the header files are no longer required. The main drawback is that debug information tends to take a lot of space (for instance, the DWARF information takes 1.8Mb for libarchive 3.3.2 compiled in release mode, for an original binary code size of 735Kb), and this brings us to the next idea.

Lightweight debug info?

The DWARF standard allows defining lots of information, and we don't need all of it in our case. We could imagine embedding only the necessary DWARF objects, that is, just the types needed to call the exported functions of a shared library. One experiment of this is available here: it is an LLVM optimisation pass, inserted at the end of the optimisation pipeline, that parses the metadata and keeps only what is relevant for DragonFFI. More precisely, it only keeps the DWARF metadata related to exported and visible functions, together with the associated types. It also keeps the debug information of global variables, even though these aren't supported yet in DragonFFI. It also does some unconventional things, like replacing every file and directory name with "_" to save space. "Fun" fact: to do this, it borrows some code from the LLVM bitcode "obfuscator" included in recent Apple clang versions, which is used to anonymize some information from the LLVM bitcode that is sent with tvOS/iOS applications (see for more information).
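
As a very rough illustration of the idea (a minimal sketch assuming the standard LLVM C++ API, not the actual pass), the core of such a pass could simply detach the debug info attached to functions that are not part of the exported, visible interface; the real pass also has to prune the metadata nodes that become unreferenced as a result:

#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
using namespace llvm;

// Sketch: keep DISubprogram debug info only for functions an FFI user could
// actually call; everything internal loses its attached debug metadata.
static void stripHiddenDebugInfo(Module &M) {
  for (Function &F : M) {
    if (F.hasLocalLinkage() || F.hasHiddenVisibility())
      F.setSubprogram(nullptr);
  }
}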

Enough talking, let's see some preliminary results (on Linux x64):
  • on libarchive 3.3.2, DWARF goes from 1.8Mb to 536Kb, for an original binary code size of 735Kb
  • on zlib 1.2.11, DWARF goes from 162Kb to 61Kb, for an original binary code size of 99Kb
The instructions to reproduce this are available in the README of the LLVM pass repository.
We can conclude that defining this "light" DWARF format could be a nice idea. One other thing that could be done is defining a new binary format, which would thus be more space-efficient, but there are drawbacks to going this way:
  • debug information is well supported on every platform nowadays: tools exist to parse it, embed it into and extract it from binaries, and so on
  • we already have DWARF and PDB:
Nevertheless, it could still be a nice experiment to try and do this, figure out the space saved, and see whether it is worth it!

As a final note, these two ideas would also benefit libffi, as we could process these formats and create libffi types!

JIT code from the final language (like Python) to native function code

One advantage of embedding a full working C compiler is that we could JIT the code from the final language glue to the final C function call, and thus limit the performance impact of this glue code.
Indeed, when a call is issued from Python, the following things happen:
  • arguments are converted from Python to C according to the function type
  • the function pointer and wrapper are gathered from DragonFFI
  • the final call is made
All of this basically involves a loop over the types of the arguments of the called function, containing a big switch case. This loop generates the array of void* values that represents the C arguments, which is then passed to the wrapper. We could JIT a specialised version of this loop for the given function type, inline the already-compiled wrapper, apply classical optimisations on top of the resulting IR, and end up with straightforward conversion code specialised for that function type, going directly from Python to C.
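
To make this glue concrete, here is a hypothetical sketch of such a generic conversion loop; none of these names (ArgKind, Storage, genericCall) come from DragonFFI, they just illustrate the shape of the code that a JIT-specialised version would replace:

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the generic "Python arguments to C arguments" loop.
enum class ArgKind { Int, Double, Pointer };
union Storage { int64_t i; double d; void* p; };
typedef void (*generic_wrapper_ty)(void* Ret, void** Args);

void genericCall(const std::vector<ArgKind>& argKinds,  // from the function type
                 void* const* pyArgs,                    // opaque Python objects
                 generic_wrapper_ty wrapper, void* ret) {
  std::vector<Storage> slots(argKinds.size());  // owns the converted C values
  std::vector<void*>   args(argKinds.size());
  for (size_t i = 0; i < argKinds.size(); ++i) {
    switch (argKinds[i]) {                      // the "big switch case"
    case ArgKind::Int:     /* slots[i].i = convert pyArgs[i] */ args[i] = &slots[i].i; break;
    case ArgKind::Double:  /* slots[i].d = convert pyArgs[i] */ args[i] = &slots[i].d; break;
    case ArgKind::Pointer: /* slots[i].p = convert pyArgs[i] */ args[i] = &slots[i].p; break;
    }
  }
  wrapper(ret, args.data());                    // generic wrapper call, as above
}

A version of this loop specialised at JIT time for one concrete function type boils down to a fixed sequence of conversions followed by a direct call, with the loop and the switch gone.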

One idea we are exploring is combining easy::jit (hello fellow Quarkslab teammates!) with LLPE to achieve this goal.

Reducing DragonFFI library size

The DragonFFI shared library embeds statically compiled versions of LLVM and Clang. The size of the final shared library is about 55Mb (stripped, under Linux x64). This is really, really huge, compared for instance to the 39Kb of libffi (also stripped, Linux x64)!

Here are some ideas to try and reduce this footprint:
  • compile DragonFFI, Clang and LLVM using (Thin) LTO, with visibility hidden for both Clang and LLVM. This could have the effect of removing code from Clang/LLVM that isn't used by DragonFFI.
  • make DragonFFI more modular:
    - one core module that only has the parts of CodeGen that deal with ABIs. If the types and function prototypes are defined "by hand" (without DFFI::cdef), that's more or less the only part that is needed (together with LLVM, obviously)
    - one optional module that includes the full Clang compiler (to provide the DFFI::cdef and DFFI::compile APIs)
Even with all of this, it seems to be really hard to match the 39Kb of libffi, even if we remove the cdef/compile API from DragonFFI. As always, pick the right tool for your needs :)


Writing the first working version of DragonFFI has been a fun experiment that made me discover new parts of Clang/LLVM :) The current goal is to try and achieve a first stable version (see above), and to experiment with the various ideas cited above.

It's a really long road, so feel free to come on #dragonffi on FreeNode for any questions/suggestions you might have, (inclusive) or if you want to contribute!


Thanks to Serge «sans paille» Guelton for the discussions around the Python bindings, and for helping me find the name of the project :) (one of the most difficult tasks). Thanks also to him, Fernand Lone-Sang and Kévin Szkudlapski for their review of this blog post!

Getting the most from the AI Business Summit [All - O'Reilly Media]

The AI Conference in NY will feature tutorials, conference sessions, and executive briefings to help business leaders understand and plan for AI technologies.

Artificial intelligence has certainly been the most heavily hyped topic of the past year. There’s a lot of smoke and confusion—and, incidentally, a lot of fire. Businesses are using AI to transform themselves: they’re using it to help their employees do more meaningful work, they’re using it to satisfy customers, and they’re using it to find opportunities they didn’t know existed.

But with all the frenzy, there’s also been a lot of thrash. Many execs want to “develop an AI strategy,” but they don’t know what that strategy is, or what it’s for. You won’t get anywhere in AI without a concrete plan that relates to your current business environment and needs. AI won’t magically solve your problems.

Nor can we, but we can give you the tools you need to think about solutions. At the AI Business Summit (a part of O’Reilly’s Artificial Intelligence Conference in New York), we have three days of tutorials, conference sessions, and executive briefings designed to help business leaders understand and plan for these new technologies. These sessions are led by executives who have real-world experience with artificial intelligence, in industries as diverse as manufacturing, telecommunications, and ecommerce. They will be sharing what worked for them as well as what didn’t. Learn from their success—and from their failures.

Getting ahead of the curve

If you think you’re behind the AI adoption curve, you’ve got a lot of company. But you’re also not where you want to be. These sessions teach you how to get ahead of the curve: how to get on track integrating AI into your business. You’ll need to understand how to make your business work smarter: how to integrate AI into all aspects of your business, how to build an organization that is continually learning how to improve, and how to put AI to work today, not as part of a rose-colored future.

Learning from experience

Like any other discipline, AI is best learned through experience. That applies to business leadership as well as to developers. How do you get experience when you’re just starting out? From others who’ve taken the path before you. In these sessions, executives and business leaders share their experiences growing into AI: what worked for them, what didn’t, and how it transformed their business.

Design for AI

In the past two years, “design thinking” has become an important buzzword. How do you integrate design into your products from the beginning? Designing for AI brings its own challenges: how do you build systems that are partners for humans, rather than overlords? How do you build systems that can effectively collaborate with your staff? And how do you help AI systems to expose their decision-making process, which might be more valuable than a simple answer? We’ll be looking at questions like these in this group of sessions.

Joining the conversation

Devices like Amazon’s Echo have made it clear that the future is voice-driven. Humans have never adapted well to applications that force them to sit at a keyboard; we want to be able to talk, and have the system understand what we want to do. But how do businesses use chatbots to become better? These sessions will teach you how to think about conversational businesses.

Building a better world

Despite fears about super-intelligences that are hostile to humans and fill the world with paper clips, the future will be what we make it, not what AI makes it. So, what sorts of decisions do we, as business leaders, need to make to ensure we have the future we want? There is a lot to consider: how we use data appropriately, how we develop our legal systems, and how we rethink our workplaces so that AIs are our assistants, not our overlords.

Continue reading Getting the most from the AI Business Summit.

[$] JupyterLab: ready for users []

In the recent article about Jupyter and its notebooks, we mentioned that a new interface, called JupyterLab, existed in what its developers described as an "early preview" stage. About two weeks after that article appeared, Project Jupyter made a significant announcement: JupyterLab is "ready for users". Users will find a more integrated environment for scientific computation that is also more easily extended. JupyterLab takes the Jupyter Notebook to a level of functionality that will propel it well into the next decade—and beyond.

Security updates for Tuesday []

Security updates have been issued by Debian (samba), Fedora (tor), openSUSE (glibc, mysql-connector-java, and shadow), Oracle (dhcp), Red Hat (bind, chromium-browser, and dhcp), Scientific Linux (dhcp), and SUSE (java-1_7_0-openjdk, java-1_8_0-ibm, and java-1_8_0-openjdk).


Stop cherry-picking, start merging, Part 2: The merge conflict that never happened (but should have) [The Old New Thing]

Last time, we saw how editing the code affected by a cherry-pick creates a potential merge conflict that doesn't become realized until the original commit and its cherry-picked doppelgänger meet in a merge somewhere, which could be far away from the branches that contained the original commit and its cherry-pick.

But you know what's worse than a merge conflict?

No merge conflict.

Let's set up the same situation as last time:

apple   apple       berry
A ← M1 ← ← ← M2   master
  ↖       ⋰
    F1 ← F2       feature
    apple   berry

Suppose this feature branch has been around for a while, merging its changes back into the master branch when it reaches a stability milestone. Our diagram begins at the point just after the most recent merge back to the master branch, where the feature branch has started its work on the next milestone's worth of features.

Let's suppose that the line that contains the word apple is in a configuration file that controls the feature. Both the master branch and feature branch make commits (M1 and F1, respectively) which are unrelated to the configuration file.

Suppose you now discover a serious problem in the feature that is causing it to go haywire. To stop the immediate problem, you make a commit F2 to the feature branch which sets the configuration file to berry, which has the effect of shutting off the feature.

(In real life, the change would be more like flipping a feature-control setting from enabled to disabled, but I'm sticking with apple and berry so that it lines up better with yesterday's examples.)

Okay, you disable the feature in the feature branch, verify that it doesn't have any unexpected side effects, and cherry-pick the fix into the master branch. Phew, this stops the bleeding and buys you time to figure out what went wrong and come up with a fix.

(If your workflow is to apply the fix to the master branch and then cherry-pick it into the feature branch, then great, do it that way. The story is the same.)

Work continues in the master branch while you investigate the problem. Later, you come up with the real fix in the feature branch, which involves re-enabling the feature (by setting the line to apple) and fixing the root cause in some other place. The commit graph now looks like this:

apple   apple       berry   berry
A ← M1 ← ← ← M2 ← M3   master
  ↖       ⋰
    F1 ← F2 ← ← ← F3   feature
    apple   berry       apple

In the master branch, an additional unrelated commit M3 was made on top of M2. In the feature branch, an additional commit F3 was made on top of F2, and F3 changes berry back to apple, as well as fixing the root cause of the issue.

Okay, now you want to merge the feature branch into the master branch so that the temporary fix can be replaced by the real fix. But when you do the merge, this happens:

apple   apple       berry   berry   berry
A ← M1 ← ← ← M2 ← M3 ← M4   master
  ↖       ⋰       ↙ ⚠
    F1 ← F2 ← ← ← F3       feature
    apple   berry       apple

The master branch merged from the feature branch, producing commit M4, but in commit M4, the line still says berry! The temporary fix is still in place in the master branch. Actually, it's worse than that. The berry part of the temporary fix is in place in the master branch, but so too is the permanent fix in the other part of commit F3! It's possible that these two partial fixes don't interact well with each other, in which case you're in the even worse position that the feature is broken in the master branch but works in your feature branch.

Today, we'll investigate what happened. Next time, we'll investigate how to prevent this from happening in the future.

Let's go back to the state of the repo before we tried to merge the feature branch into the master branch:

apple   apple       berry   berry
A ← M1 ← ← ← M2 ← M3   master
  ↖       ⋰
    F1 ← F2 ← ← ← F3   feature
    apple   berry       apple

Now we perform the merge. Git looks for a merge base, which is commit A, the most recent common ancestor between the two branches. Git then performs a three-way merge using A as the base, M3 as HEAD, and F3 as the inbound change. All that matters now is the delta between the base and the two terminal commits, so let's remove the irrelevant commits from the diagram.

apple   berry
A ← M3   master
  ↖
    F3   feature
    apple

In the simplified diagram, we still have our common merge base at commit A (where we started with apple) but all we see is commit M3 in the master branch (where we have berry) and commit F3 in the feature branch (where we have apple).

Comparing the base to the head of the master branch, we see that apple changed to berry. Comparing the base to the head of the feature branch, we see that apple didn't change at all. Since the line did not change in the feature branch, it means that the merge from the feature branch will not change the line either. The result is that the line remains unchanged by the merge, so it remains at its current value in the master branch of berry.
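
To see why git stays silent here, consider a toy model of the per-line three-way merge decision (a sketch only; git's real merge works on hunks of a diff rather than single lines):

#include <optional>
#include <string>

// Toy three-way merge of a single line: if only one side changed it relative
// to the merge base, take that side; if both made the same change, coalesce;
// if they changed it differently, report a conflict (nullopt).
std::optional<std::string> mergeLine(const std::string& base,
                                     const std::string& ours,
                                     const std::string& theirs) {
  if (ours == theirs)  return ours;    // identical change (or both unchanged)
  if (ours == base)    return theirs;  // only their side changed the line
  if (theirs == base)  return ours;    // only our side changed the line
  return std::nullopt;                 // both sides changed it differently
}

// The scenario above: base A = "apple", master M3 = "berry", feature F3 = "apple".
// mergeLine("apple", "berry", "apple") returns "berry": no conflict, and the
// temporary fix silently survives the merge.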

It gets worse: If you subsequently merge from the master branch into the feature branch, the incorrect line propagates into the feature branch.

apple   apple       berry   berry   berry
A ← M1 ← ← ← M2 ← M3 ← M4       master
  ↖       ⋰       ↙   ↖ ⚠⚠
    F1 ← F2 ← ← ← F3 ← ← ← F4   feature
    apple   berry       apple       berry

For the merge from the master branch to the feature branch, the common merge base is commit F3, which is also the head of the feature branch. In commit F3, the line is apple. In the head of the master branch, it is berry, and that change propagates to the feature branch. As a result, in the new commit F4 in the feature branch, the line is now berry. (I chose to use a non-fast-forward merge, but you would see the same thing if it were a fast-forward merge.)

Most people think of cherry-picks as "anticipatory partial merges", where you want to merge part of a source branch into your destination branch. The expectation is that if you later decide to merge the rest of the source branch into the destination branch, it will merge in only the new parts.

And if you are careful not to touch the lines affected by the cherry-pick until the two sides of the cherry-pick finally merge, that's what happens, because the merge will see that both sides modified the file in the same way, and the two commits are coalesced.

But if you make additional changes to the affected line in either of the branches, then instead of coalescing, the two changes are added together. And if your additional changes to the affected line have the effect of canceling out the cherry-picked change, then you don't even get a merge conflict to inform you that something weird happened. (Internally on our team, we call this the ABA problem because the line started with A, changed to B, the B got cherry-picked away, and then the line changed back to A prior to the merge back to the master branch.)

The master branch applied a change, and the feature branch applied the change, and the feature branch reverted the change. Mathematically, you performed two changes and one revert, so the net effect is still a +1 in favor of the change.

Okay, so the problem is that we wanted to do a partial merge from the feature branch back into the master branch. Too bad there's no such thing as a partial merge.

Or is there?

Next time, we'll show how to perform a partial merge.

Bonus chatter: Normally, merging twice produces the same result as merging once, just with more merge conflicts (because you have to resolve the conflict twice, once at each merge). But in this scenario, we get different results, neither of which raise merge conflicts. If we had performed two merges from the feature branch into the master branch, first by merging commit F2, then again by merging commit F3, then we would have had two clean merges, but the result would have been different:

apple   apple       berry   berry   berry   apple
A ← M1 ← ← ← M2 ← M3 ← M4.1 ← M4.2   master
  ↖       ⋰ ← ← ← ↲   ↙
    F1 ← F2 ← ← ← F3 ← ←   feature
    apple   berry       apple

This is troubling because it means that changing your policy on how often you merge can result in different final results, without any warnings from git.

More bonus chatter: Note that the "revert" need not be an actual revert. It might merely happen to resemble a revert. For example, suppose you start with

char* predefined_items[4] = {
 /* ...four items, one of which is "bed"... */

You decide that you need a fifth item, so you add the fifth item and bump the array size:

char* predefined_items[5] = {
 /* ...the original four items, plus the new one... */
 "end table",

Another branch cherry-picks this because it needs the end table. Meanwhile, you realize that you don't need the bed any more, so you remove it and drop the array size to four.

char* predefined_items[4] = {
 /* ...the three remaining items ("bed" removed), plus... */
 "end table",

When these two changes merge, the result will be

char* predefined_items[5] = {
 /* ...declared size five, but only four entries remain... */
 "end table",

Notice that the length of the predefined_items array is five, even though there are only four entries in it.


Petter Reinholdtsen: First rough draft Norwegian and Spanish edition of the book Made with Creative Commons [Planet Debian]

I am working on publishing yet another book related to Creative Commons. This time it is a book filled with interviews and histories from those around the globe making a living using Creative Commons.

Yesterday, after many months of hard work by several volunteer translators, the first draft of a Norwegian Bokmål edition of the book Made with Creative Commons from 2017 was complete. The Spanish translation is also complete, while the Dutch, Polish, German and Ukrainian editions need a lot of work. Get in touch if you want to help make those happen, or would like to translate into your mother tongue.

The whole book project started when Gunnar Wolf announced that he was going to make a Spanish edition of the book. I noticed, and offered some input on how to make a book, based on my experience with translating the Free Culture and The Debian Administrator's Handbook books to Norwegian Bokmål. To make a long story short, we ended up working on a Bokmål edition, and now the first rough translation is complete, thanks to the hard work of Ole-Erik Yrvin, Ingrid Yrvin, Allan Nordhøy and myself. The first proof reading is almost done, and only the second and third proof readings remain. We will also need to translate the 14 figures and create a book cover. Once it is done we will publish the book on paper, as well as in PDF, ePub and possibly Mobi formats.

The book itself originates as a manuscript on Google Docs, is downloaded as ODT from there and converted to Markdown using pandoc. The Markdown is modified by a script before it is converted to DocBook using pandoc. The DocBook is modified again using a script before it is used to create a Gettext POT file for translators. The translated PO file is then combined with the earlier mentioned DocBook file to create a translated DocBook file, which finally is given to dblatex to create the final PDF. The end result is a set of editions of the manuscript, one English and one for each of the translations.

The translation is conducted using the Weblate web based translation system. Please have a look there and get in touch if you would like to help out with proof reading. :)

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.


E-Mailing Private HTTPS Keys [Schneier on Security]

I don't know what to make of this story:

The email was sent on Tuesday by the CEO of Trustico, a UK-based reseller of TLS certificates issued by the browser-trusted certificate authorities Comodo and, until recently, Symantec. It was sent to Jeremy Rowley, an executive vice president at DigiCert, a certificate authority that acquired Symantec's certificate issuance business after Symantec was caught flouting binding industry rules, prompting Google to distrust Symantec certificates in its Chrome browser. In communications earlier this month, Trustico notified DigiCert that 50,000 Symantec-issued certificates Trustico had resold should be mass revoked because of security concerns.

When Rowley asked for proof the certificates were compromised, the Trustico CEO emailed the private keys of 23,000 certificates, according to an account posted to a Mozilla security policy forum. The report produced a collective gasp among many security practitioners who said it demonstrated a shockingly cavalier treatment of the digital certificates that form one of the most basic foundations of website security.

Generally speaking, private keys for TLS certificates should never be archived by resellers, and, even in the rare cases where such storage is permissible, they should be tightly safeguarded. A CEO being able to attach the keys for 23,000 certificates to an email raises troubling concerns that those types of best practices weren't followed.

I am croggled by the multiple layers of insecurity here.

BoingBoing post.


CodeSOD: And Now You Have Two Problems [The Daily WTF]

We all know the old saying: “Some people, when confronted with a problem, think ‘I know, I’ll use regular expressions.’ Now they have two problems.” The quote has a long and storied history, but...


Four short links: 13 March 2018 [All - O'Reilly Media]

El Paquete, Ingenious Malware, Region Evacuation, and Deck Template

  1. Inside El Paquete, Cuba’s Social Network -- In a country where the government keeps tight control over the media, citizens are able to access an extraordinary amount of information from around the world in the form of a terabyte-sized weekly file dump, Bring Your Own Hard Drive. Nation-scale sneakernet. (via Andy Baio)
  2. The Slingshot APT FAQ --malware that disabled disk defragmentation because it stores its own encrypted filesystem in the unused sectors of yours.
  3. Netflix Region Evacuation -- This article describes how we re-imagined region failover from what used to take close to an hour to less than 10 minutes, all while remaining cost neutral.
  4. YC Seed Deck Template -- The deck below is a template for how I think companies should build seed decks. While the main target for this template is a company raising its seed round, the deck is not all that different from best practices for a Series A deck—which we’ll release next.

Continue reading Four short links: 13 March 2018.


300 seconds [Seth Godin's Blog on marketing, tribes and respect]

Not stalling.


How many decisions or commitments would end up more positively if you had a five-minute snooze button on hand?

Esprit d'escalier* isn't as hard to live with as its opposite. The hasty one-liner, the rushed reaction, the action we end up regretting--all of them can be eliminated with judicious use of the snooze button. It's a shame there isn't one built in to our computers when we're communicating online...

When in doubt, go for a walk around the block.


*The feeling we get when we think of a witty response on the way home instead of at the dinner party, when it would have been the perfect put-down.


