## Monday, 29 November

### 11:49

"Don't use magic numbers" is a good rule for programming. But like any rule, you shouldn't blindly apply it. We know what happens when people do, however: we get constants that might...

### 05:14

The post 1561 appeared first on Looking For Group.

### 04:28

Do you ever miss hanging out with people before smartphones?

***

Did I mention I've got a small run of Fuck It sweatpants back in stock? Because I do. They have pockets.

## Sunday, 28 November

### 22:21

If you could make an open Facebook without using carbon-spewing blockchain tech, why would you burn all that carbon? Do you care about the survival of our species?

Yesterday I wrote a bit about BingeWorthy, the app I wrote a couple of years ago that helps you find new shows to binge based on the ones you and your friends like. We need more users for it to really achieve its potential.

I would be happy to do a deal with a tech company on this product. I think Twitter would be great, but there are lots of companies where it could find a good home. I just want to see millions of people using it, so I can meet all the people who like the same stuff I do. I think we'd take over the world, figuratively. 😄

A network of people based on their entertainment tastes would make sense for a service like Hulu, with lots of highly rated shows, that wants to grow. It's part of the philosophy that "people come back to places that send them away."

BTW -- here's the readout of people most like me in their ratings. I don't know @alisonjfields but we seem to like the same stuff. @jsavin is #1. I also like to see @nakedjen and @leolaporte high on the list.

### 18:28

New Drummer verb that generates Markdown from an outline or part of an outline.

### 16:14

So this last week encompassed the trip to Austin, my birthday, and Thanksgiving with my brother’s family in St. Louis.

I managed to get pretty good work done on all days save Thanksgiving itself.

I honestly don’t mind working on my birthday, or holidays. I love cartooning.

Anyway, work was accomplished six of the seven days this past week. Work continues to progress slower than I’d like, or could have imagined. But, baby steps.

Here’s much of what I did yesterday, on the car drive from St. Louis back to Madison (my wife drove):

Coloring the Sistine Pages is the last major task on THE TAO OF IGOR. There are still some minor ones to get knocked off, starting with making the corrections the proofreaders pointed out.

I fly to London Tuesday, for my sister’s 50th birthday. I’ll probably get a lot of things finished off there. I had hoped this would all be finished, and that this trip could be a bit more leisurely. But we’re getting there.

• John

### 05:35

Shaenon: It’s officially holiday shopping season, and that means it’s time for special sales. From now through December 31, all orders of Narbonic and/or Skin Horse print books from the Couscous Store come with the original art for one (1) daily strip. One strip per order, Shaenon’s choice. Happy holidays!

Channing: Ever get jealous of those dudes or ladies or enby frienbys who have original Shaenon art on their walls? This is the easiest way on earth to join their hallowed ranks. Will it make you rich and popular? Well, we aren’t saying that it won’t make you rich and popular.

## Saturday, 27 November

### 21:14

I wonder why the tech industry hasn't done something like BingeWorthy. I think it's because they create data mining tools for advertisers. This is data mining for users. The more users, the better the data -- a real incentive to get your friends on board.

BW should really be a product offered by Netflix, Metacritic, Twitter, Facebook, even Google. What show to watch is a question millions of people have, especially this time of year. This is what the platform should look like: share profiles with friends.

An interesting twist: it's not just about finding shows to binge, it's about finding friends with similar taste. I was surprised to find that Jake Savin, who I worked with a long time ago, and I have very similar taste. We like and dislike the same things.

Maybe they should do interviews with Covid doctors on CNN from the room where intubated patients are waiting to die.

## Friday, 26 November

### 22:35

Sometimes people deliberately mispronounce or misspell my name; I guess they’re trying to embarrass me? I have a name almost as bad as the Boy Named Sue. It’s hard to get under my skin that way. 😀

I tried watching Get Back, but after five minutes of holy shit those are the young Beatles acting like normal people it became fairly boring. I guess I like a plot? Did it get better?

Twitter Blue gives you a pretty lame undo command. I guess it’s impossible to do it for real with their server architecture.

One of the stars of The Great, the guy who plays emperor Peter, is a dead ringer for a young Elon Musk. Same mannerisms and look, but he’s funnier and of course a better actor. One of the tech companies or car companies should hire him to do ads.

### 20:42

There has been some brouhaha about the state of Common Lisp IDEs, and a few notable reactions to that, so I’m adding my two Euro cents to the conversation. What is a community? It’s a common mistake to refer to some people doing a certain thing as a “community”, and it’s easy to imagine ridiculous examples: the community of suburban lawn-mowing dwellers, the community of wearers of green jackets, the community of programmers-at-large, etc.

### 19:28

I added The Great as a favorite on my BingeWorthy profile. I'm well into Season 2. It's a pseudo-history dramedy in the spirit of Succession or The Death of Stalin. It's about a Russian empress who overthrows her husband the emperor. As they say, hilarity ensues. 😄

### 12:14

While Error'd and TDWTF do have an international following, and this week's offerings are truly global, we are unavoidably mired in American traditions. Tomorrow, we begin the celebration of...

## Thursday, 25 November

### 19:28

You can get anything you want at Alice's Restaurant. ❤️

I am a NYT subscriber, but can't access Wirecutter. It's weird, the site seems broken -- when I try to log in it just sends me back to the page that says I have run out of free articles. It could be they want more money for WC, if so, the answer is no. I don't read a lot of NYT articles as it is. I have a very bad feeling about them, and it gets worse every year. That took a lot of work on their part because I was raised on the NYT.

### 16:28

It seems that my article about the existence in the Lisp community of rather noisy people who seem to enjoy complaining rather than fixing things has attracted some interest. Some things in it were unclear, and some other things seem to have been misinterpreted: here are some corrections and clarifications.

First of all, some people pointed out, correctly, that LispWorks is expensive if you live in a low-income country. That’s true: I should have been clearer that I believe the phenomenon I am describing is exclusively a rich-world one. I may be incorrect, but I have never heard of anyone from a non-rich-world country doing this kind of destructive whining.

It may also have appeared that I am claiming that all Lisp people do this: I’m not. I think the number of people is very small, and that it has always been small. But they are very noisy and even a small number of noisy people can be very destructive.

Some people seem to have interpreted what I wrote as saying that the current situation was fine and that Emacs / SLIME / SLY was in fact the best possible answer. Given that my second sentence was

> [Better IDEs] would obviously be desirable.

this is a curious misreading. Just in case I need to make the point any more strongly: I don’t think that Emacs is some kind of be-all and end-all: better IDEs would be very good. But I also don’t think Emacs is this insurmountable barrier that people pretend it is, and I also very definitely think that some small number of people are claiming it is because they want to lose.

I should point out that this claim that it is not an insurmountable barrier comes from some experience: I have taught people Common Lisp, for money, and I’ve done so based on at least three environments:

• LispWorks;
• Something based around Emacs and a CL running under it;
• Genera.

None of those environments presented any significant barrier. I think that LW was probably the most liked but none of them got in the way or put people off.

In summary: I don’t think that the current situation is ideal, and if you read what I wrote as saying that you need to read more carefully. I do think that the current situation is not going to deter anyone seriously interested and is very far from the largest barrier to becoming good at Lisp. I do think that, if you want to do something to make the situation better then you should do it, not hang around on reddit complaining about how awful it is, but that there are a small number of noisy people who do exactly that because, for them, no situation would be ideal because what they want is to avoid being able to get useful work done. Those people, unsurprisingly, often become extremely upset when you confront them with this awkward truth about themselves. They are also extremely destructive influences on any discussion around Lisp. (Equivalents of these noisy people exist in other areas, of course.) That’s one of the reasons I no longer participate in the forums where these people tend to exist.

(Thanks to an ex-colleague for pointing out that I should perhaps post this.)

### 14:56

An example of a social custom from two points of view.

• It's wrong to bring sex into a work relationship.
• I think it's equally wrong to bring business into a sexual relationship.

It has happened, especially in Silicon Valley and New York, that a woman I was dating, or interested in dating, revealed that she was really there for the business. I want to understand up front. If it's business, fine -- I'll evaluate it one way; if it's sexual, another. It's a really awkward situation when someone I was attracted to wants to do business -- and wouldn't have gotten a meeting if it had clearly been about business.

Honestly I think the two sides are the same thing, but we view one as wrong, and don't have explicit rules for the second.

Another example.

• If you work for me, or we're equals in a business relationship, either way -- no one is accountable to the other for what they do with personal time. Your boss can't call you up on the weekend and expect you to tell them what you're doing. You don't have to tell them anything, unless it impacts your ability to do the work.
• I think it's equally wrong when a person who reports to me, or is an equal, reports to me on what they're doing in their personal time. It creates a conflict: am I expected to reciprocate? Because I won't. I draw a solid line between work and personal time. If I share personal things in a work context, they are no longer personal.

These concerns are spelled out more carefully in professions like medicine, academics, law. A therapist isn't allowed to date a patient. A lawyer can't represent both sides in a legal battle. When the lines cross, there's trouble ahead.

### 14:14

I just started Get Back, the first few minutes of the first episode, enough to know that it's going to be a heavy emotional experience for me. As a kid I had so much invested in the Beatles. Each period of their existence marked some big period or event in my life, things that I've mostly buried, but come right to the surface with (for example) Paul and George flopping their heads in the chorus to Twist and Shout. To see the Beatles, alive and young and at the peak of their creativity, in a way we couldn't see them at the time, that's like reading a letter from a long-dead relative. The people are gone, the experiences were seminal, and recoverable.

I'm watching The Great on Hulu, and it's really good. Didn't get the best reviews. I'm only on episode 4. It's about Catherine The Great of Russia. The story is pretty heavy, but believe it or not, it's a comedy!

Last year on Thanksgiving we were in the midst of the worst of the pandemic, and the best we could do is stay home and hope we don't get infected. This year it's very different -- thanks to the vaccine. I got my first dose of Moderna on January 20, the day we inaugurated the first post-insurrection president. What a relief, in two ways. The Orange Tyrant who tried to overthrow the US government was gone, and I was on my way to being protected from the virus he let run rampant through the country we all have been told is the greatest on earth, but did the worst job of protecting its citizens. I hope that was the low point, but who knows!

I also was thankful for my car. It's funny that I thought about that post just the other day driving my new Tesla around the neighborhood, marveling at its intelligence, muscle, and how much it is like the computers I use, the exercise bike I just bought, and the phone I carry everywhere with me. The Silicon Valley design ethos, which I am schooled in and in a small way helped develop, is eating the world, as venture capitalist Marc Andreessen said so well. The Subaru I drove last year is a fine machine, and in some ways it's more comfortable than the Tesla, and it'll probably do better if we get a blizzard in the mountains. But its software is a jumble of poorly designed components that don't work well with each other from a UI standpoint. In that area the Tesla is perfect. It's a system designed to be understood by a modern user. The pre-Tesla car manufacturers have a long way to go to catch up to Tesla.

I am thankful for the first users of Drummer. We have reached a critical mass where I can add features, and get feedback from users, and that gives us the ability to move forward. I think this happened because I decided to put my head down on Drummer and work until it was really ready. There still are problems with the software, it is far from perfect, and I'm thankful for the users' patience, but mostly I'm thankful that there are users to be patient.

At 66, health is no longer something I can take for granted. It takes work to keep going, and I've had my troubles this year, but basically I'm still here, and most of my body is working fine. I'm thankful for that.

### 13:21

It's a holiday in the US, so while we're gathering with friends and family, reminiscing about old times, let's look back on the far off year of 2004, with this classic WTF. Original --...

### 12:56

I maintain a web application written in Common Lisp, used by real world© clients© (incredible I know), and I finally got to finish two little additions:

• add pagination to the list of products
• cleanup the HTML I get from webscraping (so we finally fetch a book summary, how cool) (for those who pay for it, we can also use a third-party book database).

The HTML cleanup part is about how to use LQuery for the task. Its doc shows the remove function from the beginning, but I had difficulty finding out how to use it. Here’s how (see issue #11).

## Cleanup HTML with lquery

https://shinmera.github.io/lquery/

LQuery has remove, remove-attr, remove-class, remove-data. It seems pretty capable.

Let’s say I got some HTML and I parsed it with LQuery. There are two buttons I would like to remove (you know, the “read more” and “close” buttons that are inside the book summary):

```lisp
(lquery:$ *node* ".description" (serialize))
;; HTML content...
;; <button type=\"button\" class=\"description-btn js-descriptionOpen\"><span class=\"mr-005\">Lire la suite</span><i class=\"far fa-chevron-down\" aria-hidden=\"true\"></i></button>
;; <button type=\"button\" class=\"description-btn js-descriptionClose\"><span class=\"mr-005\">Fermer</span><i class=\"far fa-chevron-up\" aria-hidden=\"true\"></i></button></p>")
```

On GitHub, @shinmera tells us we can simply do:

```lisp
($ *node* ".description" (remove "button") (serialize))
```



Unfortunately, when I try this I still see the two buttons in the node and in the output. What worked for me is the following:

• First I check that I can access these HTML nodes with a CSS selector:

```lisp
(lquery:$ *node* ".description button" (serialize))
;; => output
```

• Now I use remove. This returns the removed elements on the REPL, but they are correctly removed from the node (a global var passed as parameter):

```lisp
(lquery:$ *node* ".description button" (remove) (serialize))
;; #("<button type=\"button\" class=\"description-btn js-descriptionOpen\"><span class=\"mr-005\">Lire la suite</span><i class=\"far fa-chevron-down\" aria-hidden=\"true\"></i></button>"
```



Now if I check the description field:

```lisp
(lquery:$ *node* ".description" (serialize))
;; ...
;; </p>")
```

I have no more buttons \o/

Now to pagination.

## Pagination

This is my 2c; hopefully this will help someone do the same thing quicker, and hopefully we’ll abstract this in a library...

On my web app I display a list of products (books). We have a search box with a select input in order to filter by shelf (category). If no shelf was chosen, we displayed only the 200 most recent books. No need for pagination, yet... There were only a few thousand books in total, so we could show a shelf entirely; it was a few hundred books per shelf at most. But the bookshops grow, and my app crashed once (thanks, Sentry and cl-sentry). Here’s how I added pagination. You can find the code here and the Djula template there.

The goal is to get this and, if possible, in a re-usable way.

I simply create a dict object with the required data:

• the current page number
• the page size
• the total number of elements
• the max number of buttons we want to display
• etc.

```lisp
(defun make-pagination (&key (page 1) (nb-elements 0) (page-size 200)
                             (max-nb-buttons 5))
  "From a current page number, a total number of elements and a page size,
return a dict with all of that, plus the total number of pages.

Example:

  (make-pagination :nb-elements 1001)
  ;; => (dict :PAGE 1 :NB-ELEMENTS 1001 :PAGE-SIZE 200 :NB-PAGES 6
  ;;          :TEXT-LABEL \"Page 1 / 6\")"
  (let* ((nb-pages (get-nb-pages nb-elements page-size))
         (max-nb-buttons (min nb-pages max-nb-buttons)))
    (serapeum:dict :page page
                   :nb-elements nb-elements
                   :page-size page-size
                   :nb-pages nb-pages
                   :max-nb-buttons max-nb-buttons
                   :text-label (format nil "Page ~a / ~a" page nb-pages))))

(defun get-nb-pages (length page-size)
  "Given a total number of elements and a page size, compute how many
pages fit in there (if there's a remainder, add 1 page)."
  (multiple-value-bind (nb-pages remainder)
      (floor length page-size)
    (if (plusp remainder)
        (1+ nb-pages)
        nb-pages)))

#+(or)
(assert (and (= 30 (get-nb-pages 6000 200))
             (= 31 (get-nb-pages 6003 200))
             (= 1 (get-nb-pages 1 200))))
```

You call it:

```lisp
(make-pagination :page page
                 :page-size *page-length*
                 :nb-elements (length results))
```

then pass the result to your template, which can {% include %} the template given above, which will create the buttons (we use Bulma CSS there).

When you click a button, the new page number is given as a GET parameter. You must catch it in your route definition, for example:

```lisp
(easy-routes:defroute search-route ("/search" :method :get) (q shelf page)
  ...)
```

Finally, I updated my web app (while it runs; it’s more fun, and why shut it down? I’ve been doing this for two years and so far all has gone well, though I try not to upgrade the Quicklisp dist, since that went badly once because of external, system-wide dependencies) (see this demo-web-live-reload).

That’s exactly the sort of thing that should be extracted into a library, so we can focus on our application, not on trivial things. I started that work, but I’ll spend more time on it next time I need it... call it “needs driven development”. Happy lisping.

### 07:07

The post 1560 appeared first on Looking For Group.

### 05:42

### 05:28

### 01:00

## Wednesday, 24 November

### 19:28

Grouping For Looks is a page-by-page retelling of the Looking For Group saga through the lens of a mirror universe where Cale is a goateed tyrant and Richard is a holy soul trying to set him on a good path. […]

The post GFL – Page 0073 appeared first on Looking For Group.

### 12:21

Lucio C inherited a large WordPress install, complete with the requisite pile of custom plugins to handle all the unique problems that the company had. Problems, of course, that weren't unique at...

### 05:28

## Tuesday, 23 November

### 19:35

This time I think I have it, for real. I swear. ;-) Here's the deal...

• All the action is in Old School.
• A new head-level attribute, flSinglespaceMarkdown.
• Default: false -- which means we generate two newlines per outline node, with one exception.
• If a node has an flSinglespaceMarkdown att set true, its subs are single-spaced.

The assumption is that most often people will write in the outliner one paragraph per headline, but there are exceptional cases where you need more control over the Markdown text we generate and need to do your own double spacing, so you tell Old School to just do one. Comment here.

### 17:21

For whatever reason I'm not very talkative this week. Not sure why. Often when this happens it's followed by a period of idearrhea. No apologies, I'm still here, feeling more pensive. Happy Thanksgiving! :-)

### 15:49

### 11:56

Mike's company likes to make sure their code is well documented. Every important field, enumeration, method, or class has a comment explaining what it is. You can see how much easier it makes...

### 05:14

### 01:35

GNU Parallel 20211122 ('Peng Shuai') [stable] has been released. It is available for download at: lbry://@GnuParallel:4

No new functionality was introduced so this is a good candidate for a stable release.

Quote of the month:

  GNU parallel 便利すぎ ("GNU Parallel is just too convenient")
  -- @butagannen@twitter 豚顔面

New in this release:

• Bug fixes and man page updates.

News about GNU Parallel:

Get the book: GNU Parallel 2018 http://www.lulu.com/shop/ole-tange/gnu-parallel-2018/paperback/product-23558902.html

GNU Parallel - For people who live life in the parallel lane.

If you like GNU Parallel record a video testimonial: Say who you are, what you use GNU Parallel for, how it helps you, and what you like most about it. Include a command that uses GNU Parallel if you feel like it.

## About GNU Parallel

GNU Parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU Parallel can then split the input and pipe it into commands in parallel.

If you use xargs and tee today you will find GNU Parallel very easy to use, as GNU Parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU Parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel. GNU Parallel can even replace nested loops.

GNU Parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU Parallel as input for other programs.

For example you can run this to convert all jpeg files into png and gif files and have a progress bar:

```shell
parallel --bar convert {1} {1.}.{2} ::: *.jpg ::: png gif
```

Or you can generate big, medium, and small thumbnails of all jpeg files in sub dirs:

```shell
find . -name '*.jpg' | parallel convert -geometry {2} {1} {1//}/thumb{2}_{1/} :::: - ::: 50 100 200
```

You can find more about GNU Parallel at: http://www.gnu.org/s/parallel/

You can install GNU Parallel in just 10 seconds with:

```shell
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
   fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep c82233e7da3166308632ac8c34f850c0
12345678 c82233e7 da316630 8632ac8c 34f850c0
$ md5sum install.sh | grep ae3d7aac5e15cf3dfc87046cfc5918d2
ae3d7aac 5e15cf3d fc87046c fc5918d2
$ sha512sum install.sh | grep dfc00d823137271a6d96225cea9e89f533ff6c81f
9c5198d5 31a3b755 b7910ece 3a42d206 c804694d fc00d823 137271a6 d96225ce
a9e89f53 3ff6c81f f52b298b ef9fb613 2d3f9ccd 0e2c7bd3 c35978b5 79acb5ca
$ bash install.sh
```

Watch the intro video on http://www.youtube.com/playlist?list=PL284C9FF2488BC6D1

Walk through the tutorial (man parallel_tutorial). Your command line will love you for it.

When using programs that use GNU Parallel to process data for publication please cite:

O. Tange (2018): GNU Parallel 2018, March 2018, https://doi.org/10.5281/zenodo.1146014.

If you like GNU Parallel:

• Give a demo at your local user group/team/colleagues
• Request or write a review for your favourite blog or magazine
• Request or build a package for your favourite distribution (if it is not already there)
• Invite me for your next conference

If you use programs that use GNU Parallel for research:

• Please cite GNU Parallel in your publications (use --citation)

If GNU Parallel saves you money:

GNU sql aims to give a simple, unified interface for accessing databases through all the different databases' command line clients. So far the focus has been on giving a common way to specify login information (protocol, username, password, hostname, and port number), size (database and table size), and running queries.

The database is addressed using a DBURL. If commands are left out you will get that database's interactive shell.

When using GNU SQL for a publication please cite:

O. Tange (2011): GNU SQL - A Command Line Tool for Accessing Different Databases Using DBURLs, ;login: The USENIX Magazine, April 2011:29-32.

GNU niceload slows down a program when the computer load average (or other system activity) is above a certain limit. When the limit is reached the program will be suspended for some time. If the limit is a soft limit the program will be allowed to run for short amounts of time before being suspended again. If the limit is a hard limit the program will only be allowed to run when the system is below the limit.

## Monday, 22 November

### 22:21

A script that generates the list of Drumkit verbs. For people who worked on outlines in Frontier, have a look. I found a way to work on outlines as JavaScript structs. It's very efficient and much easier to program.

### 18:35

It’s been a weekend.

Saturday morning I had to catch a 7 am flight, to get me to Austin, TX, for the celebration of the life of my dear friend and colleague Andrew Hackard. I nearly missed my connecting flight, but the God of Travel was on my side, and I made it to town with time to spare.

It was a beautiful and hugely passionate ceremony. Fortunately, I was staying with my friends the Cargills, so I was able to return to their place and collapse, emotionally and physically, once it was over. Another friend turned up, and we talked until the night was close enough to kiss the dawn.

I did not take my laptop with me, but had my iPad Pro along, so I was able to work during the flights, and while at my friends’ place.

One of the last things I needed to do was make Stell’s Sabine Wren costume consistent.  It’s not in many scenes, but it’s important to get right. I never liked how I drew it for the book, but I was happy with its look in the recent DORK TOWER Halloween strips. So I was able to recolor it, and now that’s done.

Here’s a screen-cap from my iPad Pro, when it was a work in progress. TAO OF IGOR version on the left – the two Web strip samples on the right.

All comments are now in from the proofreaders, and it’s now time to make the corrections on pages 1-160, and send out the next batch for them to look at.

• John

### 16:35

Someone asked about better Lisp IDEs on reddit. Such things would obviously be desirable. But the comments are entirely full of the usual sad endless droning from people who need there always to be something preventing them from doing what they pretend to want to do, and are happy to invent such barriers where none really exist. comp.lang.lisp lives on in spirit if not in fact.

More…

### 12:21

Linda found some C# code that generates random numbers. She actually found a lot of code which does that, because the same method was copy/pasted into a half dozen places. Each of those places was a...

### 06:49

The post 1559 appeared first on Looking For Group.

### 03:42

Thanksgiving isn't as much fun as it used to be.

## Sunday, 21 November

### 19:56

Here’s a thread reader integrated into Twitter. Next: eliminate the numbers, add links, simple styling, let me give it a title (optional) and we’re ready to rock. Think they’ll have that by say 2030?

### 19:14

Drummer, an outliner, now allows you to write blog posts in Markdown.

The question was -- how?

This is what we came up with:

• One newline per outline node.
• Indentation belongs to the author, generates nothing.

I think this works and is true to the philosophy of MD.

### 17:00

There are so many interesting things about the Tesla, and I'm just getting started learning about them. It's as if I didn't get a PC until the IBM PC came out. Or maybe later. Here's something I figured out the other night, driving on a mountain road with lots of turns and ups and downs, a road I'm quite familiar with -- the Tesla makes you a better driver. This is something my Subaru does too, you only have to hold the steering wheel loosely while navigating the turns and ups and downs. The road is well-marked and the car knows what to do. The Tesla even more so. It's like my iPhone which takes better pictures in 2021 than the 2011 model did. We like to say "no filters" but the truth is they're just really good filters that make you think you're the photographer, but the phone did the hard work. Same with the Tesla. I like driving it in the same way. It flatters me to think I'm that good a driver. I know the truth though -- I didn't become a better driver, I just got a car that makes me feel that I did. And who wouldn't love that.

### 14:42

Going to do some more cleanup work this morning on Markdown support in Old School.

In the meantime, the thread continues. If you're planning on using Markdown to write blog posts in Drummer, you might want to tune in, because things are being locked down today.

There are only two rules that govern how Old School generates source for the Markdown processor:

• One newline per outline node.
• Indentation belongs to the author, generates nothing.
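
For concreteness, here is a minimal sketch of those two rules in Common Lisp. It's my own illustration, not Old School's actual code, and the outline representation (a node as a list of its text followed by its subnodes) is invented for the example:

```lisp
;; Hypothetical outline representation, invented for this sketch:
;; a node is a list (text sub1 sub2 ...), e.g.
;; ("First paragraph" ("An indented note") ("Second paragraph")).
(defun outline-lines (node)
  "Collect one line of Markdown source per outline node, depth first.
Indentation belongs to the author and generates nothing: depth is ignored."
  (cons (first node)
        (loop for sub in (rest node)
              append (outline-lines sub))))

(defun outline->markdown-source (node)
  "Join the collected lines with one newline per node."
  (format nil "~{~a~^~%~}" (outline-lines node)))
```

Every node contributes exactly one line of source, and a node indented under another contributes the same line it would at top level, which is the whole point of the two rules.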

I sent a private note to John Gruber a few days ago, letting him know this discussion is going on. As far as I know we are setting prior art for outliners used to generate Markdown for publishing. If there is prior art, I would love to know about it now, asap.

This is different from the Obsidian model also used by LogSeq -- which uses Markdown as a file format for outlines.

Update: Here's a list of this morning's changes to Markdown support for publishing.

### 13:56

I love my new Tesla. I'm looking for excuses to drive places. I haven't felt this way about driving since I was a teenager with a new license.

Google's search engine really doesn't like this blog. I tried searching for Scott Love on my blog. It didn't find the instance where I wrote about him here earlier this month.

Yesterday I did my tenth Peloton ride. The exercise is making me stronger. I don't know why I didn't want to do the classes. I love my teacher, Emma Lovewell. She's super nerdy. When we're doing something particularly strenuous, I close my eyes, focus on my breathing, in through the nose, out through the mouth, and I listen to her say that this pain is worth it because it'll become strength when the workout is over. It's so true. I don't particularly care for the music she chooses, I don't know any of it, and it's not catchy. But she's a lot younger than I am, so I guess this is the kind of music the young folk listen to? I'd love it if there were a Tom Petty ride or a Grateful Dead ride. I can see riding to Refugee or US Blues.

### 05:35

Shaenon: There’s a week left to help Kickstart the Skin Horse 2022 calendar, and I thought I’d share some of the pages. This has been really fun to put together. Why didn’t I do a calendar every year?

Channing: The monthly wallpapers are always great, and a compendium of them in calendar form is a great idea. I can say that without ego because I didn’t have the idea.

### 01:49

Poll: If a person from NY uses the word "fuck" in a sentence that means..

Does anyone seriously think that Facebook-the-company wants to fuel hate in the world? Yet you see articles that say exactly that from supposedly credible news orgs.

## Saturday, 20 November

### 17:21

The Repubs get more obnoxious all the time. There doesn't seem to be any limit.

### 16:21

Although Verbose is one of the few logging libraries that work with threaded applications (see Comparison of Common Lisp Logging Libraries), I had some trouble getting it to work in my application: a Hunchentoot web application, built as a standalone executable, which handles each request in a separate thread. Getting Verbose to work in SLIME was trivial, but once I built the standalone, it kept crashing.

The Verbose documentation provides all the information needed to make this setup work, but not in a step-by-step fashion, so it took me some time to figure out.

To work with threaded applications, Verbose must run inside a thread of its own. It tries to make life easier for the majority case by starting that thread as soon as it is loaded. But building a standalone application requires that the running Lisp image contain only a single running thread, so the Verbose background thread prevents the binary from being built. The remedy is to prevent Verbose from starting its background thread at load time, and then start it manually inside the application.

When Verbose is loaded inside SLIME it prints to the REPL's *standard-output* without fuss, but when I loaded it inside my standalone binary it caused the application to crash. I did not investigate the *standard-output* connection logic, but I discovered that in a binary you must tell Verbose explicitly about the current *standard-output*, otherwise it won't work.

Steps:

1. (pushnew :verbose-no-init *features*)

This feature must be set before the Verbose system is loaded. It prevents Verbose from starting its main background thread, which it does by default immediately when it is loaded.

I added this form in the .asd file, immediately before my application's system definition. While executing code inside the .asd file is considered bad style, it was the cleanest way I found: otherwise I would have to set the feature in multiple places to cover both the development flow and the production build. There may be a better way to set *features* before a system is loaded, but I have not yet discovered it.

2. (v:output-here *standard-output*)

This form makes Verbose use *standard-output* as it currently exists. Leaving out this line was the cause of my application crashes. I am not sure of the exact cause, but I suspect Verbose tries to use SLIME's version of *standard-output* if you don't tell it otherwise, even when it is not running in SLIME.

This must be done before starting the Verbose background thread.

3. (v:start v:*global-controller*)

This starts the background thread that was suppressed in step 1.

4. (v:info :main "Hello world!")

Start logging.

I use systemd to run my applications. Systemd recommends that applications run in the foreground and print logs to the standard output. The application output is captured and logged in whichever way systemd is configured. On default installations this is usually in /var/log/syslog in the standard logging format which prepends the timestamp and some other information. Verbose also by default prints the timestamp in the logged message, which just adds noise and makes syslog difficult to read.

Verbose's logging format can be configured to be any custom format by subclassing its message class and providing the proper formatting method. This must be done before any other Verbose configuration.

Combining all of the above, the code looks like this.

In app.asd:

(pushnew :verbose-no-init *features*)

(defsystem #:app
  ...)

In app.lisp:

(defclass log-message (v:message) ())

(defmethod v:format-message ((stream stream) (message log-message))
  (format stream "[~5,a] ~{<~a>~} ~a"
          (v:level message)
          (v:categories message)
          (v:format-message NIL (v:content message))))

(defun run ()
  (setf v:*default-message-class* 'log-message)
  (v:output-here *standard-output*)
  (v:start v:*global-controller*)
  (v:info :main "Hello world!")

  ...)



### 15:28

People learning Lisp often try to learn how to write macros by taking an existing function they have written and turning it into a macro. This is a mistake: macros and functions serve different purposes and it is almost never useful to turn functions into macros, or macros into functions.

Let’s say you are learning Common Lisp[1], and you have written a fairly obvious factorial function based on the natural mathematical definition: if $$n \in \mathbb{N}$$, then

$$n! = \begin{cases} 1 & n \le 1 \\ n \times (n - 1)! & n > 1 \end{cases}$$

So this gives you a fairly obvious recursive definition of factorial:

(defun factorial (n)
  (if (<= n 1)
      1
      (* n (factorial (1- n)))))


And so, since you want to learn about macros, you wonder: can you write factorial as a macro? You might end up with something like this:

(defmacro factorial (n)
  `(if (<= ,n 1)
       1
       (* ,n (factorial ,(1- n)))))


And this superficially seems as if it works:

> (factorial 10)
3628800


But it doesn’t, in fact, work:

> (let ((x 3))
    (factorial x))

Error: In 1- of (x) arguments should be of type number.


Why doesn’t this work, and can it be fixed so it does? If it can’t, what has gone wrong? How are macros meant to work, and what are they useful for?

It can’t be fixed so that it works. Trying to rewrite functions as macros is a bad idea, and if you want to learn what is interesting about macros you should not start there.

To understand why this is true you need to understand what macros actually are in Lisp.

## What macros are: a first look

A macro is a function whose domain and range is syntax.

Macros are functions (quite explicitly so in CL: you can get at the function of a macro with macro-function, and this is something you can happily call the way you would call any other function), but they are functions whose domain and range is syntax. A macro is a function from a language whose syntax includes the macro to a language whose syntax doesn’t include it: called on a form of the first language, it returns an equivalent form in the second. It may work recursively: its value may be a language which includes the same macro but in some simpler way, such that the process will terminate at some point.
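
This is quite literal in CL: you can fetch a macro's expander function and call it on a form yourself. A small sketch using the standard and macro (the exact expansion is implementation-dependent):

```lisp
;; A macro is a function from syntax to syntax. MACRO-FUNCTION returns
;; that function: it takes a form and an environment and returns new syntax.
(let ((expander (macro-function 'and)))
  (funcall expander '(and a b c) nil))
;; Returns some simpler form, for example (if a (and b c)): syntax in,
;; syntax out. The exact result varies between implementations.
```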

So the job of macros is to provide a family of extended languages built on some core Lisp which has no remaining macros, only functions and function application, special operators & special forms involving them and literals. One of those languages is the language we call Common Lisp, but the macros written by people serve to extend this language into a multitude of variants.

As an example of this I often write in a language which is like CL, but is extended by the presence of a number of extra constructs, one of which is called ITERATE (but it predates the well-known one and is not at all the same):

(iterate next ((x 1))
  (if (< x 10)
      (next (1+ x))
      x))


is equivalent to

(labels ((next (x)
           (if (< x 10)
               (next (1+ x))
               x)))
  (next 1))


Once upon a time when I first wrote iterate, it used to manually optimize the recursive calls to jumps in some cases, because the Symbolics I wrote it on didn’t have tail-call elimination. That’s a non-problem in LispWorks[2]. Anyone familiar with Scheme will recognise iterate as named let, which is where it came from (once, I think, it was known as nlet).

iterate is implemented by a function which maps from the language which includes it to a language which doesn’t include it, by mapping the syntax as above.

So compare this with a factorial function: factorial is a function whose domain is natural numbers and whose range is also natural numbers, and it has an obvious recursive definition. Well, natural numbers are part of the syntax of Lisp, but they’re a tiny part of it. So implementing factorial as a macro is, really, a hopeless task. What should

(factorial (+ x y (f z)))


actually do when considered as a mapping between languages? Assuming you are using the recursive definition of the factorial function, the answer is that it can’t map to anything useful at all: a function which implements that recursive definition simply has to be called at run time. The very best you could do would seem to be this:

(defun fact (n)
  (if (<= n 1)
      1
      (* n (fact (1- n)))))

(defmacro factorial (expression)
  `(fact ,expression))


And that’s not a useful macro (but see below).

So the answer is, again, that macros are functions which map between languages and they are useful where you want a new language: not just the same language with extra functions in it, but a language with new control constructs or something like that. If you are writing functions whose range is something which is not the syntax of a language built on Common Lisp, don’t write macros.
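
For contrast, here is the kind of construct macros actually are for: a form that must wrap unevaluated code, which no function could do because a function receives arguments already evaluated. (This with-timing macro is my own illustration, not from the text above.)

```lisp
;; A new control construct: run BODY, return its value(s), and report
;; elapsed wall-clock time. It must be a macro because it needs the
;; unevaluated body forms, not their values.
(defmacro with-timing (&body body)
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (multiple-value-prog1
           (progn ,@body)
         (format *trace-output* "~&elapsed: ~,3F s~%"
                 (/ (- (get-internal-real-time) ,start)
                    internal-time-units-per-second))))))

;; (with-timing (sleep 0.1)) returns NIL and prints an elapsed time
;; of roughly 0.100 s.
```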

## What macros are: a second look

Macroexpansion is compilation.

A function whose domain is one language and whose range is another is a compiler for the language of the domain, especially when that language is somehow richer than the language of the range, which is the case for macros.

But it’s a simplification to say that macros are this function: they’re not, they’re only part of it. The actual function which maps between the two languages is made up of macros and the macroexpander provided by CL itself. The macroexpander is what arranges for the functions defined by macros to be called in the right places, and it is also the thing which arranges for various recursive macros to actually make up a recursive function. So it’s important to understand that the macroexpander is a critical part of the process: macros on their own only provide part of it.

## An example: two versions of a recursive macro

People often say that you should not write recursive macros, but this prohibition on recursive macros is pretty specious: they’re just fine. Consider a language which only has lambda and doesn’t have let. Well, we can write a simple version of let, which I’ll call bind as a macro: a function which takes this new language and turns it into the more basic one. Here’s that macro:

(defmacro bind ((&rest bindings) &body forms)
  `((lambda ,(mapcar #'first bindings) ,@forms)
    ,@(mapcar #'second bindings)))


And now

> (bind ((x 1) (y 2))
    (+ x y))
(bind ((x 1) (y 2)) (+ x y))
-> ((lambda (x y) (+ x y)) 1 2)
3


(These example expansions come via use of my trace-macroexpand package, available in a good Lisp near you: see appendix for configuration).

So now we have a language with a binding form which is more convenient than lambda. But maybe we want to be able to bind sequentially? Well, we can write a let* version, called bind*, which looks like this:

(defmacro bind* ((&rest bindings) &body forms)
  (if (null (rest bindings))
      `(bind ,bindings ,@forms)
      `(bind (,(first bindings))
         (bind* ,(rest bindings) ,@forms))))


And you can see how this works: it checks if there’s just one binding in which case it’s just bind, and if there’s more than one it peels off the first and then expands into a bind* form for the rest. And you can see this working (here both bind and bind* are being traced):

> (bind* ((x 1) (y (+ x 2)))
    (+ x y))
(bind* ((x 1) (y (+ x 2))) (+ x y))
-> (bind ((x 1)) (bind* ((y (+ x 2))) (+ x y)))
(bind ((x 1)) (bind* ((y (+ x 2))) (+ x y)))
-> ((lambda (x) (bind* ((y (+ x 2))) (+ x y))) 1)
(bind* ((y (+ x 2))) (+ x y))
-> (bind ((y (+ x 2))) (+ x y))
(bind ((y (+ x 2))) (+ x y))
-> ((lambda (y) (+ x y)) (+ x 2))
(bind* ((y (+ x 2))) (+ x y))
-> (bind ((y (+ x 2))) (+ x y))
(bind ((y (+ x 2))) (+ x y))
-> ((lambda (y) (+ x y)) (+ x 2))
4


You can see that, in this implementation (LW again), some of the forms are expanded more than once: that’s not uncommon in interpreted code, and since macro functions should generally be free of side-effects it does not matter that they may be expanded multiple times. Compilation will expand macros and then compile the result, so all the overhead of macroexpansion happens ahead of run time:

> (defun foo (x)
    (bind* ((y (1+ x)) (z (1+ y)))
      (+ y z)))
foo

> (compile *)
(bind* ((y (1+ x)) (z (1+ y))) (+ y z))
-> (bind ((y (1+ x))) (bind* ((z (1+ y))) (+ y z)))
(bind ((y (1+ x))) (bind* ((z (1+ y))) (+ y z)))
-> ((lambda (y) (bind* ((z (1+ y))) (+ y z))) (1+ x))
(bind* ((z (1+ y))) (+ y z))
-> (bind ((z (1+ y))) (+ y z))
(bind ((z (1+ y))) (+ y z))
-> ((lambda (z) (+ y z)) (1+ y))
foo
nil
nil

> (foo 3)
9


There’s nothing wrong with macros like this, which expand into simpler versions of themselves. You just have to make sure that the recursive expansion process is producing successively simpler bits of syntax and has a well-defined termination condition.

Macros like this are often called ‘recursive’ but they’re actually not: the function associated with bind* does not call itself. What is recursive is the function implicitly defined by the combination of the macro function and the macroexpander: the bind* function simply expands into a bit of syntax which it knows will cause the macroexpander to call it again.
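
You can watch that division of labour directly with macroexpand-1, which performs exactly one step: the macro function runs once and hands back syntax still containing bind*, which the macroexpander would then feed to the same function again. (This assumes the bind and bind* definitions above.)

```lisp
;; One step of the implicit recursion: the bind* macro function runs once.
(macroexpand-1 '(bind* ((x 1) (y 2)) (+ x y)))
;; -> (bind ((x 1)) (bind* ((y 2)) (+ x y)))
;; The result still contains bind*: it is the macroexpander, not the macro
;; function itself, that turns this into a recursive process.
```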

It is possible to write bind* such that the macro function itself is recursive:

(defmacro bind* ((&rest bindings) &body forms)
  (labels ((expand-bind (btail)
             (if (null (rest btail))
                 `(bind ,btail
                    ,@forms)
                 `(bind (,(first btail))
                    ,(expand-bind (rest btail))))))
    (expand-bind bindings)))


And now compiling foo again results in this output from tracing macroexpansion:

(bind* ((y (1+ x)) (z (1+ y))) (+ y z))
-> (bind ((y (1+ x))) (bind ((z (1+ y))) (+ y z)))
(bind ((y (1+ x))) (bind ((z (1+ y))) (+ y z)))
-> ((lambda (y) (bind ((z (1+ y))) (+ y z))) (1+ x))
(bind ((z (1+ y))) (+ y z))
-> ((lambda (z) (+ y z)) (1+ y))


You can see that now all the recursion happens within the macro function for bind* itself: the macroexpander calls bind*’s macro function just once.

While it’s possible to write macros like this second version of bind*, it is normally easier to write the first version and to allow the combination of the macroexpander and the macro function to implement the recursive expansion.

## Two historical uses for macros

There are two uses for macros — both now historical — where they were used where functions would be more natural.

The first of these is function inlining, where you want to avoid the overhead of calling a small function many times. This overhead was a lot on computers made of cardboard, as all computers were, and also if the stack got too deep the cardboard would tear and this was bad. It makes no real sense to inline a recursive function such as the above factorial: how would the inlining process terminate? But you could rewrite a factorial function to be explicitly iterative:

(defun factorial (n)
  (do* ((k 1 (1+ k))
        (f k (* f k)))
       ((>= k n) f)))


And now, if you had very many calls to factorial and wanted to optimise away the function call overhead, and it was 1975, you might write this:

(defmacro factorial (n)
  `(let ((nv ,n))
     (do* ((k 1 (1+ k))
           (f k (* f k)))
          ((>= k nv) f))))


And this has the effect of replacing (factorial n) by an expression which will compute the factorial of n. The cost of that is that (funcall #'factorial n) is not going to work, and (funcall (macro-function 'factorial) ...) is never what you want.

Well, that’s what you did in 1975, because Lisp compilers were made out of the things people found down the sides of sofas. Now it’s no longer 1975 and you just tell the compiler that you want it to inline the function, please:

(declaim (inline factorial))
(defun factorial (n) ...)


and it will do that for you. So this use of macros is now purely historical.

The second reason for macros where you really want functions is computing things at compile time. Let’s say you have lots of expressions like (factorial 32) in your code. Well, you could do this:

(defmacro factorial (expression)
  (typecase expression
    ((integer 0)
     (factorial/fn expression))
    (number
     (error "factorial of non-natural literal ~S" expression))
    (t
     `(factorial/fn ,expression))))


So the factorial macro checks to see if its argument is a literal natural number and will compute the factorial of it at macroexpansion time (so, at compile time or just before compile time). So a function like

(defun foo ()
  (factorial 32))


will now compile to simply return 263130836933693530167218012160000000. And, even better, there’s some compile-time error checking: code which is, say, (factorial 12.3) will cause a compile-time error.

Well, again, this is what you would do if it was 1975. It’s not 1975 any more, and CL has a special tool for dealing with just this problem: compiler macros.

(defun factorial (n)
  (do* ((k 1 (1+ k))
        (f k (* f k)))
       ((>= k n) f)))

(define-compiler-macro factorial (&whole form n)
  (typecase n
    ((integer 0)
     (factorial n))
    (number
     (error "literal number is not a natural: ~S" n))
    (t form)))


Now factorial is a function and works the way you expect — (funcall #'factorial ...) will work fine. But the compiler knows that if it comes across (factorial ...) then it should give the compiler macro for factorial a chance to say what this expression should actually be. And the compiler macro does an explicit check for the argument being a literal natural number, and if it is computes the factorial at compile time, and the same check for a literal number which is not a natural, and finally just says ’I don’t know, call the function’. Note that the compiler macro itself calls factorial, but since the argument isn’t a literal there’s no recursive doom.
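
You can poke at this mechanism directly: compiler-macro-function returns the expander the compiler consults, and calling it by hand shows both cases. (This sketch assumes the factorial definitions just above.)

```lisp
;; The compiler macro maps a call form to a replacement form, or returns
;; the original form to mean "just call the function".
(funcall (compiler-macro-function 'factorial) '(factorial 10) nil)
;; -> 3628800, a literal computed at expansion time

(funcall (compiler-macro-function 'factorial) '(factorial x) nil)
;; -> (factorial x), unchanged: the compiler falls back to the function call
```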

So this takes care of the other antique use of macros where you would expect functions. And of course you can combine this with inlining and it will all work fine: you can write functions which will handle special cases via compiler macros and will otherwise be inlined.

That leaves macros serving the purpose they are actually useful for: building languages.

## Appendix: setting up trace-macroexpand

(use-package :org.tfeb.hax.trace-macroexpand)

;;; Don't restrict print length or level when tracing
(setf *trace-macroexpand-print-level* nil
      *trace-macroexpand-print-length* nil)

;;; Enable tracing
(trace-macroexpand)

;;; Trace the macros you want to look at ...
(trace-macro ...)

;;; ... and untrace them
(untrace-macro ...)


1. All the examples in this article are in Common Lisp except where otherwise specified. Other Lisps have similar considerations, although macros in Scheme are not explicitly functions in the way they are in CL.

2. This article originated as a message on the lisp-hug mailing list for LispWorks users. References to ‘LW’ mean LispWorks, although everything here should apply to any modern CL. (In terms of tail call elimination I would define a CL which does not eliminate tail self-calls in almost all cases under reasonable optimization settings as pre-modern: I don’t use such implementations.)

People sometimes ask which is the best Lisp dialect? That’s a category error, and here’s why.

Programming in Lisp — any Lisp — is about building languages: in Lisp the way you solve a problem is by building a language — a jargon, or a dialect if you like — to talk about the problem and then solving the problem in that language. Lisps are, quite explicitly, language-building languages.

This is, in fact, how people solve large problems in all programming languages: Greenspun’s tenth rule isn’t really a statement about Common Lisp, it’s a statement that all sufficiently large software systems end up having some hacked-together, informally-specified, half-working language in which the problem is actually solved. Often people won’t understand that the thing they’ve built is in fact a language, but that’s what it is. Everyone who has worked on large-scale software will have come across these things: often they are very horrible, and involve much use of language-in-a-string[1].

The Lisp difference is two things: when you start solving a problem in Lisp, you know, quite explicitly, that this is what you are going to do; and the language has wonderful tools which let you incrementally build a series of lightweight languages, ending up with one or more languages in which to solve the problem.

So, after that preface, why is this question the wrong one to ask? Well, if you are going to program in Lisp you are going to be building languages, and you want those languages not to be awful. Lisp makes it far easier to build languages which are not awful, but it doesn’t prevent you doing so if you want to. And again, anyone who has dealt with enough languages built on Lisps will have come across some which are, in fact, awful.

If you are going to build languages then you need to understand how languages work — what makes a language habitable to its human users (the computer does not care with very few exceptions). That means you will need to be a linguist. So the question then is: how do you become a linguist? Well, we know the answer to that, because there are lots of linguists and lots of courses on linguistics. You might say that, well, those people study natural languages, but that’s irrelevant: natural languages have been under evolutionary pressure for a very long time and they’re really good for what they’re designed for (which is not the same as what programming languages are designed for, but the users — humans — are the same).

So, do you become a linguist by learning French? Or German? Or Latin? Or Cuzco Quechua? No, you don’t. You become a linguist by learning enough about enough languages that you can understand how languages work. A linguist isn’t someone who speaks French really well: they’re someone who understands that French is a Romance language, that German isn’t but has many Romance loan words, that English is closer to German than it is French but got a vast injection of Norman French, which in turn wasn’t that close to modern French, that Swiss German has cross-serial dependencies but Hochdeutsch does not and what that means, and so on. A linguist is someone who understands things about the structure of languages: what do you see, what do you never see, how do different languages do equivalent things? And so on.

The way you become a linguist is not by picking a language and learning it: it’s by looking at lots of languages enough to understand how they work.

If you want to learn to program in Lisp, you will need to become a linguist. The very best way to ensure you fail at that is to pick a ‘best’ Lisp and learn that. There is no best Lisp, and in order to program well in any Lisp you must be exposed to as many Lisps and as many other languages as possible.

If you think there’s a distinction between a ‘dialect’, a ‘jargon’ and a ‘language’ then I have news for you: there is. A language is a dialect with a standards committee. (This is stolen from a quote due to Max Weinreich that all linguists know:

אַ שפּראַך איז אַ דיאַלעקט מיט אַן אַרמיי און פֿלאָט

a shprakh iz a dyalekt mit an armey un flot: a language is a dialect with an army and a navy.)

1. ‘Language-in-a-string’ is where a programming language has another programming language embedded in strings in the outer language. Sometimes programs in that inner programming language will be made up by string concatenation in the outer language. Sometimes that inner language will, in turn, have languages embedded in its strings. It’s a terrible, terrible thing.

Recently, the awesome-lisp-companies list was posted on HN and more people got to know it (the list is fan-curated: we add companies when we learn about one, often by chance, so don’t assume it’s anything “official” or exhaustive). Alex Nygren informed us that his company Kina Knowledge uses Common Lisp in production:

We use Common Lisp extensively in our document processing software core for classification, extraction and other aspects of our service delivery and technology stack.

He very kindly answered more questions.

## Thanks for letting us know about Kina Knowledge. A few more words if you have time? What implementation(s) are you using?

We use SBCL for all our Common Lisp processes. It’s easier with the standardization on a single engine, but we have also gotten tied to it in parts of our code base by using the built-in SBCL-specific extensions. I would like, but have no bandwidth, to evaluate CCL as well, especially on the Windows platform, where SBCL is weakest. Since our clients use Windows systems attached to scanners, we need to be able to support it with a client runtime.

Development is on MacOS with Emacs or Ubuntu with Emacs for CL, and then JetBrains IDEs for Ruby and JS and Visual Studio for some interface code to SAP and such. We develop the Kina UI in Kina itself using our internal Lisp, which provides a similar experience to Emacs/SLY.

## What is not Lisp in your stack? For example, in “Kina extracts information from PDF, TIFFs, Excel, Word and more” as we read on your website.

Presently we use a Rails/Ruby environment for driving our JSON based API, and some legacy web functions. However, increasingly, once the user is logged in, they are interacting with a Common Lisp back end via a web socket (Hunchentoot and Hunchensocket) interacting with a Lisp based front end. Depending on the type of information extraction, the system uses Javascript, Ruby and Common Lisp. Ideally, I’d like to get all the code refactored into a prefix notation, targeting Common Lisp or DLisp (what we call our internal Lisp that compiles into Javascript).

## What’s your position on open-source: do you use open-source Lisp libraries, do you (plan to) open-source some?

Yes. We recently put our JSON-LIB (https://github.com/KinaKnowledge/json-lib) out on Github, which is our internal JSON parser and encoder and we want to open source DLisp after some clean-up work. Architecturally, DLisp can run in the browser, or in sandboxed Deno containers on the server side, so we can reuse libraries easily. It’s not dependent on a server-side component though to run.

Library-wise, we strictly limit how many third-party libraries (especially from the NPM ecosystem) we depend on, particularly in the Javascript world. In CL, we use the standard stuff like Alexandria, Hunchentoot, Bordeaux Threads, and things like zip.

## How did hiring and forming lisp or non-lisp developers go? Did you look for experienced lispers or did you seek experienced engineers, even with little to no prior Lisp background?

Because we operate a lot in Latin America, I trained non-Lisper engineers who speak Spanish to program Lisp, specifically our DLisp, since most customizations occur around the user interface and workflows for document-centric processes, such as presenting linked documents and their data in specific ways. How well they took to the Lisp way of thinking really depended on their aptitude for programming and their English capabilities to understand me and the system. The user system is multilingual, but the development documentation is all in English. But it was really amazing when I saw folks who are experienced with Javascript and .Net get the ideas of Lisp and how compositional it can be as you build up towards a goal.

Besides, with DLisp, you can on the fly construct a totally new UI interaction - live - in minutes and see changes in the running app without the dreadful recompile-and-reload everything cycle that is typical. Instead, just recompile the function (analogous to C-c, C-c in Emacs), in the browser, and see the change. Then these guys would go out and interact with clients and build stuff. I knew once I saw Spanish functions and little DSLs showing up in organizational instances that they were able to make progress. I think it is a good way to introduce people to Lisp concepts without having to deal with the overhead of learning Emacs at the same time. I pushed myself through that experience when I first was learning CL, and now use Emacs every day for a TON of work tasks, but at the beginning it was tough, and I had to intentionally practice getting to the muscle memory that is required to be truly productive in a tool.

## How many lispers are working together, how big a codebase do you manage?

Right now, in our core company we have three people, two here in Virginia and one in Mexico City. We use partners that provide services such as scanning and client integration work. We are self-funded and have grown organically, which is freeing because we are not beholden to investor needs. We maintain maximum flexibility, at the expense of capital. Which is OK for us right now. Lisp allows us to scale dramatically and manage a large code base. I haven’t line counted recently, but it exceeds 100K lines across server and client, with > 50% in Lisp.

## Do you sometimes wish the CL (pro) world was more structured? (we have a CL Foundation but not so much active).

I really like the Common Lisp world. I would like it to be more popular, but at the same time, it is a differentiator for us. It is fast - our spatial classifier takes only milliseconds to come to a conclusion about a page (there is additional time prior to this step due to the OpenCV processing - but not too much) and identify it and doesn’t require expensive hardware. Most of our instances run on ARM-64, which at least at AWS, is 30% or so cheaper than x86-64. The s-expression structures align to document structures nicely and allow a nice representation that doesn’t lose fidelity to the original layouts and hierarchies. I am not as active as I would like to be in the Common Lisp community, mainly due to time and other commitments. I don’t know much about the CL foundation.

## And so, how did you end up with CL?

Our UI was first with the DLisp concepts. I was intrigued by Clojure for the server portion, but I couldn’t come to terms with the JVM and how heavyweight it is. The server-side application was outgrowing the Rails architecture in terms of what we wanted to do with it, and, at the time, 4 years ago, Ruby was slower. In fact, Ruby had become a processing bottleneck for us (though I am certain the code could have been improved too). I liked the idea of distributing binary applications as well, which we needed to do in some instances, and building a binary runtime of the software was a great draw, too.

I also liked how well CL is thought out, from a spec standpoint. It is stable both in terms of performance and change. I had been building components with TensorFlow and Python 3, but for what I wanted to do, I couldn’t see how I could get there with back propagation and the traditional “let’s calculate the entire network state” approach. If you don’t have access to high-end graphics cards, it’s just too slow and too heavy. I was able to get what we needed to do in CL after several iterations and dramatically improve speed and resource utilization. I am very happy with that outcome. We are in what I consider to be a hard problem space: we take analog representations of information, a lot of it of poor quality, and convert it to clean, structured digital information. CL is the core of that for us.

Here is an example of our UI, where extractions and classification can be managed. This is described in DLisp which interacts with a Common Lisp back end via a web socket.

Here is the function for the above view being edited in Kina itself. We do not obfuscate our client code, and all code that runs on our clients’ computers is fully available to view and, with the right privileges, to modify and customize. You can see the Extract Instruction Language in the center pane, which takes ideas from the Logo language in terms of a cursor (aka the turtle) that can be moved around relative to the document. We build this software to be used by operations teams and having a description language that is understandable by non-programmers such as auditors and operations personnel, is very useful. You can redefine aspects of the view or running environment and the change can take effect on the fly. Beyond the Javascript boot scaffolding to get the system started up in the browser, everything is DLisp communicating with Common Lisp and, depending on the operation, Rails.

I hope this information is helpful!

It is, thanks again!

### 14:14

Video demo of a new Drummer feature. Headlines whose type is 'rss' can be expanded to reveal the items in the feed it connects to.

## Friday, 19 November

### 22:14

A new Drummer verb, rss.readFeed, returns a JavaScript object. This used to be a complex thing to do, but now it's absolutely as simple as it can be. I did something new here: I'm only passing through values that are documented in the RSS 2.0 spec and the source namespace. That's not as draconian as it might sound, because I'm using a Node package written by my friend Dan MacTough that handles all the common feed flavors, including Atom. His code translates the variants to a common vocabulary, and I further winnow it down to just the basic concepts. To some extent, RSS grew in an unruly, chaotic way, but imho the core is solid. After all this time, with Drummer as a new platform, I think it's a good time to pause and create (hopefully) a simpler future that can work better.
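The winnowing step can be sketched in a few lines. The field lists and the `readFeedObject` name below are illustrative stand-ins, not the verb's actual internals or its real whitelist:

```javascript
// Sketch: reduce a parsed feed (already normalized to a common vocabulary
// by the feed parser) down to a core set of RSS 2.0-style fields.
// Field names here are illustrative, not Drummer's actual list.
const coreChannelFields = ["title", "link", "description", "language", "copyright", "pubDate"];
const coreItemFields = ["title", "link", "description", "guid", "pubDate", "enclosure", "categories"];

function winnow(obj, allowed) {
  const result = {};
  for (const name of allowed) {
    if (obj[name] !== undefined) {
      result[name] = obj[name]; // keep only whitelisted fields
    }
  }
  return result;
}

function readFeedObject(parsedFeed) {
  const feed = winnow(parsedFeed, coreChannelFields);
  feed.items = (parsedFeed.items || []).map(item => winnow(item, coreItemFields));
  return feed;
}
```

Anything the parser surfaced beyond the core vocabulary is simply dropped, which is the "simpler future" trade-off described above.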

### 16:56

Something I've been hearing from employees of big tech companies for my entire career, going back to the late 70s --> "Who the fuck are you?"

That's what they say and do. So many examples. And almost all of them are gone. They were significant, maybe, for a few weeks. Then poof, some other asshole at some other tech giant comes along, and gets his or her (mostly his) few minutes to be a super asshole.

How much more would we get done if we lived up to the hype about supporting innovation? You can't do a lot of that as some random putz inside of a bigco.

You pretty much have to do what I've done, which is stay out of those monstrous structures, that is, if you want to actually do anything.

And then of course the jerks come along and kick over your sand castle.

And then they're gone in a few weeks.

So far I've survived them all. Knock wood, praise murph, etc.

### 15:28

A discussion about how Markdown should be processed for Drummer blogs. Basically: what role, if any, should indentation play, and how many newlines should we generate for each line in the outline? My current position -- indentation should play no role in the Markdown we generate from the outline; it should be ignored. And we should generate one newline for every line in the outline. Note this is not how Drummer works now.
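Under that proposal the generator is nearly trivial. A minimal sketch, assuming an outline node is just `{ text, subs }` (an illustrative shape, not Drummer's actual internal structure):

```javascript
// Sketch of the proposed rule: indentation is ignored, and each outline
// line becomes exactly one line of Markdown.
function outlineToMarkdown(nodes) {
  const lines = [];
  function visit(node) {
    lines.push(node.text);            // depth contributes nothing to output
    (node.subs || []).forEach(visit); // descend, but flatten
  }
  nodes.forEach(visit);
  return lines.join("\n") + "\n";     // one newline per outline line
}
```

The point of the rule is that what you see here is all there is: no indentation-driven code blocks, no extra blank lines.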

### 14:42

Here she is, my new blue Tesla Model 3.

Drives great.

A bit hard to get in and out of.

Very comfortable to drive.

Quiet.

It's charging now in the garage.

It's as powerful as any car I've ever driven, including my 2007 BMW 535i.

Most of the Tesla people at the Mt Kisco store were assholes, but they finally gave me someone who talked like a New Yorker. Most of the Tesla people have the same smarmy attitude that they're doing you some kind of favor to sell you a hugely expensive car, like at an Apple store. The guy we finally got talked to us like human beings who love computers and are excited to own a new kind of car.

Thanks to Peter Politi for taking the trip with me.

I’d forgotten I’d committed to taking over the Overture Center’s Instagram account, today, as I’ve got a piece in their “Everything Covid” exhibit. Less work done today than I’d hoped. Staying off the internet is usually a good thing. The first actual piece of work I managed to get to wasn’t until 11:30. My best days begin with me getting a ton of work done in the morning.

I realize I said I’d keep these things short, but I just wanted to talk a bit about my afternoon routine.

My 13-year-old gets out of school at 3:45. However, I have to be there by 3:10 to be one of the first in the car line, to pick her up, otherwise I won’t get her until 4 or 4:10. It’s just the way the car line works, at her school. So I try to bring my iPad Pro with me so I can at least get a little extra work done, while waiting.

After this, there were errands to run, three stops, quick ones, but still, we didn’t get home until 5:10.

Of course, now, supper preparation begins. It was family Pizza/Popcorn/Movie night, moved up from Friday, as I’m appearing on the Trash Heroes live-play 5E campaign, as a guest.

Anyway, if you want a fantastic pizza recipe, we now use King Arthur’s Cheesy Crispy Pan Pizza. It’s virtually foolproof (and I’m the fool to prove it). Despite what the recipe dictates, I can start the dough in the morning – or even around lunchtime, in extreme circumstances (today) – and have amazing pizza out of the oven by 6 pm that night.

I’m still getting comments back from the proofreaders. These are easy changes, but I need to keep them all straight, so that Layout Guru Hal knows which pages need replacing.

Trying to find a non-ableist word to use, instead of “Lame.” I’m hoping “Weaksauce” works. If not, then “Weak.”

Spent a couple of hours fixing page 192. I’m not sure what the heck happened to it. A bad scan, a while back (months? years?), or a poor transfer. The line quality was incredibly jagged.

I was mostly able to fix it on the iPad Pro while the family was watching “The Book of Life” for movie night. But still, ARGH!

Heartening to see that pages 193-194 (drawn and scanned in at the same time) do not have this problem.

• John

Notes coming back from the proofreaders.

A lot of notes.

This is a good thing, of course. But I had assumed pages 1 – 160 were close to being DONE done.

Most embarrassing catch?

“First Edition: Published January, 2021.”

• John

### 12:28

Jani P. relates "I ran into this appropriate CAPTCHA when filling out a lengthy, bureaucratic visa application form." (For our readers unfamiliar with the Anglo argot, "fricking"...

## Thursday, 18 November

### 21:49

Today Volker Birk (https://fdik.org/) and I were speaking over lunch about object initialisation in C++ and about how weakly defined a program entry point is, because of objects with static storage duration. Volker wrote a short program whose output changes after reversing the order of two variable definitions, both outside a ‘main’ function whose entire body was ‘return 0;’. He discussed it in German (https://blog.fdik.org/2021-11/s1637238415) on his blog (https://blog.fdik.org). I was more annoyed by the fact that initialisation order is not guaranteed to respect functional dependency across compilation units. Here is my test case, where GCC and the GNU ... [Read more]

### 13:42

When you view a normal XML file in the browser, Chrome gives you a simple, readable XML viewer.

I want to be able to invoke that viewer from a JavaScript app running in the browser.

Some kind of JavaScript call like

• window.xmlViewer(xmltext);

The only other option is to rewrite it from scratch, which I'm open to doing but would rather not. Does anyone know if such a thing exists?

BTW, one of the things that kind of silently limits the size of RSS is the way browsers display RSS feeds. They can handle other XML types in a normal sensible way. But the RSS viewer is broken. I wish they would just let it be, and show it as a normal XML file.

### 12:56

I did a refresh on the ArtShow collection yesterday, a few hundred more classic paintings. Free to download, or use via web.

### 12:07

Alan was recently reviewing some of the scriptlets his company writes to publish their RPM installers. Some of the script quality has been… questionable in the past, so Alan wanted to do some code...

### 06:21

The post 1558 appeared first on Looking For Group.

### 00:07

Yesterday, I posted this to Instagram. My caption said that I could tell just by looking at those two guys that they used to be cool. That’s a reference, a […]

## Wednesday, 17 November

### 22:21

I'd love to discuss Markdown with people who know Markdown and use Drummer. We should have a Markdown nodetype. I've been playing around with it, but I'm not satisfied with what I have yet, not ready to lock it down. Here's an RFC with a link to an example.

### 19:14

Thinking of a Chromebook, for reading news over breakfast, possibly taking it with me to a coffee place to do a little writing.

Running an Electron app in Linux? Is this possible?

It should have a couple of USB ports, bluetooth, a nice keyboard, mike, camera, etc.

Looks nice. Light.

Under $200? Comment here.

PS: I went with the Lenovo Chromebook Flex 5.

### 18:28

Hello there! Welcome to what I hope will be a sort of TAO OF IGOR Countdown Journal, as I finish off the last bits and pieces (most small – one large) of the new collection.

Obviously, completing THE TAO OF IGOR’s taken far longer than I could have ever imagined. TLDR: because I didn’t get the work done. Also TLDR: I badly underestimated how much work was left to do. But why was that? I have thoughts – many thoughts – but I’ll get to those after the book is done.

The goal of this mini-journal is to make sure I cross something – anything – off THE TAO OF IGOR’s shrinking to-do list every single day: to keep me focused as it comes to its end. These entries won’t be long, as I need to finish the book rather than spend hours playing with prose here. But it will, I hope, be a way for folks to check in on how things are going, and what I’ve done that day (I can’t promise this will be daily, but that’s my goal). Every little thing checked off the list from here on out goes a long way to wrapping this project up.

Today, thanks to help from layout guru (and great friend) Hal Mangold, pages 1-160 went to the proofreaders. I had thought these had been thoroughly checked before, and many typos corrected, but the new proofreaders found some mistakes that had been missed. I’ll get to correcting those this weekend, once all comments are in.

Pages 161-191 are nearly ready to go off to the proofreaders. This will leave pages 192-208 (the end of the last chapter) to be prepared for them, along with my Afterword, a page of thanks, a small memorial to two friends, and a post-credit scene that will end the book. THE TAO OF IGOR will officially lock in at 220 pages, by far the largest DORK TOWER collection yet.

Everything is drawn. I’ve started coloring the so-dubbed “Sistine Pages,” the massive double-page spread, which is the only big item left.

Here’s a fun little two-thirds of a page that occurs halfway through the book: dialog and bottom third of the page cut out to avoid spoilers. Readers of the DORK TOWER web strip will recognize Stell’s Sabine Wren costume. As the events in THE TAO OF IGOR take place a few years ago, this is kinda canonically and all-officially-like her introduction to the DORK TOWER universe.

I’m not sure if I’ll post these entries at the end of the day, or the morning after. We’ll see which works best. Until then, I remain indebted to you all, and appreciate your support deeply,

• John

### 17:07

Grouping For Looks is a page-by-page retelling of the Looking For Group saga through the lens of a mirror universe where Cale is a goateed tyrant and Richard is a holy soul trying to set him on a good path. […] The post GFL – Page 0072 appeared first on Looking For Group.

### 16:56

I've had my Peloton for a couple of weeks now. I've done a session every day for the last six days. At first I didn't think I wanted to do the classes, but the second time I tried a class I was intrigued, and by the third, I was hooked.

The exercise on a Peloton is more than twice what you get riding around the mountains on an e-bike. It's hard work going up the hills, but you get to just have fun going down; there's no work involved. So 20 minutes of Peloton riding equals 40 minutes of road riding. Only it's even more, because you're working harder. Always harder.

My legs feel stronger and healthier, and overall I feel that way too. An entirely positive experience, and my idea of what exercise is about is shifting in an interesting way.

And -- tomorrow I get my Tesla Model 3. So I expect some more major horizon busting is to come.

### 16:14

A podcast about Project Glorp.

### 15:28

The next Drummer-related project is called Glorp. Glorp is for you if you document projects on GitHub. It's good at managing docs across lots of projects, all in one outline.

I'd like to get a small group of people who do a lot of doc-writing for GitHub to review the design and docs. If you're part of the test group, you are making a commitment to use the software, to report problems, and to let us know if fixes worked. I'm looking forward to working on this software with some excellent developer-docs kind of people.

If you'd like to be part of the test group, please fill out this form. Glorp!

PS: A podcast about Glorp.

### 11:49

As a general rule, don't invent your own file format until you have to, and even then, probably don't. But sometimes, you have to. Tim C's company was building a format they called...

### 08:28

### 06:28

### 05:42

### 02:42

If I had a billion dollars, I'd buy a TLD and sell domains for 10 cents each, to see what would happen.

### 01:56

Malynda Hale: "Imagine if Kyle Rittenhouse were a Black teenager."

There's a dichotomy in outliners --

1. The outliner as a document, like a spreadsheet or word processing file. A file format.

2. The outliner as a file system, a container for documents, a way of organizing them.

Drummer, used as a blogging tool, is a solid #2.

### 00:07

## Tuesday, 16 November

### 22:14

In 2006 I ran into PC columnist John Dvorak at the Apple store in downtown San Francisco. We got talking about how he was a troll, and he told me to turn on my video camera and he explained how trolling works. A classic.

### 16:14

Tuesday, YOU are the star! We curate our favourites from the previous week’s comments on lfg.co and Facebook and remind you how clever you are. Here are your top comments for Looking For Group pages 1555 – 1556 Looking For […] The post Top Comments – Pages 1555 – 1556 appeared first on Looking For Group.

### 13:14

In the past decade, the growth in low-code and no-code solutions—promising that anyone can create simple computer programs using templates—has become a multi-billion-dollar industry that touches everything from data and business analytics to application building and automation.
As more companies look to integrate low-code and no-code solutions into their digital transformation plans, the question emerges again and again: what will happen to programming?

Programmers know their jobs won’t disappear with a broadscale low-code takeover (even low-code is built on code), but undeniably their roles as programmers will shift as more companies adopt low-code solutions. This report is for programmers and software development teams looking to navigate that shift and understand how low-code and no-code solutions will shape their approach to code and coding. It will be fundamental for anyone working in software development—and, indeed, anyone working in any business that is poised to become a digital business—to understand what low-code means, how it will transform their roles, what kinds of issues it creates, why it won’t work for everything, and what new kinds of programmers and programming will emerge as a result.

## Everything Is Low-Code

Low-code: what does it even mean? “Low-code” sounds simple: less is more, right? But we’re not talking about modern architecture; we’re talking about telling a computer how to achieve some result. In that context, low-code quickly becomes a complex topic.

One way of looking at low-code starts with the spreadsheet, which has a pre-history that goes back to the 1960s—and, if we consider paper, even earlier. It’s a different, non-procedural, non-algorithmic approach to doing computation that has been wildly successful: is there anyone in finance who can’t use Excel? Excel has become table stakes. And spreadsheets have enabled a whole generation of businesspeople to use computers effectively—most of whom have never used any other programming language, and wouldn’t have wanted to learn a more “formal” programming language. So we could think about low-code as tools similar to Excel, tools that enable people to use computers effectively without learning a formal programming language.
Another way of looking at low-code is to take an even bigger step back, and look at the history of programming from the start. Python is low-code relative to C++; C and FORTRAN are low-code relative to assembler; assembler is low-code relative to machine language and toggling switches to insert binary instructions directly into the computer’s memory. In this sense, the history of programming is the history of low-code. It’s a history of democratization and reducing barriers to entry. (Although, in an ironic and unfortunate twist, many of the people who spent their careers plugging in patch cords, toggling in binary, and doing math on mechanical calculators were women, who were later forced out of the industry as those jobs became “professional.” Democratization is relative.)

It may be surprising to say that Python is a low-code language, but it takes less work to accomplish something in Python than in C; rather than building everything from scratch, you’re relying on millions of lines of code in the Python runtime environment and its libraries.

In taking this bigger-picture, language-based approach to understanding low-code, we also have to take into account what the low-code language is being used for. Languages like Java and C++ are intended for large projects involving collaboration between teams of programmers. These are projects that can take years to develop, and run to millions of lines of code. A language like bash or Perl is designed for short programs that connect other utilities; bash and Perl scripts typically have a single author, and are frequently only a few lines long. (Perl is legendary for inscrutable one-liners.) Python is in the middle. It’s not great for large programs (though it has certainly been used for them); its sweet spot is programs that are a few hundred lines long. That position between big code and minimal code probably has a lot to do with its success.
A successor to Python might require less code (and be a “lower-code” language, if that’s meaningful); it would almost certainly have to do something better. For example, R (a domain-specific language for stats) may be a better language for doing heavy-duty statistics, and we’ve been told many times that it’s easier to learn if you think like a statistician. But that’s where the trade-off becomes apparent. Although R has a web framework that allows you to build data-driven dashboards, you wouldn’t use R to build an e-commerce site or an automated customer service agent; those are tasks for which Python is well suited.

Is it completely out of bounds to say that Python is a low-code language? Perhaps; but it certainly requires much less coding than the languages of the 1960s and ’70s. Like Excel, though not as successfully, Python has made it possible for people to work with computers who would never have learned C or C++. (The same claim could probably be made for BASIC, and certainly for Visual Basic.)

But this makes it possible for us to talk about an even more outlandish meaning of low-code. Configuration files for large computational systems, such as Kubernetes, can be extremely complex. But configuring a tool is almost always simpler than writing the tool yourself. Kelsey Hightower said that Kubernetes is the “sum of all the bash scripts and best practices that most system administrators would cobble together over time”; it’s just that many years of experience have taught us the limitations of endless scripting. Replacing a huge and tangled web of scripts with a few configuration files certainly sounds like low-code. (You could object that Kubernetes’ configuration language isn’t Turing complete, so it’s not a programming language. Be that way.) It enables operations staff who couldn’t write Kubernetes from scratch, regardless of the language, to create configurations that manage very complicated distributed systems in production.
What’s the ratio—a few hundred lines of Kubernetes configuration, compared to a million lines of Go, the language Kubernetes was written in? Is that low-code? Configuration languages are rarely simple, but they’re always simpler than writing the program you’re configuring.

As examples go, Kubernetes isn’t all that unusual. It’s an example of a “domain-specific language” (DSL) constructed to solve a specific kind of problem. DSLs enable someone to get a task done without having to describe the whole process from scratch, in immense detail. If you look around, there’s no shortage of domain-specific languages. Ruby on Rails was originally described as a DSL. COBOL was a DSL before anyone really knew what a DSL was. And so are many mainstays of Unix history: awk, sed, and even the Unix shell (which is much simpler than using old IBM JCLs to run a program). They all make certain programming tasks simpler by relying on a lot of code that’s hidden in libraries, runtime environments, and even other programming languages. And they all sacrifice generality for ease of use in solving a specific kind of problem.

So, now that we’ve broadened the meaning of low-code to include just about everything, do we give up? For the purposes of this report, we’re probably best off looking at the narrowest and most likely implementation of low-code technology and limiting ourselves to the first, Excel-like meaning of “low-code”—but remembering that the history of programming is the history of enabling people to do more with less, enabling people to work with computers without requiring as much formal education, adding layer upon layer of abstraction so that humans don’t need to understand the 0s and the 1s.

So Python is low-code. Kubernetes is low-code. And their successors will inevitably be even lower-code; a lower-code version of Kubernetes might well be built on top of the Kubernetes API.
Mirantis has taken a step in that direction by building an Integrated Development Environment (IDE) for Kubernetes. Can we imagine a spreadsheet-like (or even graphical) interface to Kubernetes configuration? We certainly can, and we’re fine with putting Python to the side. We’re also fine with putting Kubernetes aside, as long as we remember that DSLs are an important part of the low-code picture: in Paul Ford’s words, tools to help users do whatever “makes the computer go.”

### Excel (And Why It Works)

Excel deservedly comes up in any discussion of low-code programming. So it’s worth looking at what it does (and let’s willfully ignore Excel’s immediate ancestors, VisiCalc and Lotus). Why has Excel succeeded?

One important difference between spreadsheets and traditional programming languages is so obvious that it’s easily overlooked. Spreadsheets are “written” on a two-dimensional grid (Figure 1). Every other programming language in common use is a list of statements: a list of instructions that are executed more or less sequentially.

Figure 1. A Microsoft Excel grid (source: Python for Excel)

What’s a 2D grid useful for? Formatting, for one thing. It’s great for making tables. Many Excel files do that—and no more. There are no formulas, no equations, just text (including numbers) arranged into a grid and aligned properly. By itself, that is tremendously enabling. Add the simplest of equations, and built-in understanding of numeric datatypes (including the all-important financial datatypes), and you have a powerful tool for building very simple applications: for example, a spreadsheet that sums a bunch of items and computes sales tax to do simple invoices. A spreadsheet that computes loan payments. A spreadsheet that estimates the profit or loss (P&L) on a project.

All of these could be written in Python, and we could argue that most of them could be written in Python with less code. However, in the real world, that’s not how they’re written.
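For comparison, here is what one of those "very simple applications" looks like as conventional code; a minimal JavaScript sketch, with the line items and tax rate made up for illustration:

```javascript
// The invoice spreadsheet described above, as code: sum the line items,
// then apply a sales tax rate. Items and the rate are illustrative.
const TAX_RATE = 0.08; // hypothetical 8% sales tax

function invoiceTotal(items) {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.qty, 0);
  const tax = subtotal * TAX_RATE;
  return { subtotal, tax, total: subtotal + tax };
}
```

The arithmetic is trivial in either medium; what the spreadsheet adds, as the next paragraph argues, is that the formatting and layout come for free.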
Formatting is a huge value, in and of itself. (Have you ever tried to make output columns line up in a “real” programming language? In most programming languages, numbers and text are formatted using an arcane and non-intuitive syntax. It’s not pretty.) The ability to think without loops and a minimal amount of programming logic (Excel has a primitive IF statement) is important. Being able to structure the problem in two or three dimensions (you get a third dimension if you use multiple sheets) is useful, but most often, all you need to do is SUM a column.

If you do need a complete programming language, there’s always been Visual Basic—not part of Excel strictly speaking, but that distinction really isn’t meaningful. With the recent addition of LAMBDA functions, Excel is now a complete programming language in its own right. And Microsoft recently released Power Fx as an Excel-based low-code programming language; essentially, it’s Excel equations with something that looks like a web application replacing the 2D spreadsheet.

Making Excel a 2D language accomplished two things: it gave users the ability to format simple tables, which they really cared about; and it enabled them to think in columns and rows. That’s not sophisticated, but it’s very, very useful. Excel gave a new group of people the ability to use computers effectively. It’s been too long since we’ve used the phrase “become creative,” but that’s exactly what Excel did: it helped more people to become creative. It created a new generation of “citizen programmers” who never saw themselves as programmers—just more effective users.

That’s what we should expect of a low-code language. It isn’t about the amount of code. It’s about extending the ability to create to more people by changing paradigms (1D to 2D), eliminating hard parts (like formatting), and limiting what can be done to what most users need to do. This is democratizing.
### UML

UML (Unified Modeling Language) was a visual language for describing the design of object-oriented systems. UML was often misused by programmers who thought that UML diagrams somehow validated a design, but it gave us something that we didn’t have, and arguably needed: a common language for scribbling software architectures on blackboards and whiteboards. The architects who design buildings have a very detailed visual language for blueprints: one kind of line means a concrete wall, another wood, another wallboard, and so on. Programmers wanted to design software with a visual vocabulary that was equally rich.

It’s not surprising that vendors built products to compile UML diagrams into scaffolds of code in various programming languages. Some went further to add an “action language” that turned UML into a complete programming language in its own right. As a visual language, UML required different kinds of tools: diagram editors, rather than text editors like Emacs or vi (or Visual Studio). In modern software development processes, you’d also need the ability to check the UML diagrams themselves (not the generated code) into some kind of source management system; i.e., the important artifact is the diagram, not something generated from the diagram.

But UML proved to be too complex and heavyweight. It tried to be everything to everybody: both a standard notation for high-level design and a visual tool for building software. It’s still used, though it has fallen out of favor.

Did UML give anyone a new way of thinking about programming? We’re not convinced that it did, since programmers were already good at making diagrams on whiteboards. UML was of, by, and for engineers, from the start. It didn’t have any role in democratization. It reflected a desire to standardize notations for high-level design, rather than rethink it. Excel and other spreadsheets enabled more people to be creative with computers; UML didn’t.
### LabVIEW

LabVIEW is a commercial system that’s widely used in industry—primarily in research & development—for data collection and automation. The high-school FIRST Robotics program depends heavily on it. The visual language that LabVIEW is built on is called G, and doesn’t have a textual representation. The dominant metaphor for G is a control panel or dashboard (or possibly an entire laboratory). Inputs are called “controls”; outputs are called “indicators.” Functions are “virtual instruments,” and are connected to each other by “wires.” G is a dataflow language, which means that functions run as soon as all their inputs are available; it is inherently parallel.

It’s easy to see how a non-programmer could create software with LabVIEW doing nothing more than connecting together virtual instruments, all of which come from a library. In that sense, it’s democratizing: it lets non-programmers create software visually, thinking only about where the data comes from and where it needs to go. And it lets hardware developers build abstraction layers on top of FPGAs and other low-level hardware that would otherwise have to be programmed in languages like Verilog or VHDL.

At the same time, it is easy to underestimate the technical sophistication required to get a complex system working with LabVIEW. It is visual, but it isn’t necessarily simple. Just as in Fortran or Python, it’s possible to build complex libraries of functions (“virtual instruments”) to encapsulate standard tasks. And the fact that LabVIEW is visual doesn’t eliminate the need to understand, in depth, the task you’re trying to automate, and the hardware on which you’re automating it.

As a purely visual language, LabVIEW doesn’t play well with modern tools for source control, automated testing, and deployment. Still, it’s an important (and commercially successful) step away from the traditional programming paradigm. You won’t see lines of code anywhere, just wiring diagrams (Figure 2).
Like Excel, LabVIEW provides a different way of thinking about programming. It’s still code, but it’s a different kind of code, code that looks more like circuit diagrams than punch cards.

Figure 2. An example of a LabVIEW schematic diagram (source: JKI)

### Copilot

There has been a lot of research on using AI to generate code from human descriptions. GPT-3 has made that work more widely visible, but it’s been around for a while, and it’s ongoing. We’ve written about using AI as a partner in pair programming. While we were writing this report, Microsoft, OpenAI, and GitHub announced the first fruit of this research: Copilot, an AI tool that was trained on all the public code in GitHub’s codebase. Copilot makes suggestions while you write code, generating function bodies based on descriptive comments (Figure 3).

Copilot turns programming on its head: rather than writing the code first, and adding comments as an afterthought, start by thinking carefully about the problem you want to solve and describing what the components need to do. (This inversion has some similarities to test-driven and behavior-driven development.) Still, this approach raises the question: how much work is required to find a description that generates the right code? Could technology like this be used to build a real-world project, and if so, would that help to democratize programming?

It’s a fair question. Programming languages are precise and unambiguous, while human languages are by nature imprecise and ambiguous. Will compiling human language into code require a significant body of rules to make it, essentially, a programming language in its own right? Possibly. But on the other hand, Copilot takes on the burden of remembering syntax details, getting function names right, and many other tasks that are fundamentally just memory exercises.

Figure 3. GitHub’s Copilot in action (source: Copilot)

Salvatore Sanfilippo (@antirez) touched on this in a Twitter thread, saying “Every task Copilot can do for you is a task that should NOT be part of modern programming.” Copilot doesn’t just free you from remembering syntax details, what functions are stashed in a library you rarely use, or how to implement some algorithm that you barely remember. It eliminates the boring drudgery of much of programming—and, let’s admit it, there’s a lot of that. It frees you to be more creative, letting you think more carefully about the task you’re doing, and how best to perform it. That’s liberating—and it extends programming to those who aren’t good at rote memory, but who are experts (“subject matter experts”) in solving particular problems.

Copilot is in its very early days; it’s called a “Technical Preview,” not even a beta. It’s certainly not problem-free. The code it generates is often incorrect (though you can ask it to create any number of alternatives, and one is likely to be correct). But it will almost certainly get better, and it will probably get better fast. When the code works, it’s often low-quality; as Jeremy Howard writes, language models reflect an average of how people use language, not great literature. Copilot is the same. But more importantly, as Howard says, most of a programmer’s work isn’t writing new code: it’s designing, debugging, and maintaining code. To use Copilot well, programmers will have to realize the trade-off: most of the work of programming won’t go away. You will need to understand, at a higher level, what you’re trying to do. For Sanfilippo, and for most good or great programmers, the interesting, challenging part of programming comes in that higher-level work, not in slinging curly braces.
By reducing the labor of writing code, allowing people to focus their effort on higher-level thought about what they want to do rather than on syntactic correctness, Copilot will certainly make creative computing possible for more people. And that’s democratization.

### Glitch

Glitch, which has become a compelling platform for developing web applications, is another alternative. Glitch claims to return to the copy/paste model from the early days of web development, when you could “view source” for any web page, copy it, and make any changes you want. That model doesn’t eliminate code, but it offers a different approach to understanding coding. It reduces the amount of code you write, and this in itself is democratizing, because it enables more people to accomplish things more quickly. Learning to program isn’t fun if you have to work for six months before you can build something you actually want. Glitch gets you interacting with code that’s already written and working from the start (Figure 4); you don’t have to stare at a blank screen and invent all the technology you need for the features you want. And it’s completely portable: Glitch code is just HTML, CSS, and JavaScript stored in a GitHub archive. You can take that code, modify it, and deploy it anywhere; you’re not stuck with a proprietary Glitch app. Anil Dash, Glitch’s CEO, calls this “Yes code,” affirming the importance of code. Great artists steal from each other, and so do great coders; Glitch is a platform that facilitates stealing, in all the best ways.

Figure 4. Glitch’s prepopulated, comment-heavy React web application, which guides the user to using its code (source: Glitch)

### Forms and Templates

Finally, many low-code platforms make heavy use of forms. This is particularly common among business intelligence (BI) platforms. You could certainly argue that filling in a form isn’t low-code at all, it’s just using a canned app; but think about what’s happening.
The fields in the form are typically a template for filling in a complex SQL statement. A relational database executes that statement, and the results are formatted and displayed for the users. This is certainly democratizing: SQL expertise isn’t expected of most managers—or, for that matter, of most programmers. BI applications unquestionably allow people to do what they couldn’t do otherwise. (Anyone at O’Reilly can look up detailed sales data in O’Reilly’s BI system, even those of us who have never learned SQL or written programs in any language.) Painlessly formatting the results, including visualizations, is one of the qualities that made Excel revolutionary.

Similarly, low-code platforms for building mobile and web apps—such as Salesforce, Webflow, Honeycode, and Airtable—use templated approaches to give non-programmers drag-and-drop solutions for creating everything from consumer-facing apps to internal workflows. They purport to be customizable, but what you can build is ultimately bounded by the offerings and capabilities of each particular platform.

But do these templating approaches really allow a user to become creative? That may be the more important question, and templates arguably don’t. They allow the user to create one of a number (possibly a large number) of previously defined reports; they rarely allow a user to create a new report without significant programming skills. In practice, regardless of how simple it may be to create a report, most users don’t go out of their way to create new ones. The problem isn’t that templating approaches are “ultimately finite”—that trade-off of limitations against ease comes with almost any low-code approach, and some template builders are extremely flexible. It’s that, unlike Excel, LabVIEW, and Glitch, these tools don’t really offer new ways to think about problems.

It’s worth noting—in fact, it’s absolutely essential to note—that these low-code approaches rely on huge amounts of traditional code.
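The form-to-SQL pattern described above can be sketched in a few lines. This is a hypothetical miniature of what a BI tool does behind the scenes: the "form" is just a dict of field values, and the tool interpolates them into a parameterized SQL statement. The table, columns, and function names here are invented for the example.

```python
import sqlite3

# Hypothetical sketch: a BI "form" is a dict of field values that gets
# templated into a parameterized SQL statement and executed for the user.
def sales_report(conn, form):
    sql = ("SELECT region, SUM(amount) AS total "
           "FROM sales WHERE year = ? AND region = ? "
           "GROUP BY region")
    return conn.execute(sql, (form["year"], form["region"])).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, year INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("west", 2021, 100.0), ("west", 2021, 50.0),
                  ("east", 2021, 75.0)])
print(sales_report(conn, {"year": 2021, "region": "west"}))
```

A real BI platform adds query builders, caching, and visualization on top, but the democratizing core is the same: the user fills in fields and never sees the SQL.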
Even LabVIEW: it may be completely visual, but LabVIEW and its G language were implemented in a traditional programming language. What these tools are really doing is allowing people with minimal coding skills to make connections between libraries. They enable people to work by connecting things together, rather than building the things that are being connected. That will turn out to be very important, as we’ll start to examine next.

## Rethinking the Programmer

Programmers have cast themselves as gurus and rockstars, or as artisans, and to a large extent have resisted democratization. In the web space, that has been very explicit: people who use HTML and CSS, but not sophisticated JavaScript, are “not real programmers.” It’s almost as if the evolution of the web from a Glitch-like world of copy and paste towards complex web apps took place with the intention of forcing out the great unwashed and creating an underclass of the coding-disabled.

Low-code and no-code are about democratization, about extending the ability to be creative with computers and creating new citizen programmers. We’ve seen that it works in two ways. On the low end (as with Excel), it allows people with no formal programming background to perform computational tasks. Perhaps more significantly, Excel (and similar tools) allow a user to gradually work up the ladder to more complex tasks: from simple formatting to spreadsheets that do computation, to full-fledged programming.

Can we go further? Can we enable subject matter experts to build sophisticated applications without needing to communicate their understanding to a group of coders? At the Strata Data Conference in 2019, Jeremy Howard discussed an AI application for classifying burns. This deep-learning application was trained by a dermatologist—a subject matter expert—who had no knowledge of programming.
All the major cloud providers have services for automating machine learning, and there’s an ever-increasing number of AutoML tools that aren’t tied to a specific provider. Eliminating the knowledge transfer between the SME and the programmer by letting SMEs build the application themselves is the shortest route to building better software. On the high end, the intersection between AI and programming promises to make skilled programmers more productive by making suggestions, detecting bugs and vulnerabilities, and writing some of the boilerplate code itself. IBM is trying to use AI to automate translations between different programming languages; we’ve already mentioned Microsoft’s work on generating code from human-language descriptions of programming tasks, culminating in its Copilot project. This technology is still in its very early days, but it has the potential to change the nature of programming radically.

These changes suggest that there’s another way of thinking about programmers. Let’s borrow the distinction between “blue-” and “white-collar” workers. Blue-collar programmers connect things; white-collar programmers build the things to be connected. This is similar to the distinction between the person who installs or connects household appliances and the person who designs them. You wouldn’t want your plumber designing your toilet; but likewise, you wouldn’t want a toilet designer (who wears a black turtleneck and works in a fancy office building) to install the toilet they designed.

This model is hardly a threat to the industry as it’s currently institutionalized. We will always need people to connect things; that’s the bulk of what web developers do now, even those working with frameworks like React.js. In practice, there has been—and will continue to be—a lot of overlap between the “tool designer” and “tool user” roles. That won’t change. The essence of low-code is that it allows more people to connect things and become creative.
We must never undervalue that creativity, but likewise, we have to understand that more people connecting things—managers, office workers, executives—doesn’t reduce the need for professional tools, any more than 3D printers reduced the need for manufacturing engineers. The more people who are capable of connecting things, the more things need to be connected. Programmers will be needed to build everything from web widgets to the high-level tools that let citizen programmers do their work. And many citizen programmers will see ways for tools to be improved, or have ideas about new tools that will help them become more productive, and will start to design and build their own.

## Rethinking Programmer Education

Once we make the distinction between blue- and white-collar programmers, we can talk about what kinds of education are appropriate for the two groups. A plumber goes to a trade school and serves an apprenticeship; a designer goes to college, and may serve an internship. How does this compare to the ways programmers are educated?

As complex as modern web frameworks like React.js may be (and we suspect they’re a very programmerly reaction against democratization), you don’t need a degree to become a competent web developer. The educational system is beginning to shift to take this into account. Boot camps (a format probably originating with Gregory Brown’s Ruby Mendicant University) are the programmer’s equivalent of trade schools, and many facilitate internships and initial jobs. Many students at boot camps already have degrees in a non-technical field, or in a technical field that’s not related to programming.

Computer science majors in colleges and universities provide the “designer” education, with a focus on theory and algorithms. Artificial intelligence is a subdiscipline that originated in academia, and is still driven by academic research.
So are disciplines like bioinformatics, which straddles the boundaries between biology, medicine, and computer science. Programs like Data Carpentry and Software Carpentry (two of the three organizations that make up “The Carpentries”) cater specifically to graduate students who want to improve their data or programming skills.

This split matches a reality that we’ve always known. You’ve never needed a four-year computer science degree to get a programming job; you still don’t. There are many, many programmers who are self-taught, and some startup executives who never entered college (let alone finished it); as one programmer who left a senior position to found a successful startup once said in conversation, “I was making too much money building websites when I was in high school.” No doubt some of those who never entered college have made significant contributions in algorithms and theory.

Boot camps and four-year institutions both have weaknesses. Traditional colleges and universities pay little attention to the parts of the job that aren’t software development—teamwork, testing, agile processes—as well as areas of software development that are central to the industry now, such as cloud computing. Students need to learn how to use databases and operating systems effectively, not design them. Boot camps, on the other hand, range from the excellent to the mediocre. Many go deep on a particular framework, like Rails or React.js, but don’t give students a broader introduction to programming. Many engage in ethically questionable practices around payment (boot camps aren’t cheap) and job placement. Picking a good boot camp may be as difficult as choosing an undergraduate college.

To some extent, the weaknesses of boot camps and traditional colleges can be helped through apprenticeships and internships. However, even that requires care: many companies use the language of agile and CI/CD, but have only renamed their old, ineffective processes.
How can interns be placed in positions where they can learn modern programming practices, when the companies in which they’re placed don’t understand those practices? That’s a critical problem, because we expect that trained programmers will, in effect, be responsible for bringing these practices to the low-code programmers.

Why? The promise is that low-code allows people to become productive and creative with little or no formal education. We aren’t doing anyone a service by sneaking educational requirements in through the back door: “You don’t have to know how to program, but you do have to understand deployment and testing” misses the point. But deployment and testing are also essential if we want software built by low-code developers to be reliable and deployable—and if software created by citizen programmers can’t be deployed, “democratization” is a fraud.

That’s another place where professional software developers fit in. We will need people who can create and maintain the pipelines by which software is built, tested, archived, and deployed. Those tools already exist for traditional code-heavy languages, but new tools will be needed for low-code frameworks. And the programmers who create and maintain those tools will need to have experience with current software development practices. They will become the new teachers, teaching everything about computing that isn’t coding.

Education doesn’t stop there; good professionals are always learning. Acquiring new skills will be a part of both the blue-collar and white-collar programmer experience well beyond the arrival of pervasive low-code.

## Rethinking the Industry

If programmers change, so will the software industry. We see three changes. In the last 20 years, we’ve learned a lot about managing the software development process. That’s an intentionally vague phrase that includes everything from source management (which has a history going back to the 1970s) to continuous deployment pipelines.
And we have to ask: if useful work is coming from low-code developers, how do we maintain it? What does GitHub for Excel, LabVIEW, or GPT-3 look like? When something inevitably breaks, what will debugging and testing look like for low-code programs? What does continuous delivery mean for applications written with SAP or PageMaker?

Glitch, Copilot, and Microsoft’s Power Fx are the only low-code systems we’ve discussed that can answer this question right now. Glitch fits into CI/CD practice because it’s a system for writing less code and copying more, so it’s compatible with our current tooling. Likewise, Copilot helps you write code in a traditional programming language that works well with CI/CD tools. Power Fx fits because it’s a traditional text-based language: Excel formulas without the spreadsheet. (It’s worth noting that Excel’s .xlsx files aren’t amenable to source control, nor do they have great tools for debugging and testing, which are a standard part of software development.) Extending fundamental software development practices like version control, automated testing, and continuous deployment to other low-code and no-code tools sounds like a job for programmers, and one that’s still on the to-do list.

Making tool designers and builders more effective will undoubtedly lead to new and better tools. That almost goes without saying. But we hope that if coders become more effective, they will spend more time thinking about the code they write: how it will be used, what problems they are trying to solve, what ethical questions those problems raise, and so on. This industry has no shortage of badly designed and ethically questionable products. Rather than rushing a product into release without considering its implications for security and safety, perhaps making software developers more effective will let them spend more time thinking about these issues up front, and during the process of software development.
Finally, an inevitable shift in team structure will occur across the industry, allowing programmers to focus on solving with code what low-code solutions can’t solve, and ensuring that what is solved through low-code solutions is carefully monitored and corrected. Just as spreadsheets can be buggy, and an errant decimal or bad data point can sink businesses and economies, buggy low-code programs built by citizen programmers could just as easily cause significant headaches. Collaboration—not further division—between programmers and citizen programmers within a company will ensure that low-code solutions are productive, not disruptive, as programming becomes further democratized. Rebuilding teams with this kind of collaboration and governance in mind could increase productivity for companies large and small—affording smaller companies that can’t afford specialization the ability to diversify their applications, and allowing larger companies to build more impactful and ethical software.

## Rethinking Code Itself

Still, when we look at the world of low-code and no-code programming, we feel a nagging disappointment. We’ve made great strides in producing libraries that reduce the amount of code programmers need to write; but it’s still programming, and that’s a barrier in itself. We’ve seen limitations in other low-code or no-code approaches; they’re typically “no code until you need to write code.” That’s progress, but only progress of a sort. Many of us would rather program in Python than in PL/I or Fortran, but that’s a difference of quality, not of kind. Are there any ways to rethink programming at a fundamental level? Can we ever get beyond 80-character lines that, no matter how good our IDEs and refactoring tools might be, are really just virtual punch cards? Here are a few ideas.

Bret Victor’s Dynamicland represents a complete rethinking of programming.
It rejects the notion of programming with virtual objects on laptop screens; it’s built upon the idea of working with real-world objects, in groups, without the visible intermediation of computers. People “play” with objects on a tabletop; sensors detect and record what they’re doing with the objects. The way the objects are arranged becomes the program. It’s more like playing with Lego blocks (in real life, not some virtual world), or with paper and scissors, than the programming we’ve become accustomed to. And the word “play” is important: Dynamicland is all about reenvisioning computing as play rather than work. It’s the most radical attempt at no-code programming that we’ve seen. Dynamicland is a “50-year project.” At this point, we’re 6 years in: only at the beginning. Is it the future? We’ll see.

If you’ve followed quantum computing, you may have seen quantum circuit notation (shown in Figure 5), a way of writing quantum programs that looks sort of like music: a staff composed of lines representing qubits, with operations connecting those lines. We’re not going to discuss quantum programming, but we find this notation suggestive for other reasons. Could it represent a different way to look at the programming enterprise? Kevlin Henney has talked about programming as managing space and time. Traditional programming languages are (somewhat) good about space; languages like C, C++, and Java require you to define datatypes and data structures. But we have few tools for managing time, and (unsurprisingly) it’s hard to write concurrent code. Music is all about time management. Think of a symphony and the 100 or so musicians as independent “threads” that have to stay synchronized—or think of a jazz band, where improvisation is central but synchronization remains a must. Could a music-aware notation (such as Sonic Pi) lead to new ways of thinking about concurrency? And would such a notation be more approachable than virtual punch cards?
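The symphony metaphor can be sketched with ordinary threads. In this illustrative toy (the names and structure are ours, not from any particular notation), each musician is a thread and a Barrier plays conductor: no thread starts a new "bar" until every thread has finished the previous one, which is exactly the time-management problem musical notation handles so gracefully.

```python
import threading

# Toy "symphony": each musician is a thread; a Barrier acts as the
# conductor, forcing all threads to begin each bar at the same moment.
def play(name, barrier, bars, log, lock):
    for bar in range(bars):
        barrier.wait()              # everyone enters the bar together
        with lock:
            log.append((bar, name))

musicians = ["violin", "cello", "oboe"]
barrier = threading.Barrier(len(musicians))
lock = threading.Lock()
log = []
threads = [threading.Thread(target=play, args=(m, barrier, 2, log, lock))
           for m in musicians]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Within a bar the ordering is nondeterministic, but because every
# thread must reach the barrier before the next bar begins, bars
# never interleave.
print(sorted(log))
```

Expressing this with barriers and locks takes real care; a notation where synchronization is the default, as it is in a musical score, is exactly what the passage above is wishing for.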
This rethinking will inevitably fail if it tries too literally to replicate staves, note values, clefs, and such; but it may be a way to free ourselves from thinking about business as usual.

Figure 5. Quantum circuit notation (source: Programming Quantum Computers)

Here’s an even more radical thought. At an early Biofabricate conference, a speaker from Microsoft was talking about tools for programming DNA. He said something mind-blowing: we often say that DNA is a “programming language,” but it has control structures that are unlike anything in our current programming languages. It’s not clear that those control structures are representable in text. Our present notion of computation—and, for that matter, of what’s “computable”—derives partly from the Turing machine (a thought experiment) and Von Neumann’s notion of how to build such a machine. But are there other kinds of machines? Quantum computing says so; DNA says so. What are the limits of our current understanding of computing, and what kinds of notation will it take to push beyond those limits?

Finally, programming has been dominated by English speakers, and programming languages are, with few exceptions, mangled variants of English. What would programming look like in other languages? There are programming languages in a number of non-English languages, including Arabic, Chinese, and Amharic. But the most interesting is the Cree# language, because it isn’t just an adaptation of a traditional programming language. Cree# tries to reenvision programming in terms of the indigenous American Cree culture, which revolves around storytelling. It’s a programming language for stories, built around the logic of stories. And as such, it’s a different way of looking at the world.
That way of looking at the world might seem like an arcane curiosity (and currently Cree# is considered an “esoteric programming language”); but one of the biggest problems facing the artificial intelligence community is developing systems that can explain the reason for a decision. And explanation is ultimately about storytelling. Could Cree# provide better ways of thinking about algorithmic explainability?

## Where We’ve Been and Where We’re Headed

Does a new way of programming increase the number of people who are able to be creative with computers? It has to. In “The Rise of the No Code Economy”, the authors write that relying on IT departments and professional programmers is unsustainable. We need to enable people who aren’t programmers to develop the software they need. We need to enable people to solve their own computational problems. That’s the only way “digital transformation” will happen.

We’ve talked about digital transformation for years, but relatively few companies have done it. One lesson to take from the COVID pandemic is that every business has to become an online business. When people can’t go into stores and restaurants, everything from the local pizza shop to the largest retailers needs to be online. When everyone is working at home, they are going to want tools to optimize their work time. Who is going to build all that software? There may not be enough programming talent to go around. There may not be enough of a budget to go around (think about small businesses that need to transact online). And there certainly won’t be the patience to wait for a project to work its way through an overworked IT department.

Forget about yesterday’s arguments over whether everyone should learn to code. We are entering a business world in which almost everyone will need to code—and low-, no-, and yes-code frameworks are necessary to enable that.
To enable businesses and their citizen programmers to be productive, we may see a proliferation of DSLs: domain-specific languages designed to solve specific problems. And those DSLs will inevitably evolve towards general-purpose programming languages: they’ll need web frameworks, cloud capabilities, and more.

“Enterprise low-code” isn’t all there is to the story. We also have to consider what low-code means for professional programmers. Doing more with less? We can all get behind that. But for professional programmers, “doing more with less” won’t mean using a templating engine and a drag-and-drop interface builder to create simple database applications. Those tools inevitably limit what’s possible—that’s precisely why they’re valuable. Professional programmers will be needed to do what the low-code users can’t: they’ll build new tools, and make the connections between those tools and the old ones. Remember that the amount of “glue code” that connects things rises as the square of the number of things being connected, and that most of the work involved in gluing components together is data integration, not just managing formats. Anyone concerned about computing jobs drying up should stop worrying; low-code will inevitably create more work rather than less.

There’s another side to this story, though: what will the future of programming look like? We’re still working with paradigms that haven’t changed much since the 1950s. As Kevlin Henney pointed out in conversation, most of the trendy new features in programming languages were actually invented in the 1970s: iterators, foreach loops, multiple assignment, coroutines, and many more. A surprising number of these go back to the CLU language from 1975. Will we continue to reinvent the past, and is that a bad thing? Are there fundamentally different ways to describe what we want a computer to do, and if so, where will those come from?
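The claim about glue code can be made concrete with a little counting: if a system has n components and any pair of them may need to be glued together, there are n(n-1)/2 possible connections, which grows quadratically. A trivial sketch:

```python
def pairwise_connections(n):
    # n components, each potentially glued to every other one:
    # the number of unordered pairs is n * (n - 1) / 2, i.e. O(n^2).
    return n * (n - 1) // 2

for n in (2, 4, 8, 16):
    print(n, pairwise_connections(n))
```

Doubling the number of connected things roughly quadruples the potential glue, which is why "more people connecting things" means more work for programmers, not less.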
We started with the idea that the history of programming is the history of “less code”: finding better abstractions and building libraries to implement those abstractions—and that progress will certainly continue. It will certainly be aided by tools like Copilot, which will enable subject matter experts to develop software with less help from professional programmers. AI-based coding tools might not generate “less” code—but humans won’t be writing it. Instead, they’ll be thinking about and analyzing the problems they need to solve.

But what happens next? A tool like Copilot can handle a lot of the “grunt work” that’s part of programming, but it’s (so far) built on the same set of paradigms and abstractions. Python is still Python. Linked lists and trees are still linked lists and trees, and getting concurrency right is still difficult. Are the abstractions we inherited from the past 70 years adequate to a world dominated by artificial intelligence and massively distributed systems? Probably not. Just as the two-dimensional grid of a spreadsheet allows people to think outside the box defined by lines of computer code, and just as the circuit diagrams of LabVIEW allow engineers to envision code as wiring diagrams, what will give us new ways to be creative? We’ve touched on a few: musical notation, genetics, and indigenous languages. Music is important because musical scores are all about synchronization at scale; genetics is important because of control structures that can’t be represented by our ancient IF and FOR statements; and indigenous languages help us to realize that human activity is fundamentally about stories. There are, no doubt, more.

Is low-code the future—a “better abstraction”? We don’t know, but it will almost certainly enable different code.

We would like to thank the following people whose insight helped inform various aspects of this report: Daniel Bryant, Anil Dash, Paul Ford, Kevlin Henney, Danielle Jobe, and Adam Olshansky.
### 11:42

"I work with very bad developers," writes Henry. It's a pretty simple example of some bad code:

```
If Me.ddlStatus.SelectedItem.Value = 2 Then Dim statuscode As...
```

### 05:42

### 00:49

#### Guile-CV version 0.3.1 is released! (November 2021)

This is a maintenance release, which fixes a bug in the pre-inst-env script, which is used by Guile-CV at build time and may also be used to test and run an uninstalled Guile-CV instance.

##### Changes since the previous version

For a list of changes since the previous version, visit the NEWS file. For a complete description, consult the git summary and git log.

The pre-inst-env.in script has been fixed and now adds a specific build-time library location, so Guile 3.0 (>= 3.0.6) finds libguile-cv at build time. Guile 3.0 (>= 3.0.6) does not use libltdl anymore, and although in Guile's (system foreign-library) module both the comment and the definition of augment-ltdl-library-path seem to take care of finding .so files either alongside .la files or in a .libs/ subdir, it does not work.

## Monday, 15 November

### 21:07

Dear all,

We're happy to announce that the documentation of Thalamus, the GNU Health Federation message and authentication server, now resides at the GH documentation portal!

In addition to the portal, we are migrating all the documentation to reStructuredText. Having the documentation in RST format makes collaboration simpler. Users will also be able to access it even without Internet connectivity.

MyGNUHealth (the Personal Health Record) and Thalamus have already been migrated and translated into different languages (Spanish, German, French). The GH Hospital Management System component is in the process of migration. This requires quite a bit of work, updating the documentation and images from 40+ packages. At the end we'll have a very nice, structured GNU Health documentation portal!

Happy and healthy hacking!
Luis

### 19:00

### 17:28

### 12:00

Mike fired up a local copy of his company's Java application and found out that, at least running locally, the login didn't work. Since the available documentation didn't make it clear...

### 06:49

The post 1557 appeared first on Looking For Group.

### 05:07

### 02:56

I can't believe the year is almost over.

### 00:35

About this Web 3 thing. Listen. I understood the Web in about a minute. Web 2.0, I played a role in defining and developing it. So you could say I got it. But Web 3? Why did they choose a name that promises so much, and more than anything -- simplicity? The idea of anything called "Web" being opaque, sounding like a scam, a VC's wet dream -- that's so counter to what the word means in tech. I shouldn't have to read a few white papers and still not get it. Look, my intuition says that the web is the web is the web and there is no Web 3, there's just the web. I don't like that they usurped the name of something so simple and precious for something that looks scammy. I think they should have chosen a different name.

I wrote this thread on Twitter this morning.

• I’ve had a Peloton for a couple of weeks.

• I got it so I could exercise when the weather is too nasty to go out for a ride or a walk.

• I’ve done classes and rides with no content. I don’t like the classes. The impersonal “you can do it” motivation is a real turn-off. Who the heck are you? You have no clue who I am. Ugh.

• Why not have classes where you learn something? Then I’d look forward to it, instead of dreading it.

• I’m going to keep trying with the classes. Basically I am already motivated, I love the high I get from exercise. But I wouldn’t mind using the time to learn stuff.

Then this evening I took a class with a teacher I really liked. And it made all the difference. I got a better workout, and I had fun. I even talked back a few times. Totally got into it. Weird.
Peloton

## Sunday, 14 November

### 15:14

Jeff Jarvis generously gave me support for the idea of using rivers in news orgs. I want to clarify that:

• I appreciate the support. 😄

• I don't think I've ever "begged," though at times I imagine it might have seemed that way.

• And I haven't been promoting rivers to them for quite some time, because I think the problems are much bigger now.

I think the most important thing news orgs can do to improve their service is to include ideas and perspectives from outside the newsroom, especially those that are critical of their work. I would encourage a news org to run a river of sources their reporters consider authoritative: blogs, newsletters, and whatever new forms come up in the future. This would initially be an internal resource, but the obvious next step is to share the resource with their readers, in the interest of increasing news flow, broadening perspectives, and also introducing transparency. And perhaps most important, to introduce criticism of their work to their flow, something we need and they need. If I were going to beg for anything, that would most likely be it. I wrote about how this would work in more detail in May 2015.

When I read yet another NYT article or op-ed about how Facebook is letting us down, I keep wondering when they're going to look at how they contribute to this problem, how their coverage of Hillary's emails, for example, gave Trump the White House and set the country on the path to authoritarianism. I think they did far more damage than Facebook, and not surprisingly, that has not been examined to anywhere near the extent Facebook has. This is a serious problem, and it's not going to get better in the current news system. That's where rivers could make a huge difference, if they cared enough about the service they provide readers.

PS: This is what a River of News looks like.

### 05:14

Shaenon: I like coffee, I like old-timey advertisements, so this month I’m drawing old-timey coffee advertisements.
The coffeepots balanced on Moustachio’s hat are based on my little Moka pot, which I got during lockdown when I couldn’t go out for fancy espresso drinks. It’s adorable, it makes amazing coffee, and it cost about fifteen bucks, all qualities of a brilliant invention.

As usual, if you make a donation in any amount to the Skin Horse Tip Jar, or contribute any amount to our Patreon, we’ll give you a link to this wallpaper, designed for two computer desktop sizes and cell phones. Patreon contributors will continue to receive new wallpaper for the length of their contribution. As a bonus, you’ll also receive last year’s November wallpaper, one of my favorite things I’ve drawn:

Channing: I have nothing witty to say other than that these all look so dang good and I would drink the heck out of all of them just to see if the inside was as good as the packaging.

## Saturday, 13 November

### 14:42

Someone once said that music is bass and drums, and all the rest is just dressing up.

### 09:07

Hi, all Common Lispers.

In the previous article, I introduced the management of Lisp implementations with Roswell. One of the readers asked me how to install Roswell itself. Sorry, I forgot to mention it. Please look into the official article at the GitHub Wiki. Even on Windows, it has recently become possible to install it with a single command. Quite easy.

Today, I'm going to continue with Roswell: the installation of Common Lisp libraries and applications.

## Install from Quicklisp dist

Quicklisp is the de facto library registry. When you install Roswell, the latest versions of SBCL and Quicklisp are automatically set up. Let's try to see the value of ql:*quicklisp-home* in the REPL to check where Quicklisp is loaded from.

```
$ ros run
* ql:*quicklisp-home*
#P"/home/fukamachi/.roswell/lisp/quicklisp/"
```

You see that Quicklisp is installed in ~/.roswell/lisp/quicklisp/.

To install a Common Lisp project using this Quicklisp, execute the ros install command:

```
# Install a project from Quicklisp dist
$ ros install <project name>
```

You probably remember the ros install command is also used to install Lisp implementations. If you specify something other than the name of an implementation, Roswell assumes that it's the name of an ASDF project. If the project is available in Quicklisp dist, it will be installed from Quicklisp. Installed files will be placed under ~/.roswell/lisp/quicklisp/dists/quicklisp/software/ along with their dependencies.

If it's installed from Quicklisp, it may seem to be the same as ql:quickload. So you would think that this is just a command to be run from the terminal. In most cases, that's true. However, if the project being installed contains command-line programs in a directory named roswell/, Roswell will perform an additional action. For example, Qlot provides the qlot command. By running ros install qlot, Roswell installs the executable at ~/.roswell/bin/qlot. This shows that Roswell can be used as an installer not only for simple projects but also for command-line applications. Other examples of such projects are "lem", a text editor written in Common Lisp, and "mondo", a REPL program. I'll explain how to write such a project in another article someday.

## Install from GitHub

How about installing a project that is not in Quicklisp? Or, in some cases, the monthly Quicklisp dist is outdated, and you may want to use a newer version.

By specifying GitHub's user name and project name for ros install, you can install the project from GitHub.

```
$ ros install <user name>/<project name>

# In the case of Qlot
$ ros install fukamachi/qlot
```

Projects installed from GitHub will be placed under ~/.roswell/local-projects. To update such a project, run ros update:

```
# Note that it is not "fukamachi/qlot".
$ ros update qlot
```



Besides, you can also install a specific version by specifying a tag name or a branch name.

```
# Install Qlot v0.11.4 (tag name)
$ ros install fukamachi/qlot/0.11.4

# Install the development version (branch name)
$ ros install fukamachi/qlot/develop
```



## Manual installation

How about installing a project that doesn't exist in both Quicklisp and GitHub?

It's also easy. Just place the files under ~/.roswell/local-projects, and run ros install <project name>.
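Concretely, that might look like the following sketch. (The project name "my-lib" and the source path are hypothetical, for illustration only; the only requirement is that the directory contains the project's .asd file at its top level.)

```
# Assuming ~/src/my-lib contains my-lib.asd at its top level
$ cp -r ~/src/my-lib ~/.roswell/local-projects/
$ ros install my-lib
```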

Let me explain a little about how it works.

This mechanism is based on the local-projects mechanism provided by Quicklisp.

The "~/.roswell/local-projects" directory can be treated just like the local-projects directory of Quicklisp.

As a side note, if you want to treat other directories like local-projects, just add the path to ros:*local-project-directories*. This is accomplished by adding Roswell-specific functions to asdf:*system-definition-search-functions*. Check it out if you are interested.
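As a sketch of that side note, assuming a hypothetical ~/common-lisp-projects directory (the variable ros:*local-project-directories* is from Roswell; the path and the init-file location are made up for illustration, so check Roswell's documentation for the exact setup):

```
;; In the REPL, or in a Roswell init file:
;; make a hypothetical ~/common-lisp-projects behave like local-projects
(push #P"/home/me/common-lisp-projects/" ros:*local-project-directories*)
```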

However, I personally think this directory should be used with caution.

## Caution on the operation of local-projects

Projects placed under the local-projects directory can be loaded immediately after starting the REPL. I suppose many users use it for this convenience.

However, this becomes a problem when developing multiple projects on the same machine. Quicklisp's "local-projects" directory is user-local, which means all projects share it. Therefore, even if you think you are loading from Quicklisp, you may be loading a previously installed version from GitHub.

To avoid these dangers, I recommend using Qlot. If you are interested, please look into it.

Anyway, it is better to keep the number of local-projects to a minimum to avoid problems.

If you suspect that an unintended version of the library is loaded, you can check where the library is loaded by executing (ql:where-is-system :<project name>).
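For example, a check from the Roswell REPL might look like the following transcript (the project name and the output path are hypothetical, for illustration only):

```
$ ros run
* (ql:where-is-system :qlot)
#P"/home/me/.roswell/local-projects/qlot/"  ; loaded from local-projects, not Quicklisp dist
```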

## Conclusion

I introduced how to install Common Lisp projects with Roswell.

• From Quicklisp
  • ros install <project name>
• From GitHub
  • ros install <user name>/<project name>
  • ros install <user name>/<project name>/<tag>
  • ros install <user name>/<project name>/<branch>
• Manual installation
  • Place files under ~/.roswell/local-projects

## Friday, 12 November

### 17:21

Fridays, we open the Larchives, Lar’s extensive archive of art work oddities, and share a few pieces. When Lar arted the cover for one of the books inside the Looking For Group Adventure RPG Boxed Set, one of my favourite […]

The post Friday Lart – Bearly Owls appeared first on Looking For Group.

### 11:42

This week at Error'd we return with some of our favorite samples. A screwy message or a bit of mojibake is the ordinary thing; the real gems are the errors that are themselves error...

## Thursday, 11 November

### 11:56

"Retry on failure" makes a lot of sense. If you try to connect to a database, but it fails, most of the time that's a transient failure. Just try again. HTTP request failed? Try...

### 07:07

The post 1556 appeared first on Looking For Group.

## Wednesday, 10 November

### 21:07

Need help getting your session proposal in good shape? We're holding office hours in #LibrePlanet on Libera.chat every Thursday at 1 PM (EDT/EST).

### 16:21

Tuesday, YOU are the star! We curate our favourites from the previous week’s comments on lfg.co and Facebook and remind you how clever you are. Here are your top comments for Looking For Group pages 1553 – 1554 Looking For […]

The post Top Comments – Pages 1553 – 1554 appeared first on Looking For Group.


### 12:21

Anastacio knew of a programmer at his company by reputation only- and it wasn't a good reputation. In fact, it was bad enough that when this programmer was fired, no one- even people who...

## Tuesday, 09 November

### 14:42

I’m well-versed in the ups and downs of remote work. I’ve been doing some form thereof for most of my career, and I’ve met plenty of people who have a similar story. When companies ask for my help in building their ML/AI teams, I often recommend that they consider remote hires. Sometimes I’ll even suggest that they build their data function as a fully-remote, distributed group. (I’ll oversimplify for brevity, using “remote team” and “distributed team” interchangeably. And I’ll treat both as umbrella terms that cover “remote-friendly” and “fully-distributed.”)

Remote hiring has plenty of benefits. As an employer, your talent pool spans the globe and you save a ton of money on office rent and insurance. For the people you hire, they get a near-zero commute and a Covid-free workplace.

Then again, even though you really should build a remote team, you also shouldn’t. Not just yet. You first want to think through one very important question:

Do I, as a leader, really want a remote team?

### The Litmus Test

The key ingredient to successful remote work is, quite simply, whether company leadership wants it to work. Yes, it also requires policies, tooling, and re-thinking a lot of interactions. Not to mention, your HR team will need to double-check local laws wherever team members choose to live.  But before any of that, the people in charge have to actually want a remote team.

Here’s a quick test for the executives and hiring managers among you:

• As the Covid-19 pandemic forced your team to work from home, did you insist on hiring only local candidates (so they could eventually work in the office)?
• With wider vaccine rollouts and lower case counts, do you now require your team to spend some time in the office every week?
• Do you see someone as “not really part of the team” or “less suitable for promotion” because they don’t come into the office?

If you’ve said yes to any of these, then you simply do not want a distributed team. You want an in-office team that you begrudgingly permit to work from home now and then. And as long as you don’t truly want one, any attempts to build and support one will not succeed.


And if you don’t want that, that’s fine. I’m not here to change your mind.

But if you do want to build a successful remote team, and you want some ideas on how to make it work, read on.

### How You Say What You Have to Say

As a leader, most of your job involves communicating with people. This will require some adjustment in a distributed team environment.

A lot of you have developed a leadership style that’s optimized for everyone being in the same office space during working hours. That has cultivated poor, interruption-driven communication habits. It’s too easy to stop by someone’s office, pop over a cubicle wall, or bump into someone in the hallway and share some information with them.

With a remote team you’ll need to write these thoughts down instead. That also means deciding what you want to do before you even start writing, and then sticking with it after you’ve filed the request.

By communicating your thoughts in clear, unambiguous language, you’ve demonstrated your commitment to what you’re asking someone to do. You’re also leaving them a document they can refer to as they perform the task you’ve requested. This is key because, depending on work schedules, a person can’t just tap you on the shoulder to ask you to clarify a point.

(Side note: I’ve spent my career working with extremely busy people, and being one myself. That’s taught me a lot about how to communicate in written form. Short sentences, bullet points, and starting the message with the call-to-action—sometimes referred to as BLUF: Bottom Line Up-Front—will go a long way in making your e-mails clearer.)

The same holds true for meetings: the person who called the meeting should send an agenda ahead of time and follow up with recap notes. Attendees will be able to confirm their shared understanding of what is to be done and who is doing what.

Does this feel like a lot of documentation? That’s great. In my experience, what feels like over-communication for an in-office scenario is usually the right amount for a distributed team.

### Embracing Remote for What It Is

Grammar rules differ by language. You won’t get very far speaking the words of a new language while using grammatical constructs from your native tongue. It takes time, practice, and patience to learn the new language so that you can truly express yourself.  The path takes you from “this is an unnatural and uncomfortable word order” to “German requires that I put the verb’s infinitive at the end of the clause.  That’s just how it works.”

There are parallels here to leading a distributed team. It’s too easy to assume that “remote work” is just “people re-creating the in-office experience, from their kitchen tables.” It will most certainly feel unnatural and uncomfortable if you hold that perspective.  And it should feel weird, since optimizing for remote work will require re-thinking a lot of the whats and hows of team interactions and success metrics.  You start winning when you determine where a distributed team works out better than the in-office alternative.

Remote work is people getting things done from a space that is not your central office, on time schedules that aren’t strict 9-to-5, and maybe even communicating in text-based chat systems.  Remote work is checking your messages in the morning, and seeing a stream of updates from your night-owl teammates.  Remote work is its own thing, and trying to shoe-horn it into the shape of an in-office setup means losing out on all of the benefits.

Embracing remote teams will require letting go of outdated in-office tropes to accept some uncomfortable truths. People will keep working when you’re not looking over their shoulder.  Some of them will work even better when they can do so in the peace and quiet of an environment they control.  They can be fully present in a meeting, even if they’ve turned off their video. They can most certainly be productive on a work schedule that doesn’t match yours, while wearing casual attire.

The old tropes were hardly valid to begin with. And now, 18 months after diving head-first into remote work, those tropes are officially dead. It’s up to you to learn new ways to evaluate team (and team member) productivity. More importantly, in true remote work fashion, you’ll have to step back and trust the team you’ve hired.

### Exploring New Terrain

If distributed teamwork is new territory for your company, expect to stumble now and then. You’re walking through a new area and instead of following your trusty old map, you’re now creating the map. One step at a time, one stubbed toe at a time.

You’ll spend time defining new best practices that are specific to this environment. This will mean thinking through a lot more decisions than before—decisions that you used to be able to handle on autopilot—and as such you will find yourself saying “I don’t know” a lot more than you used to.

You’ll feel some of this friction when sorting out workplace norms.  What are “working hours,” if your team even has any?  Maybe all you need is a weekly group check-in, after which everyone heads in separate directions to focus on their work?  In that case, how will individuals specify their working hours and their off-time?  With so much asynchronous communication, there’s bound to be confusion around when a person is expected to pick up on an ongoing conversation in a chat channel, versus their name being @-mentioned, or contacting them by DM.  Setting those expectations will help the team shift into (the right kind of) autopilot, because they’ll know to not get frustrated when a person takes a few hours to catch up on a chat thread.  As a bonus, going through this exercise will sort out when you really need to hold a group meeting versus when you have to just make an announcement (e-mail) or pose a quick question (chat).

Security will be another source of friction.  When everyone is in the same physical office space, there’s little question of the “inside” versus the “outside” network.  But when your teammates are connecting to shared resources from home or a random cafe, how do you properly wall off the office from everything else? Mandating VPN usage is a start, but it’s hardly the entire picture.  There are also questions around company-issued devices having visibility into home-network traffic, and what they’re allowed to do with that information.  Or even a company laptop, hacked through the company network, infecting personal devices on the home LAN. Is your company’s work so sensitive that employees will require a separate, work-only internet service for their home office?  That would be fairly extreme—in my experience, I haven’t even seen banks go that far—but it’s not out of the realm of possibility.  At some point a CISO may rightfully determine that this is the best path.

Saying “I don’t know” is OK in all of these cases, so long as you follow that with “so let’s figure it out.” Be honest with your team to explain that you, as a group, may have to try a few rounds of something before it all settles. The only two sins here are to refuse to change course when it’s not working, and to revert to the old, familiar, in-office ways just to ease your cognitive burden. So long as you are thoughtful and intentional in your approach, you’ll succeed over the long run.

### It’s Here to Stay

Your data scientists (and developers, and IT ops team) have long known that remote work is possible. They communicate through Slack and collaborate using shared documents. They see that their “datacenter” is a cloud infrastructure. They already know that a lot of their day-to-day interactions don’t require everyone being in the same office. Company leadership is usually the last to pick up on this, which is why they tend to show the most resistance.

If adaptive leadership is the key to success with distributed teams, then discipline is the key to that adaptation. You’ll need the discipline to plan your communication, to disable your office autopilot, and to trust your team more.

You must focus on what matters—defining what needs to get done, and letting people do it—and learn to let go of what doesn’t. That will be uncomfortable, yes. But your job as a leader is to clear the path for people who are doing the implementation work. What makes them comfortable trumps what makes you comfortable.

Not every company will accept this. Some are willing to trade the benefits of a distributed team for what they perceive to be a superior in-office experience. And that’s fine. But for those who want it, remote is here to stay.

### 11:49

It's time to round up a few minor WTFs today. Some are bad, some are just funny, and some make you wonder what the meaning of all of this actually is. We'll start with Tom W. After winning a...

## Monday, 08 November

### 12:42

November 2021, and Brexit is still on-going. I am trying to refrain from posting wall-to-wall blog essays about how badly the on-going brexit is going, but it's been about 9-10 months since I last gnawed on the weeping sore, so here's an interim update.

(If apocalyptic political clusterfucks bore you, skip this blog entry.)

What has become most apparent this year is that Brexit is a utopian nation-building program that about 25-30% of the nation are really crazily enthusiastic about (emphasis on "crazy"—it's John Rogers' crazification factor at work here), and because they vote Tory, Johnson is shoveling red meat into the gimp cage on a daily basis.

Because Brexit is utopian it can never fail, it can only be failed. So it follows that if some aspect of Brexit goes sideways, traitors or insufficiently-enthusiastic wreckers must be at fault. (See also Bolshevism in the Lenin/early Stalin period.)

Alas, it turns out that the Brexiter politicians neglected to inform themselves of what the EU they were leaving even was, namely a legalistic international treaty framework. So they keep blundering about blindly violating legal agreements that trigger, or will eventually trigger, sanctions by their trading partners.

Now, the current government was elected in 2019 on the back of a "let's get Brexit done" campaign. In general, Conservative MPs fall into two baskets: True Believers and Corrupt Grifters. In normal times (i.e. not this century so far) the True Believers were tolerably useful insofar as they included Burkean small-c conservatives who believed in pragmatic government on behalf of the nation. However, around 1975 one particular wing of the True Believers gained control of the party. They were true believers all right, but Thatcher and her followers weren't pragmatists, they were ideologues. And by divorcing government from measurable outcomes—instead, making loyalty to an abstract program the acid test—they opened the door for the grifters, who could spout doubleplusgood duckspeak with the best of the Thatcherites and meanwhile quietly milk their connections for profit-making opportunities.

Thatcherism waxed and waned, but never really went away. And in Brexit, the grifters found an amazing opportunity: just swear allegiance to the flag and gain access to power! Their leader, one Alexander Boris de Pfeffel Johnson, made his bones writing politically motivated hit-pieces in the newspapers, with the target most often being the EU: he's a profoundly amoral charlatan and opportunistic grifter who is currently presiding over a massive corruption scandal (the British euphemism is "sleaze": we aren't corrupt, corruption is for Johnny Foreigner). Part of the scandal is misuse of public funds during COVID19: the pandemic turned out to be an amazing profit-making opportunity (nobody mention Dido Harding and the £37Bn English "test and trace" system that, er, didn't work, or her Jockey Club connection to disgraced former Health Minister Matt Hancock). Or most recently, the Owen Paterson scandal, in which a massively corrupt Tory MP was given a slap on the wrist (a one month suspension from parliament) by the Parliamentary Standards Commission ... at which point the Prime Minister's heavy hitters tried to force a vote to abolish the independent Parliamentary Commissioner for Standards. Which move couldn't possibly have anything to do with the Prime Minister himself being under investigation for corruption ...

Circa 1992-97, the final John Major government set a new high water mark for corruption in public office, with more ministerial resignations due to scandals than all previous governments combined going back to 1832. They'd been in power for 13 years in 1992, winning four elections along the way, and the grifting parasites had begun to overwhelm the host. But the Johnson government—in power for 11 years at this point (and also winning four consecutive elections: "four election wins in a row" seems to be some sort of watershed for blatant corruption)—has seen relatively few ministerial resignations due to scandals: because the PM doesn't think corruption is anything to be ashamed of.

When you're a grifter and the marks are about to notice what you're doing, standard procedure is to scream and shout and hork up a massive distraction. (Johnson's own term for this is "throw a dead cat on the table".)

The Tories focus-group tested "culture wars" in the run up to the 2019 election and discovered there was a public appetite for such things among their voter base (who trend elderly and poorly educated). Think MAGA. The transphobia campaign currently running is one such culture war: so is the war on wokeness that cross-infected the UK from you-know-who. It's insane. Turns out that about 80% of the shibboleths that infect the US hard right play well to the UK centre-right. The notable exception is vaccine resistance -- anti-vaxxers are a noisy but tiny fringe.

I note that this is predominantly an English disease. Scotland is mostly going in the opposite direction: Northern Ireland is deeply uneasy over the way Westminster seems to be throwing them under the bus over the NI border protocol, Wales ... not much news about Wales gets heard outside Wales, but they seem to be somewhere between Scotland and England on the political map. (Plaid Cymru, the Welsh nationalist party, are less successful than the SNP, who have comprehensively beaten Labour in Scotland: in Scotland the Tories are in second place in the polls by a whisker, but don't seem able to break through the 25% barrier.)

Anyway: the latest distraction is that Boris wants a war with France. Especially one he can turn off in an instant by throwing a switch or making a strategic concession (which the Tory-aligned media will spin as "victory" or blame on Labour Wreckers and Remoaner Parasites). The two things propping up his sagging junta are (a) a totally supine media environment and (b) COVID19, which turned up conveniently in time to be blamed for all the ills of Brexit. But COVID19 will go away soon, at which point it's going to be very hard to disguise the source of the economic damage. It turns out the UK's economic losses from brexit outweigh any economic gains by a factor of 178; we're seeing a roughly 4% decline in economic activity so far, and we're less than a year in.

Between the corrupt grifters, the catastrophic fallout from the most self-destructive economic policy of the century, and a ruling party that is selling seats in the House of Lords for £3M a pop to Party donors, we have plenty of reasons to expect many more dead cats to be flung on tables, and culture wars to be kicked off, over the coming months.

So:

Juche Britannia!

Sunlit Uplands!

Brexit means Brexit!

### 12:21

Over twenty years ago, Matt's employer started a project to replace a legacy system. Like a lot of legacy systems, no one actually knew exactly what it did. "Just read the code," is a...

### 06:49

The post 1555 appeared first on Looking For Group.

### 04:35

#### Guile-CV version 0.3.0 is released! (November 2021)

This is a maintenance release, which allows Guile-CV to work with Guile 3.0 (>= 3.0.7 to be specific). In addition, im-transpose performance has been improved.

The documentation has been restructured and follows the model we adopted for GNU G-Golf. The Configuring Guile's raised exception system section has been updated. Make sure you carefully read and apply the proposed changes.

##### Changes since the previous version

For a list of changes since the previous version, visit the NEWS file. For a complete description, consult the git summary and git log

### 02:56

What happens when you drink coffee in the METAVERSE???

## Sunday, 07 November

### 20:42

An event-ful month passed by for Kandria! Lots of developments in terms of conferences and networking. This, in addition to falling ill for a few days, left little time for actual dev again, though even despite everything we still have some news to share on that front as well!

## Swiss-Polish Game Jam

One of the major events this month was the Swiss-Polish game jam alongside GIC, which was organised largely by the Swiss embassy in Poland. Tim and I partnered up with three good fellows from Blindflug Studios, and made a small game called Eco Tower. The jam lasted only 48 hours, so it's nothing grand, but I'm still quite happy with how it turned out, and it was a blast working with the rest of the team!

You can find the game on itch.io.

## Game Industry Conference

The Game Industry Conference was pretty great! I had a fun time talking to the rest of Pro Helvetia and the other delegated teams, as well as the various attendees that checked out our booth. I wrote a lot more about it and the game jam in a previous weekly mailing list update, which, as an exception, you can see here.

## Digital Dragons

Over the course of our Poland visit we were also informed that we'd been accepted into the Digital Dragons Accelerator programme, which is very exciting! Digital Dragons is a Polish conference and organisation to support games, and with this new accelerator programme they're now also reaching out to non-polish developers to support their projects. Only 13 teams out of 97 from all over Europe were chosen, so we're really happy to have been accepted!

As part of the programme we'll be partnered with a Polish publishing company to settle on and then together achieve a set of milestones, over which the grant money of over 50k€ will be paid out. The partner will not be our publisher, just a partner, for the duration of this programme.

Now, you may be wondering what's in it for Poland, as just handing out a load of money to external studios sounds a bit too good to be true, and indeed there's a small catch. As part of the programme we have to first establish a company in Poland, to which the grant will be paid out, and with the hopes that you'll continue using this company after the accelerator ends. We're now in the process of establishing this company, and have already signed a contract with a law firm to help us out with everything involved.

In any case, this is all very exciting, and I'm sure we'll have more to share about all of this as time goes on.

## Nordic Games

Then this week was the Nordic Games Winter conference, with another MeetToMatch platform. We were also accepted into its "publisher market", which had us automatically paired up with 10 publishing firms for pitches on Tuesday. That, combined with law firm meetings, meant that on Tuesday I had 12 meetings almost back to back. Jeez!

I'm not hedging my bets on getting any publishing deals out of this yet, but it is still a great opportunity to grow our network and get our name and game out there into the collective mind of the industry. The response from the recruiters also generally seems favourable, which is really cool.

I do wish we had a new trailer though. While I still think our current VS trailer is good, I've now had to listen to it so many times during pitches and off that I really can't stand it anymore, ha ha! We'll hold off on that though, creating new content and hammering out that horizontal slice is far more important at this stage.

## Hotfix Release

There was a hotfix release along the line that clears out a bunch of critical bugs, and adds a few small features as well. You can get it from your usual link, or by signing up.

## Horizontal Slice

We're now well into the horizontal slice development, and I've started hammering out the level design for the lower part of region 1. I'm still very slow-going on that since I just lack the experience to do it easily, which in turn makes me loathe doing it, which in turn makes me do less of it, which in turn does not help my experience. Woe is me! Anyway, I'll just grit my teeth for now and get as much done as I can - I'll get better over time I'm sure!

As part of the level design process I've also started implementing more platforming mechanics such as the slide move, lava and oil liquids, a dash-recharge element, and recallable elevators. I'll have to add a few more things still, such as crumbling platforms, springs and springboards, wind, exhaust pipes, and conveyor belts.

## Tim

This month has been horizontal slice quest development, with the trip to Poland for GIC sandwiched in the middle. I'm sure Nick has covered this in depth above, but I wanted to add that it was an amazing experience for me: travelling to Poland and seeing a new country and culture (St. Martin's croissants / Rogals are AMAZING); the game jam where although as a writer I was somewhat limited (helped a bit with design, research and playtesting), it was nevertheless a great experience with the best result - and I got to shake hands with the Swiss ambassador!; the GIC conference itself, where it was a great feeling with Kandria live on the show floor, and watching players and devs get absorbed; the studio visit with Vile Monarch and 11 bit (Frostpunk is one of my favourite games). But the best thing was the people: getting to meet Nick in real life and see the man behind the magic, not to mention all the other devs, industry folk, and organisers from Switzerland and Poland. It was a real privilege to be part of the group.

I've also been continuing to help with the MeetToMatch platform for both GIC and Nordic Game this past week, filtering publishers to suit our needs and booking meetings. Aside from that, it's now full steam ahead on the horizontal slice! With the quest document updated with Nick's feedback, it's a strong roadmap for me to follow. I'm now back in-game getting my hands dirty with the scripting language - it feels good to be making new content, and pushing the story into the next act beyond the vertical slice.

## Fred

Fred's been very busy implementing the new moves for the Stranger, as well as doing all the animations for new NPC characters that we need in the extended storyline. One thing I'm very excited about is the generic villagers, as I want to add a little AI to them to make them walk about and really make the settlements feel more alive!

## Mikel

Similarly, Mikel's been hard at work finalising the tracks for the next regions and producing variants for the different levels of tension. I'm stoked to see how they'll work in-game! Here's a peek at one of the tracks:

## A minor note

I'll take this moment to indulge in a little side project. For some years now I've been producing physical desktop calendars, handling my own art, design, and distribution. If you like the art I make, or would simply like to support what we do and get something small out of it, consider getting one on Gumroad.

## The bottom line

As always, let's look at the roadmap from last month.

• Fix reported crashes and bugs

• Add an update notice to the main screen to avoid people running outdated versions

• Implement some more accessibility options

• Implement more combat and platforming moves

• Implement RPG mechanics for levelling and upgrades (partially done)

• Explore platforming items and mechanics (partially done)

• Practise platforming level design (partially done)

• Draft out region 2 main quest line levels and story

• Draft out region 3 main quest line levels and story

• Complete the horizontal slice

Well, we're starting to crunch away at that horizontal slice content. Still got a long way to go, though!

As always, I sincerely hope you give the new demo a try if you haven't yet. Let us know what you think when you do or if you have already!

### 13:35

We have published a video of an online session we ran as an introduction to GNU poke internals for new developers. We mainly covered the Poke compiler and the poke virtual machine (PVM), and how to extend them to implement new language constructs.

The video can be found at:

Thanks to Mohammad-Reza Nabipoor for editing the video!

### 04:42

Shaenon: I am always excited to see our comics on people’s walls, so thanks to Rob Davidoff for sharing this picture. The watercolor is by Rob’s wife’s grandmother. It’s an honor.

Channing: Beautiful framing job! Thank you for honoring our work in this way.

## Friday, 05 November

### 20:28

I got a couple of those Facebook Memories today that I’m glad I wrote. I’m grateful I saw them this morning, and I want to share them. November 5, 2018 […]

### 10:56

No, it's generally not nice to pick on people who fumble a second language. But TDWTF isn't here to be nice, it's here to be funny, or at least interesting. If nothing else, our final...

## Thursday, 04 November

### 15:07

We are happy to announce that Belén finished her PhD thesis on "Understanding and designing technologies for everyday financial collaboration", which contains many inspirational ideas for future payment systems like GNU Taler:

### 10:49

Apolena supports an application written by contractors many years ago. It tracks user activity for reporting purposes, as one does. They then want to report on this, because why else are you...

### 05:35

The post 1554 appeared first on Looking For Group.

## Wednesday, 03 November

### 14:00

Grouping For Looks is a page-by-page retelling of the Looking For Group saga through the lens of a mirror universe where Cale is a goateed tyrant and Richard is a holy soul trying to set him on a good path. […]

The post GFL – Page 0071 appeared first on Looking For Group.

### 11:00

We've talked about Microsoft's WebForms in the past. Having used it extensively in that era, I found it a weird mismatch, an attempt to get Visual Basic-style GUI designer tools attached to web...

## Tuesday, 02 November

### 15:28

Tuesday, YOU are the star! We curate our favourites from the previous week’s comments on lfg.co and Facebook and remind you how clever you are. Here are your top comments for Looking For Group pages 1551 – 1552 Looking For […]

The post Top Comments – Pages 1551 – 1552 appeared first on Looking For Group.

### 12:35

There once was a developer who had a lot of hustle. They put out a shingle as a contractor, knocked on doors, made phone calls, and targeted those small businesses that needed something a little more...

### 12:28

While October’s news was dominated by Facebook’s (excuse me, Meta’s) continued problems (you’d think they’d get tired of the apology tour), the most interesting news comes from the AI world. I’m fascinated by the use of large language models to analyze the “speech” of whales, and to preserve endangered human languages. It’s also important that machine learning seems to have taken a step (pun somewhat intended) forward, with robots that teach themselves to walk by trial and error, and with robots that learn how to assemble themselves to perform specific tasks.

## AI

• The design studio Artefact has created a game to teach middle school students about algorithmic bias.
• Researchers are building large natural language models, potentially the size of GPT-3, to decode the “speech” of whales.
• A group at Berkeley has built a robot that uses reinforcement learning to teach itself to walk from scratch–i.e., through trial and error. They used two levels of simulation before loading the model into a physical robot.
• AI is reinventing computers: AI is driving new kinds of CPUs, new “out of the box” form factors (doorbells, appliances), decision-making rather than traditional computation. The “computer” as the computational device we know may be on the way out.
• Weird creatures: Unimals, or universal animals, are robots that can use AI to evolve their body shapes so they can solve problems more efficiently. Future generations of robotics might not be designed with fixed bodies, but have the capability to adapt their shape as needed.
• Would a National AI Cloud be a subsidy to Google, Facebook, et al., a threat to privacy, or a valuable academic research tool?
• I’ve been skeptical about digital twins; they seem to be a technology looking for an application. However, Digital Twins (AI models of real-world systems, used for predicting their behavior) seem like a useful technology for optimizing the performance of large batteries.
• Digital Twins could provide a way to predict supply chain problems and work around shortages. They could allow manufacturers to navigate a compromise between just-in-time stocking processes, which are vulnerable to shortages, and resilience.
• Modulate is a startup currently testing real-time voice changing software. They provide realistic, human sounding voices that replace the user’s own voice. They are targeting gaming, but the software is useful in many situations where harassment is a risk.
• Voice copying algorithms were able to fool both people and voice-enabled devices roughly 50% of the time (30% for Azure’s voice recognition service, 62% for Alexa). This is a new front in deep fakery.
• Facebook AI Research has created a set of first-person (head-mounted camera) videos called Ego4D for training AI.  They want to build AI models that see the world “as a person sees it,” and be able to answer questions like “where did I leave my keys.” In essence, this means that they will need to collect literally everything that a subscriber does.  Although Facebook denies that they are thinking about commercial applications, there are obvious connections to Ray-Ban Stories and their interest in augmented reality.
• DeepMind is working on a deep learning model that can emulate the output of any algorithm.  This is called Neuro Algorithmic Reasoning; it may be a step towards a “general AI.”
• Microsoft and NVIDIA announce a 530 billion parameter natural language model named Megatron-Turing NLG 530B.  That’s bigger than GPT-3 (175B parameters).
• Can machine learning be used to document endangered indigenous languages and aid in language reclamation?
• Beethoven’s 10th symphony completed by AI: I’m not convinced that this is what Beethoven would have written, but this is better than other (human) attempts to complete the 10th that I’ve heard. It sounds like Beethoven, for the most part, though it quickly gets aimless.
• I’m still fascinated by techniques to foil face recognition. Here’s a paper about an AI system that designs minimal, natural-looking makeup that reshapes the parts of the face that face recognition algorithms are most sensitive to, without substantially altering a person’s appearance.

## Ethics

• Thoughtworks’ Responsible Tech Playbook is a curated collection of tools and techniques to help organizations become more aware of bias and become more inclusive and transparent.

## Programming

• Kerla is a Linux-like operating system kernel written in Rust that can run most Linux executables. I doubt this will ever be integrated into Linux, but it’s yet another sign that Rust has joined the big time.
• OSS Port is an open source tool that aims to help developers understand large codebases. It parses a project repository on GitHub and produces maps and tours of the codebase. It currently works with JavaScript, Go, Java, and Python, with Rust support promised soon.
• Turing Complete is a game about computer science. That about says it…
• wasmCloud is a runtime environment that can be used to build distributed systems with wasm in the cloud. WebAssembly was designed as a programming-language-neutral virtual machine for  browsers, but it increasingly looks like it will also find a home on the server side.
• Adobe Photoshop now runs in the browser, using wasm and Emscripten (the C++ toolchain for wasm).  In addition to compiling C++ to wasm, Emscripten also translates POSIX system calls to web API calls and converts OpenGL to WebGL.
• JQL (JSON Query Language) is a Rust-based language for querying JSON (what else?).

## Security

• Microsoft has launched an effort to train 250,000 cyber security workers in the US by 2025. This effort will work with community colleges. They estimate that it will only make up 50% of the shortfall in security talent.
• Integrating zero trust security into the software development lifecycle is really the only way forward for companies who rely on systems that are secure and available.
• A supply chain attack against a Node.js library (UA-Parser-JS) installs crypto miners and trojans for stealing passwords on Linux and Windows systems. The library’s normal function is to parse user agent strings, identifying the browser, operating system, and other parameters.
• A cybercrime group has created penetration testing consultancies whose purpose is to acquire clients and then gather information and initiate ransomware attacks against those clients.
• A federated cryptographic system will allow sharing of medical data without compromising patient privacy.  This is an essential element in “predictive, preventive, personalized, and participatory” medicine (aka P4).
• The European Parliament has taken steps towards banning surveillance based on biometric data, private face recognition databases, and predictive policing.
• Is it possible to reverse-engineer the data on which a model was trained? An attack against a fake face generator was able to identify the original faces in the training data. This has important implications for privacy and security, since it appears to generalize to other kinds of data.
• Adversarial attacks against machine learning systems present a different set of challenges for cybersecurity. Models aren’t code, and have their own vulnerabilities and attack vectors. Atlas is a project to define the machine learning threat landscape. Tools to harden machine learning models against attack include IBM’s Adversarial Robustness Toolbox and Microsoft’s Counterfit.
• Researchers have discovered that you can encode malware into DNA that attacks sequencing software and gives the attacker control of the computer.  This attack hasn’t (yet) been found in the wild.
• Masscan is a next generation, extremely fast port scanner.  It’s similar to nmap, but much faster; it claims to be able to scan the entire internet in 6 minutes.
• ethr is an open source cross-platform network performance measurement tool developed by Microsoft in Go. Right now, it looks like the best network performance tool out there.
• Self-aware systems monitor themselves constantly and are capable of detecting (and even repairing) attacks.

## Infrastructure and Operations

• Interesting insights into how site reliability engineering actually works at Google. SRE is intentionally a scarce resource; teams should solve their own problems. Their goal is to help dev teams attain reliability and performance objectives with engineering rather than brute force.

## Devices and Things

• Amazon is working on an Internet-enabled refrigerator that will keep track of what’s in it and notify you when you’re low on supplies.  (And there are already similar products on the market.) Remember when this was a joke?
• Consumer-facing AI: On one hand, “smart gadgets” present a lot of challenges and opportunities. On the other hand, it needs better deliverables than “smart” doorbells. Smart hearing aids that are field-upgradable as a subscription service?
• A drone has been used to deliver a lung for organ transplant. This is only the second time a drone has been used to carry organs for transplantation.
• Intel has released its next generation neuromorphic processor, Loihi. Neuromorphic processors are based on the structure of the brain, in which neurons asynchronously send each other signals. While they are still a research project, they appear to require much less power than traditional CPUs.

## Web

• ipleak and dnsleaktest are sites that tell you what information your browser leaks. They are useful tools if you’re interested in preserving privacy. The results can be scary.
• Dark design is the practice of designing interfaces that manipulate users into doing things they might not want to do, whether that’s agreeing to give up information about their web usage or clicking to buy a product. Dark patterns are already common, and becoming increasingly prevalent.
• Black Twitter has become the new “Green Book,” a virtual place for tips on dealing with a racist society. The original Green Book was a Jim Crow-era publication that told Black people where they could travel safely, which hotels would accept them, and where they were likely to become victims of racist violence.

## Quantum Computing

• A group at Duke University has made significant progress on error correcting quantum computing. They have created a “logical qubit” that can be read with a 99.4% probability of being correct. (Still well below what is needed for practical quantum computing.)
• There are now two claims of quantum supremacy from Chinese quantum computing projects.

## Miscellaneous

• Would our response to the COVID pandemic have been better if it had been approached as an engineering problem, rather than as scientific research?

## Monday, 01 November

### 06:00

The post 1553 appeared first on Looking For Group.

### 02:56

Tonight's comic is about the perils of opening up.

## Sunday, 31 October

### 04:28

Shaenon: I can’t believe we never did this in previous years. I’m Kickstarting a Skin Horse 2022 calendar, using seasonal illustrations from my years of wallpaper illustrations. Want one? Go back the Kickstarter! It’ll be running until the end of November.

## Friday, 29 October

### 16:14

Over the past many years, I’ve included a lunch note for Daughter, Age 13, when she goes off to school each day.

(Honestly, I’m delighted – if a bit surprised – that she still asks for them.)

Usually, these are directly tied to something she’s studying, or her current fandoms. The first two weeks of the semester, for example, I sketched out the 14 Doctors (she’s recently become heavily invested in Doctor Who, and in fact her Halloween costume is David Tennant’s mod tenth Doctor).

However, it’s Halloween, and I want to share the joy that is OVER THE GARDEN WALL with everyone. Particularly those who’ve not seen it.

OVER THE GARDEN WALL is a ten-episode series from Cartoon Network that ran in 2014, and is currently available on several streaming services. TLDR: it is magic, and perfect Halloween-season viewing.

Daughter, then Age 12, discovered it and introduced it to Judith and me. We fell in love with it immediately: it’s quirky, creepy, loving, warm, and quite wonderful. It can get intense for really young kids, but it is pure bliss.

So once October came around, I turned to OTGW for her lunch notes.

They’re only quick lunch notes, but I had a hugely enjoyable time with them. Most are based around quotes from the show. I sketch them out in pencil on Post-It notes, then ink them using the same pens I’ll draw DORK TOWER with: Faber-Castell PITT Artist Pens (fine nib).

But, much like SONG OF THE SEA, OVER THE GARDEN WALL is most definitely Halloween-adjacent, and you should watch (or rewatch) it, if you get the chance!

John

### 05:56

While watching Discovery to prepare for Ready Room, I had this sudden realization that my journey and Wesley’s journey are almost identical. I don’t think I’ve ever thought about it […]

## Thursday, 28 October

### 22:07

I wrote a book in 2004 called Just A Geek. Literally dozens of people read it, and a lot of them seemed to like it, but I have felt for years that it’s just been forgotten by pretty much everyone. About two years ago, I wrote a novel, and got it as close to finished as I could. My agent shopped it, and it was universally rejected. Like, it was so rejected, nobody even gave us notes on how to make it better. They were just, like, “NOPE.” I think it’s a neat little story, but clearly capital-P Publishing disagrees. Not gonna lie. I was devastated. But one of those editors remembered Just A Geek. He was also familiar with the writing I’d done since then, my mental health advocacy, and my story of surviving narcissistic abuse and neglect. He had this idea to revisit Just A Geek, annotate it, and include some more recent writing. The whole thing would go together and be an annotated memoir.

So I’ve worked on that for about two years, and today we get to announce that it’s a thing.

My publisher and I have this fantastic plan to do an awesome video announcement for the upcoming release of Still Just A Geek, my annotated memoir, which comes out April 12 in America, and 14 April in the UK.

I had this plan to maybe read a little of it, do some cool video stuff, and be fancy. And then I realized it’s Thursday, which is when all the gardeners come into my neighborhood, and the cacophony of leaf blowers and lawnmowers is just a little too much. I also have a ton of Star Trek: Discovery homework to do for Ready Room tomorrow, and holy crap I suddenly have more things to do than I have hours to get them done.

So that great video idea will be delayed for a little bit. It’ll still happen, I just don’t know when.

Am I just killing it with this book announcement or what? This is how you go viral and get lots of free media attention, y’all.

Really important stuff I want you to know:

I went through the entire text of Just A Geek, and annotated all of it. I feel like I’m only supposed to focus on the stuff I did that’s great, but … well, here’s a little bit from my introduction:

“Many times during the process, I wanted to quit. I kept coming across material that was embarrassing, poorly-written, immature, and worst of all, privileged and myopic. I shared all of this with my editor, my wife, my manager, my literary agent, and anyone else in my orbit who I trusted. “This really ought to be buried and forgotten in that landfill with the E.T. cartridges,” I told them. “Digging it all back up is not going to go well,” I said. They all assured me that confronting and owning that stuff in public, something I’d done privately, was important. I had to confront the parts that still fill me with shame and regret.”

So I did that, and it was uncomfortable, embarrassing, awkward, but ultimately healing and surprisingly cathartic. You may have noticed that I’ve spent much of the last several months remembering and writing about childhood trauma. Now you know why.

I also wrote

“I’m going to be honest: I’m terrified that I didn’t say the right things, take away the right lessons, atone appropriately for the parts of this that are gross. I know that I am not the person I was when I thought it was funny to make a childish, lazy, homophobic joke. I am not the same person who didn’t even consider that a young woman, doing her job, was worthy of respect and kindness, because she was more useful to my male gaze as a character in a story that isn’t as good as I thought it was. I know I’m not that person, because those things—which are a small but significant part of my origin story—revolted me when I read them for the first time in over a decade. I mean, I physically recoiled from my own book. Those moments, and the privilege and ignorance that fueled them, filled me with shame and regret. They still do. But confronting and learning from them allowed me to complete my origin story, as it turns out. It’s another thing I was unaware I needed to do, but, having done it, cannot imagine not doing.”

That’s the first … I don’t know, half, maybe two thirds, of this volume. The rest is new essays and speeches I’ve written in the last few years, which are also annotated.

If it all holds together the way I hope it does, it should tell a story of surviving childhood trauma, surviving a predatory industry, and in the most unexpected way, finding out exactly who I am, versus who I always thought I was supposed to be.

I hope it’s inspiring. I hope it’s entertaining. I hope it doesn’t suck. As you can tell, I am terrified.

I will be doing the audiobook, OBVIOUSLY. It will be released at the same time the print and ebook copies are released. We’re working on a plan to offer signed copies through indie bookshops. We’re talking about a virtual press tour. I’ll give you all more information as it gets locked in.

Okay. That’s it. That’s the big news. Please tell all your friends.

### 18:14

The post 1552 appeared first on Looking For Group.

### 17:35

Hey everybody.

So last February now, I, in a fit of optimism, thought I'd take on the Guile Potluck duties for 2021: asking people to submit the fun stuff they were up to, and then I'd blog about it. That didn't happen, for which I humbly apologize. But I should know by now that every time I actually commit to something publicly visible in free software, reality intervenes. So from now on, I promise that I will never again commit to anything.

But life is getting better: my vision problems are improving, my back is all healed, and I can actually sit in an office chair all day without pain.

So yeah.

Anyway, while I shy away from the word "commitment," I do intend to make good on old promises.

In the meantime (and one of the reasons I'm actually talking about feelings on this backup blog right now instead of my standard repository of feelings) I do have to do something about my always-neglected primary website, Lonely Cactus, which has apparently gone to blog heaven. A pity. There was some cool stuff on there.

My hope is to get Lonely Cactus up and running on a different set of technologies, as a learning exercise.  Maybe a GNU/Hurd VM. Maybe Guix.  Because if you're going to do something weird, might as well go all the way.

But in real life, if you're keeping score, I have returned to /dev/null. Single again, no kids in the house anymore, unfit, no church life to speak of. I still own this dilapidated, century-old house in Los Angeles, and have a day job, so I'm better off than billions of people. And I'm lucky in that comparatively few people I know have died during the plague year.

Time for life v4.0, or v5.0. I'm not sure of my current revision number.

## Wednesday, 27 October

### 22:49

About once a year or so, I look back through my blog archives just to see what I’ve written about, and to see where I am now, relative to where […]

### 22:07

Images are available at https://trisquel.info/download or directly at https://cdimage.trisquel.info/ and its mirrors.

This minor update to the 9.x "Etiona" series is intended to provide an up to date set of ISO images, both for use as an installation medium and as a live environment with newer packages. This addresses two main security concerns in the 9.0 original ISO images:

Along with those fixes, the release includes any other security update published upstream since we published Etiona, and the latest version of the Mozilla-based "Abrowser" (v93).

These updates will help keep the v9.0 branch in good working order, as it will continue to be actively maintained until April 2023.

In other news, the development of Trisquel 10 is continuing at a great pace, with initial ISO images now available for testing at https://cdbuilds.trisquel.org/nabia/. Please note that these images are not yet intended for production use, so use them only for testing and development or (as is true in any case) at your own risk.

## Tuesday, 26 October

### 16:28

Tuesday, YOU are the star! We curate our favourites from the previous week’s comments on lfg.co and Facebook and remind you how clever you are. Here are your top comments for Looking For Group pages 1549 – 1550. Looking For […]

The post Top Comments – Pages 1549 – 1550 appeared first on Looking For Group.

### 16:07

There are times when what looked like the right design choice some years back comes to look like an odd choice as time passes. The beloved guix environment tool is suffering that fate. Its command-line interface has become non-intuitive and annoying for the most common use cases. Since it could not be changed without breaking compatibility in fundamental ways, we devised a new command meant to progressively replace it; guix shell—that’s the name we unimaginatively ended up with—has just landed after a three-week review period, itself a follow-up to discussions and hesitations on the best course of action.

This post introduces guix shell, how it differs from guix environment, the choices we made, and why we hope you will like it.

# The story of guix environment

The guix environment command started its life in 2014, when Guix was a two-year-old baby and the whole community could fit in a small room. It had one purpose: “to assist hackers in creating reproducible development environments”. It was meant to be similar in spirit to VirtualEnv or Bundler, but universal—not limited to a single language. You would run:

guix environment inkscape


… and obtain an interactive shell with all the packages needed to hack on Inkscape; in that shell, the relevant environment variables—PATH, CPATH, PKG_CONFIG_PATH, and so on—would automatically point to a profile created on the fly and containing the compiler, libraries, and tools Inkscape depends on, but not Inkscape itself.

Only a year later did it become clear that there are cases where one would want to create an environment containing specific packages, rather than an environment containing the dependencies of packages. To address that, David Thompson proposed the --ad-hoc option:

guix environment --ad-hoc inkscape -- inkscape


… would create an environment containing only Inkscape, and would then launch the inkscape command in that environment. Many features were added over the years, such as the invaluable --container option, but these two modes, development and “ad hoc”, are the guts of it.

Fast forward six years: today, there’s consensus that the name --ad-hoc is confusing for newcomers and above all, that the “ad hoc” mode should be the default. This is the main problem that guix shell addresses.

# Doing what you’d expect

Changing the default mode from “development environment” to “ad hoc” is technically easy, but how to do that without breaking compatibility is harder. This led to lengthy discussions, including proposals of mechanisms to choose between the new and old semantics.

In the end, keeping the guix environment name while allowing it to have different semantics was deemed dangerous. For one thing, there’s lots of material out there that demoes guix environment—blog posts, magazine articles, on-line courses—and it would have been impossible to determine whether they refer to the “new” or to the “old” semantics. We reached the conclusion that it would be easier to use a new command name and to eventually deprecate guix environment.

With guix shell, the default is to create an environment that contains the packages that appear on the command line; to launch Inkscape, run:

guix shell inkscape -- inkscape


The --ad-hoc option is gone! Likewise, to spawn an ephemeral development environment containing Python and a couple of libraries, run:

guix shell python python-numpy python-scipy -- python3


Now, if you want, say, the development environment of Inkscape, add the --development or -D option right before:

guix shell -D inkscape


You can add Git and GDB on top of it like so:

guix shell -D inkscape git gdb


(Note that -D only applies to the immediately following package, inkscape in this case.) It’s more concise and more natural than with guix environment. As can be seen in the manual, all the other options supported by guix environment remain available in guix shell.

# Short-hands for development environments

A convention that’s become quite common is for developers to provide a guix.scm at the top of their project source tree, so that others can start a development environment right away:

guix environment -l guix.scm


The guix.scm file would contain a package definition for the project at hand, as in this example. This option is known as -f in guix shell, for consistency with other commands, and the equivalent command is:

guix shell -D -f guix.scm


Since all Guix commands accept a “manifest” with -m, another option is to provide a manifest.scm file and to run:

guix shell -m manifest.scm


“Wouldn’t it be nice if guix shell would automatically follow these conventions when not given any argument?”, some suggested. As in the case of Bundler, direnv, or typical build tools from Meson to Make, having a default file name can save typing and contribute to a good user experience for frequently-used commands. In this spirit, guix shell automatically loads guix.scm or manifest.scm, from the current directory or an ancestor thereof, such that entering a project to hack on it is as simple as:

cd ~/my/project/src
guix shell


Worry not: guix shell loads guix.scm or manifest.scm if and only if you have first added its directory to ~/.config/guix/shell-authorized-directories. Otherwise guix shell warns you and prints a hint that you can copy/paste if you want to authorize the directory.
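For instance, assuming your project lives at ~/my/project/src (a hypothetical path; adjust it to your own checkout), authorizing it is a one-line append:

```shell
# Authorize a directory so `guix shell` may auto-load its guix.scm or
# manifest.scm (the project path below is a hypothetical example).
mkdir -p ~/.config/guix
echo "$HOME/my/project/src" >> ~/.config/guix/shell-authorized-directories
```

After that, a bare `guix shell` run from anywhere under that directory will pick up the project's file without further prompting.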

# Caching environments

With that in place, guix shell can pretty much fill the same role as direnv and similar tools, with one difference though: speed. When all the packages are already in the store, guix shell can take one to a few seconds to run, depending on the package set, on whether you’re using a solid state device (SSD) or a “spinning” hard disk, and so on. That’s acceptable for occasional use, but prohibitively slow for direnv-like use cases.

To address that, guix shell maintains a profile cache for the -D -f guix.scm and -m manifest.scm cases. On a hot cache, it runs in 0.1 second. All it has to do is fork a shell with the right environment variable definitions; it does not talk to guix-daemon, and it does not even read guix.scm or manifest.scm (it’s possible to forcefully update the cache with --rebuild-cache).

That makes guix shell usable even for short-lived commands like make:

guix shell -- make


Hopefully it’ll change the way we use the tool!

# The shell doctor

While revamping this command-line interface, the idea of a “shell doctor” came up. In interactive use, guix shell sets environment variables and spawns a shell, but it’s not uncommon for the shell to mess up the whole environment. Why? Because, contrary to documented practice, it’s quite common for users to define or override environment variables in the startup files of non-login shells: ~/.bashrc for Bash, ~/.zshrc for Zsh. Instead, environment variable definitions should go in the startup file of login shells, such as ~/.bash_profile or ~/.profile. But let’s face it: it’s a subtle distinction that few of us know or care about.

As a result, users of Guix, especially on distros other than Guix System, would often be disappointed when running guix environment --pure and yet find that PATH contains non-Guix entries, that there’s a bogus LD_LIBRARY_PATH definition, and whatnot. Now, they can call the doctor, so to speak, to obtain a diagnosis of the health of their shell by adding the --check flag:

guix shell --check python python-numpy


The command creates an environment containing Python and NumPy, spawns an interactive shell, checks the environment variables as seen by the shell, and prints a warning if PATH or, in this case, PYTHONPATH has been overridden. It cannot tell users where the problem comes from, but it does tell them when something’s wrong, which is a first step.

Of course, the best way to sidestep these problems is to pass --container, which gives a fresh, isolated environment that does not contain those startup files. That’s not always an option though, for instance on systems lacking support for unprivileged user namespaces, so --check comes in handy there.

# Try it!

Just run guix pull to get this shiny new guix shell thingie!

If you don’t feel ready yet, that’s OK: guix environment won’t disappear overnight. We have a written commitment to keep it around until May 1st, 2023. Overall though, we hope you’ll find the guix shell interface easier to use and compelling enough that you’ll be willing to switch before then!

GNU Guix is a transactional package manager and an advanced distribution of the GNU system that respects user freedom. Guix can be used on top of any system running the Hurd or the Linux kernel, or it can be used as a standalone operating system distribution for i686, x86_64, ARMv7, AArch64 and POWER9 machines.

In addition to standard package management features, Guix supports transactional upgrades and roll-backs, unprivileged package management, per-user profiles, and garbage collection. When used as a standalone GNU/Linux distribution, Guix offers a declarative, stateless approach to operating system configuration management. Guix is highly customizable and hackable through Guile programming interfaces and extensions to the Scheme language.
