
Buoyed by the flood

The Bridge at Villeneuve-la-Garenne, 1872, Alfred Sisley

Hello from the lab! A short edition this month.

I have a few links and thoughts I want to share, about both the internet and AI. The internet stuff follows immediately; AI is further down.


Here is Rachel Binx, a brilliant artist and creative technologist, on the unbearable sameness of the modern web.

I love the spirit of Rachel’s Unicode Arrows, an addition to the “quick lookup reference page” genre … with merch 🤩


Here is a project very aligned with this newsletter:

(we)bsite is a living collection of internet dreams from people like you, inhabitants of the internet. It aims to create space to hold, show, and uplift everyday visions and hopes for the internet.


Here’s an interesting app called feeeed, beautifully presented.

The app’s creator, Nate Parrott, writes:

More aspirationally, it’s an app that lets you “follow anything,” including data sources that are personal to you, like your step count, weather, and anything you wish to remind yourself of from time to time.

The “anything” extends to arbitrary portions of web pages, which you can clip and “follow” live — like magic peepholes across the web. This feature is built into the Arc web browser, too — Nate works at The Browser Company, so I suspect this is no coincidence — and, in both places, I find it very appealing and provocative. Transclusion-y, even!
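
To make the peephole idea concrete, here’s a rough sketch of how “following” a clipped region of a web page could work, in Python with requests and BeautifulSoup. This is only my guess at the mechanics, not Nate’s actual implementation; the CSS-selector approach and the function names are mine.

import hashlib
import requests
from bs4 import BeautifulSoup

def clip(url: str, css_selector: str) -> str:
    """Fetch a page and return the text of one clipped region."""
    html = requests.get(url, timeout=10).text
    node = BeautifulSoup(html, "html.parser").select_one(css_selector)
    return node.get_text(" ", strip=True) if node else ""

def check_peephole(url: str, css_selector: str, last_digest: str) -> tuple[bool, str]:
    """Re-fetch the clipped region and report whether it has changed."""
    snapshot = clip(url, css_selector)
    digest = hashlib.sha256(snapshot.encode("utf-8")).hexdigest()
    return digest != last_digest, digest

Run on a schedule, something like this would surface a tiny, live slice of any page in a feed.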


You know … I continue to think my web e-book template is pretty great 🤓


The specifications for RSS and Atom were “finished” in the mid-2000s. I believe there’s a great opportunity now to crack them open again, even (or especially) in a bottom-up, unsanctioned way.

As an example, here’s Colin Walker’s proposal for a “now” namespace in RSS, building on the idea of the /now status page.

Even if Colin’s proposal here isn’t exactly the right next step, I believe the simple act of thinking about these trans­formations and extensions — describing and pitching them — is extremely useful and productive. It’s good practice; a way of reviving atrophied muscles.

*whispers* I think the cranky RSS spec-heads have all retired. Let’s mess around!
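
To practice what I’m preaching, here’s a doodle of what a channel-level “now” element might look like, generated with Python’s standard library. The namespace URI and element names are placeholders of my own invention; Colin’s actual proposal may use different ones.

import xml.etree.ElementTree as ET

NOW_NS = "https://example.com/xmlns/now"  # placeholder URI, not from Colin's proposal
ET.register_namespace("now", NOW_NS)

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "A hypothetical feed"
ET.SubElement(channel, "link").text = "https://example.com/"

# A channel-level status, mirroring the idea of a /now page:
status = ET.SubElement(channel, f"{{{NOW_NS}}}status")
ET.SubElement(status, f"{{{NOW_NS}}}text").text = "Reading. Gardening. Messing with RSS."
ET.SubElement(status, f"{{{NOW_NS}}}updated").text = "2023-02-25T00:00:00Z"

print(ET.tostring(rss, encoding="unicode"))

The point isn’t this exact markup; it’s that sketching any markup at all gets those atrophied muscles moving.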


Here’s something called a bridging system, intended to “increase mutual understanding and trust across divides, creating space for productive conflict, deliberation, or cooperation.” From the paper’s abstract:

We give examples of bridging systems across three domains: recommender systems on social media, software for conducting civic forums, and human-facilitated group deliberation. We argue that these examples can be more meaningfully understood as processes for attention-allocation (as opposed to “content distribution” or “amplification”), and develop a corresponding framework to explore similarities — and opportunities for bridging — across these seemingly disparate domains. We focus particularly on the potential of bridging-based ranking to bring the benefits of offline bridging into spaces which are already governed by algorithms.
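
To make “attention-allocation” slightly more tangible, here’s a toy version of bridging-based ranking in Python. The scoring rule, a geometric mean of per-group approval rates, is my own drastic simplification for illustration, not the method from the paper.

from statistics import geometric_mean

def bridging_score(approval_by_group: dict[str, float]) -> float:
    """approval_by_group maps a group name to that group's approval rate in [0, 1]."""
    # Reward items liked across the divide: an item adored by one group and
    # shunned by the other scores lower than one modestly liked by both.
    return geometric_mean(rate + 1e-9 for rate in approval_by_group.values())

items = {
    "partisan zinger": {"group_a": 0.95, "group_b": 0.10},
    "shared local news": {"group_a": 0.50, "group_b": 0.45},
}

# A raw-engagement ranker (total approval) would put the zinger first;
# the bridging score puts the local news first.
ranking = sorted(items, key=lambda name: bridging_score(items[name]), reverse=True)
print(ranking)  # ['shared local news', 'partisan zinger']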

This choice is easy

I never want to be too scold-y, but permit me one judgment:

Anyone who adds one of those email newsletter pop-ups to a website demeans themselves and makes the world worse for everyone else.

People and organizations add them because “they work”: a website with a pop-up recruits more email subscribers than one without. If I’m running a website for, say, a non-profit devoted to nursing sick penguins, I will of course argue: “More subscribers means more donations … and think of the penguins!”

But the operator of the website for swamp restoration says the same thing. And of the website selling custom-embroidered tea towels. Think of the embroiderers!

This is a collective action problem. Any of these decisions, considered separately, is relatively inoffensive … and those penguins ARE in rough shape … but all together, they produce a web that is shockingly ugly and rude.

It runs deeper than that.

The philosopher Immanuel Kant reasoned his way into a hot-rodded version of the Golden Rule: treat humans as ends unto themselves, never means to an end.

The newsletter pop-up treats website visitors as means only — a faceless flow of interactions to be optimized, rather than a parade of individuals having real experiences in the world.

No individual in history ever said, “Wow, I’m glad this website blasted a newsletter pop-up in my face.” Certainly, no web designer ever said it, confronted with one. And yet: the pop-ups, they are blasted.

I am not insensate to the imperatives of attention and commerce. I’ve grown many email newsletters over the years, and I operate the e-commerce website for my small business. All of these things are material; directly and indirectly, they pay my bills.

Yet I’ve never blasted a newsletter pop-up, and I never will. Instead, I just make the newsletter easy to find 😇

So many choices — moral, economic, aesthetic — are vexing, ambiguous, legitimately challenging. This choice is easy. The pop-up increases newsletter subscriptions, and eventually sales: so what? Let them go. Earn that attention and business in better ways. Participate in the production of a shared space that is beautiful and respectful, rather than the opposite.

The AI stuff

Street in Moret, 1885-1895, Alfred Sisley

ChatGPT is the product that launched a thousand essays! Good: because this is exactly what we ought to be writing about, and worrying about, and arguing about. The terrain of fast-moving AI is aesthetically rich, politically fraught, economically consequential; the perfect setting for wide-ranging discussion.

Frank Lantz is an important figure in the study and practice of video games, and also simply a great humanist. His newsletter on “games, philosophy, and art in the age of AI” is newly launched, with a magnetic energy:

I feel like I’ve been training my whole life for this moment.

Midway through the first edition, Frank lays out three strong convictions about art and AI; I found all three convincing and energizing.


There’s no better guide to AI in 2023 and beyond than Jack Clark, a longtime journalist turned practitioner, first at OpenAI and now at Anthropic.

His weekly newsletter combines an insider’s view of recent advances with original sci-fi imaginings, which take the shape of tiny scintillating scenarios; dreams, provocations, warnings.

Here’s my humble addition to the “wot I did with the AI” genre:

I was recently on the hunt for a telescope that would be good for planet-viewing. I have no interest in faint stars or “challenging” targets; I just want to peep Jupiter and see its big red eye staring back.

The Google search was a riot of images, prices, capabilities, availabilities. Was I snared briefly by some zombified SEO content? Of course. Did I find my way to a few truly beautiful telescope reviews, presented on old-growth, blue-link web pages? I sure did.

In the end, I made a great selection, one that I’m happy and excited about.

In parallel, I asked ChatGPT for recommendations. Its reply was fluent and confident; it explained roughly what kind of telescope I ought to look for, then recommended three specific models.

It was basically like asking a sales associate at a physical shop.

That raises the question: do I rely on sales associates at physical shops? I absolutely do not! The very idea seems unhinged to me. What do you mean, you’re going to show me three alternatives? I want to see thirty!

You could argue that my brain has been broken by a decade of riotous googling. I guess it’s possible … but I don’t think so. I love embarking on these searches. I feel confident, capable, buoyed by the flood. If there ever really was a kind of “web surfing”, this is it.

For my part, I would not forsake the power of full-spectrum, multi-tab search for an AI’s neat recommendation, just as I would not forsake it for a human’s neat recommendation … 

 … not unless I really trusted that human: their expertise and independence. I’m thinking of those terrific telescope reviews I found, deeply nerdy, rich with context. What would it mean to trust a ChatGPT-alike in that way? It doesn’t feel possible, presently — in part because these systems are so slippery, so malleable. They have fluency, but nothing that could be called integrity.

Everything changes, of course: and just as I learned to search, confidently and capably, over the past decade and more, I’ll learn to … do something … with ChatGPT-alikes in the decade to come.

I just don’t think it will be “searching”.


Here is some compelling AI image generation: a bundle of false movie stills, The Lord of the Rings as if directed by John Boorman in 1981.

It’s interesting to observe, in YouTube’s tower of recommendations, this sub-genre taking shape. Interesting also to note that these are not “videos” in any proper sense, just slideshows. So why are they posted here? For the recommendation juice, of course!

A ground truth of the 21st century: if you want anyone to find it … put it on YouTube.

In mid-2022, I wrote

AI artists are genre artists, too. Our genre is: “I see what you did there.”

and these videos are exemplars of that genre, for better and for worse.

That’s it for February! You’ll receive my next lab newsletter on March 25.

From Oakland,

Robin
