Hi, it’s Charley, and this is Untangled, a newsletter and podcast about technology, people, and power.
Can’t afford a subscription but value Untangled’s offerings? Let me know! Just reply to this email, and I’ll add you to the paid subscription list, no questions asked.
I’m in northern Italy this week. The Dolomites are bonkers y’all.
Now on to the show!
👇 ICYMI
I interviewed Caitlin Dewey, author of the newsletter Links I Would Gchat You If We Were Friends, about using AI-generated images to grieve following her third miscarriage.
I interviewed Tamara Kneese, author of the book Death Glitch: How Techno-Solutionism Fails Us in This Life and Beyond, about what happens online when we die.
I published an essay about how AI might alter loss, grieving, and our ability to move forward.
🔗 Some Links
At this point, no one should be surprised that algorithmic systems reflect and encode social and demographic biases. (I’ve written about that here, here, and here.) But just in case, a great analysis by Rest of World shows that AI systems propagate “bias, stereotypes, and reductionism when it comes to national identities,” too.
A lot of people are freaking out about how AI might upend the U.S. election. But a great piece, aptly titled “Time to Calm Down about AI and Politics,” explains that the emerging use cases are — so far, anyway — much more mundane.
If you recently used Stable Diffusion to generate an image, you may have received an unexpected response. Why? Hackers compromised a popular graphical user interface for the model to protest art theft.
🏆Favorite Paper of the Week
My favorite paper of the week is “ChatGPT is bullshit” by Michael Townsen Hicks, James Humphries, and Joe Slater. The authors argue that “describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.” The paper further explains that calling these misrepresentations a ‘hallucination’ isn’t harmless because it “lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived.” Want to dig into this line of thinking? Read my essay “AI isn’t hallucinating. We are” or the very first issue I wrote about ChatGPT, where I explained why it’s best understood as a bullshitter.
📰 Untangle the News
The tech story of the month? Google integrated an ‘AI overview’ into its search function. Now when you search for something online, Google will offer an AI-generated response before sharing its usual list of links. Of course, since large language models (LLMs) can only provide a probabilistic guess based on their training data, the overviews offered some real nutty results. Did you know that we should all incorporate rocks into our diet and use glue to ensure the cheese sticks to the crust of our pizza?
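To see why an LLM can confidently recommend glue on pizza, it helps to remember what the model is actually doing: sampling the next word from a learned probability distribution, with nothing checking whether the result is true. Here’s a toy sketch (purely illustrative — the tiny hand-written probability table below is made up and bears no relation to Google’s actual systems):

```python
import random

# Toy "model": for a given context, a probability distribution over
# possible next tokens. A real LLM learns billions of these patterns
# from training data; none of them encode whether a claim is TRUE,
# only how likely a token is to follow the context.
next_token_probs = {
    "cheese": {"sticks": 0.55, "melts": 0.35, "glues": 0.10},
}

def sample_next(context, probs, rng=random.Random(0)):
    """Sample one next token in proportion to its learned probability."""
    dist = probs[context]
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Even the low-probability "glues" gets sampled some of the time --
# fluent output, no fact-checking step anywhere in the loop.
print(sample_next("cheese", next_token_probs))
```

That’s the whole trick: plausibility, not accuracy, which is exactly why the overviews read smoothly even when they’re wrong.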