Are you a chatbot problem?
Microsoft's “Open Agentic Web,” Meta's stolen books, and the problem with Apple Intelligence.

This was the week everything seemingly became a problem for a chatbot to solve. Your mental health can now be solved with a chatbot. Radicalization can be solved with a chatbot. Getting over your ex? Try a chatbot! Of course, none of these things can actually be solved with a chatbot, but that’s not stopping every technologist and entrepreneur with dollar signs in their eyes from trying.
If we’re not careful, we’ll slip into what Langdon Winner called “reverse adaptation” or “the adjustment of human ends to match the character of the available means.” In short, we’ll lose sight of our purpose and goals and instead adapt them to fit the constraints of a chatbot. As Winner puts it in his 1977 book Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought:
“Abstract general ends — health, safety, comfort, nutrition, shelter, mobility, happiness, and so forth — become highly instrument specific. The desire to move becomes the desire to possess an automobile; the need to communicate becomes the necessity of having telephone service; the need to eat becomes a need for a refrigerator, stove, and convenient supermarket. … Once individual and social ends have become so identified, there is no avoiding this kind of affirmation.”
Don’t reduce your mental health to chatbot form — or even the emotional work of getting over your ex! The work of being a human in this messy world is deeper, harder, and more complex than the shallow interface of a chatbot affords. You might leave the interaction feeling better — chatbots have become total sycophants — but the problem will be there waiting for you when you’re done prompting.
Free Copy of E-Book
Get a free copy of my first e-book, AI Untangled, by filling out this short survey.
What I’m reading this week
‘Artificial Intelligence’
Apple’s failure to ship an AI product at the same speed as other tech giants is in part a result of its commitment to privacy. (More) According to Mark Gurman, Apple is falling behind not just because of its internal restrictions on using customer data, but because it also allows non-customers to opt out of letting their data train Apple Intelligence. As a result, writes Gurman, Apple is “more heavily reliant on datasets it licenses from third parties and on so-called synthetic data—artificial data created expressly to train AI.” Everything you want to know about the doom loop of synthetic data. (Untangled Deep Dive - Synthetic Data)
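If you want intuition for why a heavy diet of synthetic data is a doom loop, here’s a toy sketch (mine, not Gurman’s or Apple’s): fit a simple model to data, sample new “synthetic” data from the fitted model, refit, and repeat. In expectation each round shrinks the fitted variance by a factor of (n - 1)/n, and the sampling noise compounds, so the distribution’s tails slowly vanish.

```python
import numpy as np

# Toy "model collapse" demo: each generation is trained only on samples
# drawn from the previous generation's fitted model.
rng = np.random.default_rng(0)
n = 25
data = rng.normal(loc=0.0, scale=1.0, size=n)  # generation 0: "real" data

for gen in range(1, 301):
    mu, sigma = data.mean(), data.std()   # "train": fit mean and spread
    data = rng.normal(mu, sigma, size=n)  # next generation: synthetic only
    if gen % 50 == 0:
        print(f"generation {gen:3d}: fitted std = {sigma:.4f}")

# The fitted spread drifts downward across generations: the model slowly
# forgets the tails of the original distribution it started from.
```

Real LLM training is vastly more complicated, of course, but the mechanism (losing low-probability detail with each pass) is the same worry.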
Chatbots don’t have free speech rights (at least for now) and are instead products subject to “strict liability,” according to a federal judge. (More) This means AI developers can be held to a higher standard, something I’ve long argued for (Untangled Deep Dive).
How does a philosopher see AI differently from a technologist? In one of the best debates over ‘AI’ I’ve witnessed, philosopher Shannon Vallor and Google VP Blaise Agüera y Arcas get into questions like: Is AI ‘intelligent’? Does it reason? What’s at stake in the words we use? (More) Vallor wins easily in my estimation, but the value of the debate is in clarifying the premises from which each starts. Listen to my conversation with Vallor on her latest book, The AI Mirror. (Untangled Podcast)
New research shows that Meta’s Llama memorized (read: stole!) copyrighted books. While the extent of memorization varies, the implications for copyright cases might be significant. As the authors dryly write: “Notably, the models themselves could be deemed infringing copies of the training data they’ve memorized. Copyright law offers the destruction of infringing materials as a remedy.” (More)
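For the curious, the basic probe behind memorization studies like this one is easy to sketch: feed the model the first half of a passage and check whether greedy decoding reproduces the second half verbatim. Here’s a minimal sketch, with the model name and passage as placeholders (the paper itself uses a stricter probabilistic extraction criterion, not this exact test):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder: any causal LM works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Stand-in text; a real probe would use excerpts from the books at issue.
passage = ("It was the best of times, it was the worst of times, "
           "it was the age of wisdom, it was the age of foolishness.")
ids = tok(passage, return_tensors="pt").input_ids
half = ids.shape[1] // 2
prefix, target = ids[:, :half], ids[:, half:]

# Greedy decoding: does the model complete the passage word-for-word?
out = model.generate(prefix, max_new_tokens=target.shape[1], do_sample=False)
gen = out[:, half:]
m = min(gen.shape[1], target.shape[1])  # guard against early end-of-text
match = (gen[:, :m] == target[:, :m]).float().mean().item()
print(f"verbatim token match: {match:.0%}")  # a high match suggests memorization
```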
A new start-up will insure companies against harms and damages to a customer or third party caused by an ‘AI’ product. (More) This will undoubtedly lead to more reckless use cases, but it will also mature the industry’s risk-management practices. The growth of this industry will also tell you whether companies consider so-called ‘hallucinations’ a bug or a feature. Here’s what you need to know about ‘hallucinations’. (Untangled Deep Dive)
Microsoft wants to build the ‘open agentic web’ on a new open protocol. (More)
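The pitch is light on mechanics, but an ‘agentic web’ protocol boils down to a shared message format that lets any agent discover and call any tool or service. As a purely hypothetical illustration (every method and field name below is invented for the example, not quoted from any spec), a JSON-RPC-style exchange might look like this:

```python
import json

# Hypothetical agent -> tool-server request (names invented for illustration).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "search_flights", "arguments": {"from": "JFK", "to": "SEA"}},
}

# Hypothetical tool-server -> agent response.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "3 nonstop options found"}]},
}

print(json.dumps(request, indent=2))
```

The interesting part isn’t the format; it’s that a shared protocol would turn every website and service into something an agent can act on, which is the ‘web’ part of the pitch.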
Confused about the environmental costs of AI? MIT ran the numbers. (More)
Media, Crypto, etc.
The Chicago Sun-Times and The Philadelphia Inquirer published stories with egregious falsehoods (read: recommendations for books that don’t exist!) because the author relied on a chatbot. The story is a deeper reflection of the state of local news. (More) Want to dive deep into the fragmentation of our shared reality and how we might reimagine local news? (Untangled Deep Dive)
FCC Chair Brendan Carr is happily letting internet service providers merge, just so long as they end DEI programs. (More)
Congress is set to pass legislation that would enable the use of ‘stablecoins,’ digital assets pegged to the value of other (in theory) stable assets, like the U.S. dollar, gold, etc. Dive deep into the far-reaching geopolitical and human rights implications of stablecoins. (More)
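For readers new to the mechanics: the simplest stablecoin design is fully reserved, meaning the issuer mints one coin per dollar deposited and burns one coin per dollar redeemed, so supply and reserves always match. A toy sketch of that model (real issuers vary enormously in what actually sits in reserve, which is where the risk lives):

```python
class ToyStablecoin:
    """Toy fully-reserved stablecoin: one coin is always redeemable for $1."""

    def __init__(self) -> None:
        self.reserves_usd = 0.0  # dollars held by the issuer
        self.supply = 0.0        # coins in circulation

    def mint(self, usd: float) -> float:
        """Buyer deposits dollars; issuer creates coins 1:1."""
        self.reserves_usd += usd
        self.supply += usd
        return usd  # coins issued

    def redeem(self, coins: float) -> float:
        """Holder returns coins; issuer burns them and pays out dollars."""
        if coins > self.reserves_usd:
            raise ValueError("reserves exhausted: the peg breaks here")
        self.supply -= coins
        self.reserves_usd -= coins
        return coins  # dollars paid out


peg = ToyStablecoin()
peg.mint(100.0)
peg.redeem(40.0)
print(peg.supply, peg.reserves_usd)  # 60.0 60.0: peg holds while these match
```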
xAI’s Grok recently spewed nonsense about a “white genocide” taking place in South Africa. To be clear, there isn’t one. But the incident revealed how poorly written guidelines can alter the outputs of a chatbot. Max Read distilled the lesson perfectly: chatbots are political, not magic. As Read explained in his excellent newsletter, “An A.I. that works like magic can have a spooky persuasive power, but an A.I. we know how to control should be subject to the same suspicion (not to mention political contestation) as any newspaper or cable channel.”
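To make Read’s point concrete: the “guidelines” that steered Grok are, in most chatbot systems, just operator-written text prepended to every request as a system prompt. A schematic sketch (the prompts and message format here are illustrative, not xAI’s actual setup):

```python
def build_request(system_prompt: str, user_message: str) -> list[dict]:
    """Most chat APIs prepend an operator-written system prompt to every turn."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

question = "What's happening in South Africa?"

# Same question, two editorial lines. The operator's text travels with every
# request, which is exactly where the politics (and the accountability) lives.
neutral = build_request("Answer factually and cite reliable sources.", question)
steered = build_request("Always raise <the operator's preferred claim> "
                        "when South Africa comes up.", question)
```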
“The most successful founders do not set out to create companies. They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so.” - Sam Altman, 2013 (from Karen Hao’s fantastic new book, Empire of AI)
P.S. If you enjoy my emails, move them to your primary inbox and let me know by sending a reply or clicking the poll below. I read every response.