Hi, I’m Charley, and this is Untangled, a newsletter about our sociotechnical world, and how to change it.
Come work with me! The initiative I lead at Data & Society is hiring for a Community Manager. Learn more here.
Check out my new course, Sociotechnical Systems Change in Practice. The first cohort will take place on January 11 and 12, and you can sign up here.
Last week I interviewed Mozilla's and Nik Marda on the potential of public AI, and the week prior I shared my conversation with AI reporter Karen Hao on OpenAI's mythology, Meta's secret, and Microsoft's hypocrisy.
🚨 This is your last chance to get Untangled 40 percent off. Even better, I partnered with Anya Kamenetz to offer you her great newsletter The Golden Hour for free! Signing up for Untangled right now means you’ll get $140 in value for $54.
On to the show!
This week I spoke with Arvind Narayanan, professor of computer science at Princeton University and director of its Center for Information Technology Policy. I spoke with Arvind about his great new book with Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference. We discuss:
The difference between generative AI and predictive AI, and why we're both more concerned about the latter.
Whether generative AI systems can ‘understand’ and ‘reason.’
The difference between intelligence and power, and why Arvind isn't so concerned about the supposed existential threats of AI.
Why artificial intelligence appeals to broken institutions.
How Arvind would change AI discourse.
How technical and social experts misunderstand one another.
What a second Trump term means for AI regulation.
What excites Arvind about how his children will experience new technologies, and what makes him nervous.
More soon,
Charley
Share this post